Golang in Practice: Data Types
1. Basic Data Types
Go's types fall into four categories:
- Basic types: numbers, strings, booleans
- Aggregate types: arrays, structs
- Reference types: pointers, slices, maps, functions, channels
- Interface types
1.1. Integers
The rune type is a synonym for int32 and is conventionally used to indicate that a value is a Unicode code point.
The byte type is a synonym for uint8 and emphasizes that a value is raw data rather than a quantity.
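A minimal sketch of the two aliases (values chosen only for illustration):

```go
package main

import "fmt"

func main() {
    var r rune = '世' // rune is an alias for int32: holds a Unicode code point
    var b byte = 'A' // byte is an alias for uint8: holds raw data
    fmt.Println(r, b) // prints the numeric values: 19990 65
}
```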
1.2. Floating-Point Numbers
Go has two sizes of floating-point numbers, float32 and float64. Their arithmetic follows the IEEE 754 standard, which all modern CPUs support.
The maximum float32 value is math.MaxFloat32, about 3.4e38; the smallest positive float32 is about 1.4e-45.
The maximum float64 value is math.MaxFloat64, about 1.8e308; the smallest positive float64 is about 4.9e-324.
In decimal, float32 provides roughly 6 significant digits and float64 roughly 15. Prefer float64 in the vast majority of cases: float32 accumulates error quickly.
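A small sketch of how quickly float32 error accumulates compared with float64 (the loop count is arbitrary, chosen only to make the drift visible):

```go
package main

import "fmt"

func main() {
    // 0.01 is not exactly representable in binary floating point.
    // Summing it a million times makes the rounding error visible.
    var f32 float32
    var f64 float64
    for i := 0; i < 1_000_000; i++ {
        f32 += 0.01
        f64 += 0.01
    }
    fmt.Println(f32) // noticeably off from 10000
    fmt.Println(f64) // very close to 10000
}
```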
1.3. Complex Numbers
Go has two sizes of complex numbers, complex64 and complex128, built from float32 and float64 components respectively.
The built-in complex function creates a complex number from a real and an imaginary part, while the built-in real and imag functions extract those parts:
package main

import "fmt"

var x complex128 = complex(1, 2)
var y complex128 = complex(3, 4)

func main() {
    fmt.Println(x * y)
    fmt.Println(real(x * y))
    fmt.Println(imag(x * y))
}
Output:
(-5+10i)
-5
10
1.4. Booleans
&& is logical AND (boolean "multiplication").
|| is logical OR (boolean "addition").
Both operators short-circuit: the right operand is evaluated only when the result is not already determined by the left one.
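A common use of short-circuit evaluation is guarding an index expression, sketched here:

```go
package main

import "fmt"

func main() {
    s := ""
    // && short-circuits: s[0] is evaluated only when len(s) > 0 is true,
    // so this condition never panics on the empty string.
    if len(s) > 0 && s[0] == 'x' {
        fmt.Println("starts with x")
    } else {
        fmt.Println("empty or does not start with x")
    }
}
```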
1.5. Strings
A string is an immutable sequence of bytes and may contain arbitrary data, including zero bytes. Immutability means two strings can safely share the same underlying memory, making it cheap to copy strings of any length.
The bytes inside a string cannot be modified.
String literals
A raw string literal is written with backquotes rather than double quotes. Inside a raw string literal, escape sequences are not processed: the contents are exactly what is written, including backslashes and newlines.
The only special handling is that carriage returns are deleted (newlines are kept), so the value of the same string literal is identical on all platforms.
Raw string literals are a good fit for regular expressions, HTML templates, JSON literals, command-line usage messages, and any text that spans multiple lines.
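A small sketch comparing a raw literal with its escaped equivalent (the regular-expression pattern is arbitrary):

```go
package main

import "fmt"

func main() {
    // In a raw string literal, backslashes are kept verbatim,
    // which keeps regular expressions readable.
    const pattern = `\d+\.\d+`
    // The same string written with escape sequences:
    const quoted = "\\d+\\.\\d+"
    fmt.Println(pattern == quoted) // true
}
```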
Unicode
Unicode collects all the characters in all of the world's writing systems, plus accents and other diacritical marks, control codes, and many script-specific symbols, and assigns each a standard number called a Unicode code point.
In Go terminology such a value is called a rune.
Unicode version 8 defines code points for over 120,000 characters in well over 100 languages and scripts.
The natural data type for holding a single rune is int32, which is what Go uses; the rune type is therefore an alias for int32.
A sequence of runes can be represented as a sequence of int32 values, i.e., UTF-32 or UCS-4, in which every Unicode code point has the same 32-bit encoding. This is simple and uniform, but it wastes space: most readable text is ASCII, and even the set of widely used characters numbers fewer than 65,536, which would fit in 16 bits. Hence the need for UTF-8.
UTF-8
Invented by Ken Thompson and Rob Pike.
UTF-8 is the prevailing Unicode standard: a variable-length encoding of Unicode code points, using bytes as the unit, in which each rune is encoded in 1 to 4 bytes.
The high-order bits of the first byte of a rune's encoding indicate how many bytes follow; a high-order 0 marks 7-bit ASCII.
Drawback: with a variable-length encoding, the n-th character of a string cannot be accessed directly by index.
Advantages:
- The encoding is compact, is ASCII-compatible, and is self-synchronizing: backing up at most three bytes finds the start of a character.
- UTF-8 is a prefix code, so it decodes left to right without ambiguity and needs no look-ahead.
- Searching for a rune therefore only requires searching for its own bytes, with no regard to preceding context.
Go source files are always encoded in UTF-8, and UTF-8 is the preferred encoding for text strings manipulated by Go programs.
The unicode package provides functions for working with individual runes; the unicode/utf8 package provides functions for encoding and decoding runes as UTF-8.
Strings and byte slices
Four standard packages are especially important for manipulating strings:
- strings: searching, replacing, comparing, trimming, splitting, and joining
- bytes: similar functions for manipulating byte slices ([]byte)
- strconv: conversions between strings and values of other basic types
- unicode: rune classification functions such as IsDigit and IsLetter
Modifying a string:

func StringIndex() {
    s := "abcdefg"
    // s[1] = "1"
    // Cannot assign to s[1]
    sBytes := []byte(s)
    sBytes[0] = []byte("Q")[0]
    fmt.Println(sBytes, string(sBytes))
    // [81 98 99 100 101 102 103] Qbcdefg
    sRunes := []rune(s)
    sRunes[0] = []rune("在")[0]
    fmt.Println(sRunes, string(sRunes))
    // === RUN TestStringIndex
    // [81 98 99 100 101 102 103] Qbcdefg
    // [22312 98 99 100 101 102 103] 在bcdefg
}
Converting between strings and numbers
The strconv package handles these conversions.
- Integer to string:
  1. fmt.Sprintf()
  2. strconv
1.6. Constants
A constant is an expression whose value is guaranteed to be computed at compile time rather than at run time, so the compiler knows its value. Constants are therefore necessarily of a basic type: boolean, string, or numeric.
Because the compiler knows their values, constant expressions may appear in declarations that involve types, most notably as the length of an array type:
const IPv4Len = 4
var p [IPv4Len]byte
The constant generator iota
iota creates a sequence of related values without spelling each one out explicitly. (Note the first term must stay untyped: with uint64(1) the ZiB and YiB lines would overflow and fail to compile.)
const (
    _ = 1 << (10 * iota)
    KiB // 1024
    MiB // 1048576
    GiB // 1073741824
    TiB // 1099511627776 (exceeds int32)
    PiB // 1125899906842624
    EiB // 1152921504606846976
    ZiB // too big for any integer type, but legal as an untyped constant
    YiB
)
2. Composite Data Types
2.1. Arrays
Array initialization:
func Arr() {
    arr := [3]int{1, 2, 3}
    arr0 := [...]int{1, 2, 3}
    fmt.Println(arr, arr0)
}
In the gc compiler's implementation, when an array literal has four or fewer elements, the elements are placed directly on the stack; with more than four elements, the elements are placed in the static data section and copied out at run time.
2.2. Slices
Go's reference types include slices, maps, channels, functions, and pointers. Assigning one of these copies the pointer value it contains, so after the copy both values refer to the same underlying address.
Data structure
A slice is in essence a dynamic array:
// SliceHeader is the runtime representation of a slice.
// It cannot be used safely or portably and its representation may
// change in a later release.
// Moreover, the Data field is not sufficient to guarantee the data
// it references will not be garbage collected, so programs must keep
// a separate, correctly typed pointer to the underlying data.
type SliceHeader struct {
    Data uintptr
    Len  int
    Cap  int
}
Copying
Deep copy: copy(sliceA, sliceB) copies the elements into a separate backing array. Shallow copy: sliceA = sliceB copies only the slice header, so both slices share the same underlying array.
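The difference can be sketched as follows:

```go
package main

import "fmt"

func main() {
    src := []int{1, 2, 3}

    shallow := src // copies only the slice header; shares the backing array
    deep := make([]int, len(src))
    copy(deep, src) // copies the elements into a separate backing array

    src[0] = 99
    fmt.Println(shallow[0]) // 99: sees the change through the shared array
    fmt.Println(deep[0])    // 1: unaffected
}
```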
Passing slices as function parameters
All parameter passing in Go is by value; for map, channel, and slice arguments, the value that is copied contains a pointer, so the copy aliases the same underlying data.
When a slice is passed into a function, its header is duplicated, and the function can modify the original elements through the copied header. But once append triggers a reallocation, the copied header points at a new backing array, and subsequent writes to its elements no longer affect the original slice.
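This decoupling after reallocation can be sketched with a hypothetical helper (the name modify is illustrative):

```go
package main

import "fmt"

func main() {
    modify := func(s []int) {
        s[0] = 100       // writes through the copied header to the shared array
        s = append(s, 4) // len == cap here, so append allocates a new array
        s[1] = 200       // modifies only the new array; the caller cannot see it
    }
    s := make([]int, 3, 3) // len == cap, so any append must reallocate
    modify(s)
    fmt.Println(s) // [100 0 0]
}
```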
Initialization
1. Take part of an array or another slice with an index expression:
slice0 := s[:]
2. Initialize a new slice with a slice literal:
slice1 := []int{1, 2, 3}
3. Create a slice with the make keyword:
slice2 := make([]int, 3)
Growth
func CapOfSlice() {
    s := []int{}
    // len: 0 cap: 0
    preCap := 0
    for i := 0; i < 1000; i++ {
        s = append(s, i)
        if cap(s) != preCap {
            fmt.Println("len: ", len(s), " cap: ", cap(s))
            preCap = cap(s)
        }
    }
    // len: 1 cap: 1
    // len: 2 cap: 2
    // len: 3 cap: 4
    // len: 5 cap: 8
    // len: 9 cap: 16
    // len: 17 cap: 32
    // len: 33 cap: 64
    // len: 65 cap: 128
    // len: 129 cap: 256
    // len: 257 cap: 512
    // len: 513 cap: 1024
}
1. If the requested capacity is more than double the current capacity, the requested capacity is used.
2. Otherwise, if the current length is less than 1024, the capacity is doubled.
3. Otherwise, the capacity grows by 25% per iteration until the new capacity reaches the requested capacity.
(These rules match the growslice source quoted below; note the threshold check is on the old length, not the capacity. Go 1.18 and later lower the threshold to 256 and use a smoother growth curve.)
// growslice handles slice growth during append.
// It is passed the slice element type, the old slice, and the desired new minimum capacity,
// and it returns a new slice with at least that capacity, with the old data
// copied into it.
// The new slice's length is set to the old slice's length,
// NOT to the new requested capacity.
// This is for codegen convenience. The old slice's length is used immediately
// to calculate where to write new values during an append.
// TODO: When the old backend is gone, reconsider this decision.
// The SSA backend might prefer the new length or to return only ptr/cap and save stack space.
func growslice(et *_type, old slice, cap int) slice {
    if raceenabled {
        callerpc := getcallerpc()
        racereadrangepc(old.array, uintptr(old.len*int(et.size)), callerpc, funcPC(growslice))
    }
    if msanenabled {
        msanread(old.array, uintptr(old.len*int(et.size)))
    }

    if cap < old.cap {
        panic(errorString("growslice: cap out of range"))
    }

    if et.size == 0 {
        // append should not create a slice with nil pointer but non-zero len.
        // We assume that append doesn't need to preserve old.array in this case.
        return slice{unsafe.Pointer(&zerobase), old.len, cap}
    }

    newcap := old.cap
    doublecap := newcap + newcap
    if cap > doublecap {
        newcap = cap
    } else {
        if old.len < 1024 {
            newcap = doublecap
        } else {
            // Check 0 < newcap to detect overflow
            // and prevent an infinite loop.
            for 0 < newcap && newcap < cap {
                newcap += newcap / 4
            }
            // Set newcap to the requested cap when
            // the newcap calculation overflowed.
            if newcap <= 0 {
                newcap = cap
            }
        }
    }

    var overflow bool
    var lenmem, newlenmem, capmem uintptr
    // Specialize for common values of et.size.
    // For 1 we don't need any division/multiplication.
    // For sys.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
    // For powers of 2, use a variable shift.
    switch {
    case et.size == 1:
        lenmem = uintptr(old.len)
        newlenmem = uintptr(cap)
        capmem = roundupsize(uintptr(newcap))
        overflow = uintptr(newcap) > maxAlloc
        newcap = int(capmem)
    case et.size == sys.PtrSize:
        lenmem = uintptr(old.len) * sys.PtrSize
        newlenmem = uintptr(cap) * sys.PtrSize
        capmem = roundupsize(uintptr(newcap) * sys.PtrSize)
        overflow = uintptr(newcap) > maxAlloc/sys.PtrSize
        newcap = int(capmem / sys.PtrSize)
    case isPowerOfTwo(et.size):
        var shift uintptr
        if sys.PtrSize == 8 {
            // Mask shift for better code generation.
            shift = uintptr(sys.Ctz64(uint64(et.size))) & 63
        } else {
            shift = uintptr(sys.Ctz32(uint32(et.size))) & 31
        }
        lenmem = uintptr(old.len) << shift
        newlenmem = uintptr(cap) << shift
        capmem = roundupsize(uintptr(newcap) << shift)
        overflow = uintptr(newcap) > (maxAlloc >> shift)
        newcap = int(capmem >> shift)
    default:
        lenmem = uintptr(old.len) * et.size
        newlenmem = uintptr(cap) * et.size
        capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
        capmem = roundupsize(capmem)
        newcap = int(capmem / et.size)
    }

    // The check of overflow in addition to capmem > maxAlloc is needed
    // to prevent an overflow which can be used to trigger a segfault
    // on 32bit architectures with this example program:
    //
    // type T [1<<27 + 1]int64
    //
    // var d T
    // var s []T
    //
    // func main() {
    //     s = append(s, d, d, d, d)
    //     print(len(s), "\n")
    // }
    if overflow || capmem > maxAlloc {
        panic(errorString("growslice: cap out of range"))
    }

    var p unsafe.Pointer
    if et.ptrdata == 0 {
        p = mallocgc(capmem, nil, false)
        // The append() that calls growslice is going to overwrite from old.len to cap (which will be the new length).
        // Only clear the part that will not be overwritten.
        memclrNoHeapPointers(add(p, newlenmem), capmem-newlenmem)
    } else {
        // Note: can't use rawmem (which avoids zeroing of memory), because then GC can scan uninitialized memory.
        p = mallocgc(capmem, et, true)
        if lenmem > 0 && writeBarrier.enabled {
            // Only shade the pointers in old.array since we know the destination slice p
            // only contains nil pointers because it has been cleared during alloc.
            bulkBarrierPreWriteSrcOnly(uintptr(p), uintptr(old.array), lenmem)
        }
    }
    memmove(p, old.array, lenmem)

    return slice{p, old.len, newcap}
}
2.3. map
A Go map is implemented as a hash table that resolves collisions by chaining. When a map grows, the old buckets are not replaced immediately: the entries are evacuated incrementally, and the old memory is eventually released by the GC.
Why a struct stored as a map value cannot be modified in place
type Node struct {
    ID   int
    Name string
}

func MapPointer() {
    nodes := make(map[int]*Node)
    nodes[1] = &Node{123, "test"}
    nodes[1].Name = "new_name" // fine: the map value is a pointer
}

func MapNoPointer() {
    nodes := make(map[int]Node)
    nodes[1] = Node{123, "test"}
    // nodes[1].Name = "new_name"
    // Cannot assign to nodes[1].Name
}
Map values in Go are not addressable: when the map grows, key/value pairs may be moved, so the address of a value can change.
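The standard workaround is to read the value out, modify the copy, and write it back, sketched here (the Node type mirrors the one above):

```go
package main

import "fmt"

type Node struct {
    ID   int
    Name string
}

func main() {
    nodes := make(map[int]Node)
    nodes[1] = Node{123, "test"}

    // Map values are not addressable, so: read, modify, write back.
    n := nodes[1]
    n.Name = "new_name"
    nodes[1] = n

    fmt.Println(nodes[1].Name) // new_name
}
```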
Hash function
After hashing a key, Go's map splits the result into high-order and low-order bits: the low-order bits select a bucket in the buckets array, and the high-order bits locate the key within that bucket's chain.
2.4. Structs
2.5. JSON
2.6. Text and HTML templates
2.7. channel
Source:
type hchan struct {
    qcount   uint           // total data in the queue (len)
    dataqsiz uint           // size of the circular queue (cap)
    buf      unsafe.Pointer // points to an array of dataqsiz elements (the channel's buffer)
    elemsize uint16         // size of an element in the channel
    closed   uint32         // whether the channel has been closed
    elemtype *_type         // element type
    sendx    uint           // send index: next write position
    recvx    uint           // receive index: next read position
    recvq    waitq          // list of recv waiters: goroutines blocked on receive, a doubly linked list
    sendq    waitq          // list of send waiters: goroutines blocked on send, a doubly linked list

    // lock protects all fields in hchan, as well as several
    // fields in sudogs blocked on this channel.
    //
    // Do not change another G's status while holding this lock
    // (in particular, do not ready a G), as this can deadlock
    // with stack shrinking.
    lock mutex // guards concurrent access
}
- buf: the space where a buffered channel stores its buffered data, a circular queue (a ring buffer backed by an array).
- sendx and recvx: the send and receive indices into the circular buffer buf.
- sendq and recvq: two doubly linked queues of sudog structs, representing the goroutines blocked on send and receive respectively.
- lock: a mutex that locks hchan during send and receive operations.
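The fields above surface indirectly through len and cap; a small sketch of the buffer and indices in action:

```go
package main

import "fmt"

func main() {
    ch := make(chan int, 2) // dataqsiz == 2: a two-element circular buffer
    ch <- 1                 // stored at sendx; qcount becomes 1
    ch <- 2                 // buffer now full; a third send would block

    fmt.Println(len(ch), cap(ch)) // 2 2 (qcount and dataqsiz)
    fmt.Println(<-ch)             // 1: FIFO, taken from recvx
    close(ch)
    fmt.Println(<-ch) // 2: a closed channel still yields its buffered values
}
```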
refs
"The Go Programming Language", Alan A. A. Donovan & Brian W. Kernighan
https://go101.org/article/string.html