Go Language in Practice
Keywords
break default func interface select
case defer go map struct
chan else goto package switch
const fallthrough if range type
continue for import return var
Data Structures
Arrays
Array initialization:
func Arr() {
	arr := [3]int{1, 2, 3}
	arr0 := [...]int{1, 2, 3}
	fmt.Println(arr, arr0)
}
When an array literal has four or fewer elements, the elements are placed directly on the stack;
when it has more than four elements, they are stored in the static data section and copied out at runtime.
Slices
Data structure
// SliceHeader is the runtime representation of a slice.
// It cannot be used safely or portably and its representation may
// change in a later release.
// Moreover, the Data field is not sufficient to guarantee the data
// it references will not be garbage collected, so programs must keep
// a separate, correctly typed pointer to the underlying data.
type SliceHeader struct {
	Data uintptr
	Len  int
	Cap  int
}
Initialization
1. Take part of an existing array or slice with a slicing expression:
slice0 := s[:]
2. Initialize a new slice with a composite literal:
slice1 := []int{1, 2, 3}
3. Create a slice with the make keyword:
slice2 := make([]int, 3)
Growth
func CapOfSlice() {
	s := []int{}
	// len: 0 cap: 0
	preCap := 0
	for i := 0; i < 1000; i++ {
		s = append(s, i)
		if cap(s) != preCap {
			fmt.Println("len: ", len(s), " cap: ", cap(s))
			preCap = cap(s)
		}
	}
	// len: 1 cap: 1
	// len: 2 cap: 2
	// len: 3 cap: 4
	// len: 5 cap: 8
	// len: 9 cap: 16
	// len: 17 cap: 32
	// len: 33 cap: 64
	// len: 65 cap: 128
	// len: 129 cap: 256
	// len: 257 cap: 512
	// len: 513 cap: 1024
}
1. If the desired capacity is more than double the current capacity, the desired capacity is used directly;
2. Otherwise, if the current slice length is less than 1024, the capacity is doubled;
3. Otherwise, the capacity grows by 25% per iteration until the new capacity exceeds the desired capacity.
// growslice handles slice growth during append.
// It is passed the slice element type, the old slice, and the desired new minimum capacity,
// and it returns a new slice with at least that capacity, with the old data
// copied into it.
// The new slice's length is set to the old slice's length,
// NOT to the new requested capacity.
// This is for codegen convenience. The old slice's length is used immediately
// to calculate where to write new values during an append.
// TODO: When the old backend is gone, reconsider this decision.
// The SSA backend might prefer the new length or to return only ptr/cap and save stack space.
func growslice(et *_type, old slice, cap int) slice {
	if raceenabled {
		callerpc := getcallerpc()
		racereadrangepc(old.array, uintptr(old.len*int(et.size)), callerpc, funcPC(growslice))
	}
	if msanenabled {
		msanread(old.array, uintptr(old.len*int(et.size)))
	}
	if cap < old.cap {
		panic(errorString("growslice: cap out of range"))
	}
	if et.size == 0 {
		// append should not create a slice with nil pointer but non-zero len.
		// We assume that append doesn't need to preserve old.array in this case.
		return slice{unsafe.Pointer(&zerobase), old.len, cap}
	}
	newcap := old.cap
	doublecap := newcap + newcap
	if cap > doublecap {
		newcap = cap
	} else {
		if old.len < 1024 {
			newcap = doublecap
		} else {
			// Check 0 < newcap to detect overflow
			// and prevent an infinite loop.
			for 0 < newcap && newcap < cap {
				newcap += newcap / 4
			}
			// Set newcap to the requested cap when
			// the newcap calculation overflowed.
			if newcap <= 0 {
				newcap = cap
			}
		}
	}
	var overflow bool
	var lenmem, newlenmem, capmem uintptr
	// Specialize for common values of et.size.
	// For 1 we don't need any division/multiplication.
	// For sys.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
	// For powers of 2, use a variable shift.
	switch {
	case et.size == 1:
		lenmem = uintptr(old.len)
		newlenmem = uintptr(cap)
		capmem = roundupsize(uintptr(newcap))
		overflow = uintptr(newcap) > maxAlloc
		newcap = int(capmem)
	case et.size == sys.PtrSize:
		lenmem = uintptr(old.len) * sys.PtrSize
		newlenmem = uintptr(cap) * sys.PtrSize
		capmem = roundupsize(uintptr(newcap) * sys.PtrSize)
		overflow = uintptr(newcap) > maxAlloc/sys.PtrSize
		newcap = int(capmem / sys.PtrSize)
	case isPowerOfTwo(et.size):
		var shift uintptr
		if sys.PtrSize == 8 {
			// Mask shift for better code generation.
			shift = uintptr(sys.Ctz64(uint64(et.size))) & 63
		} else {
			shift = uintptr(sys.Ctz32(uint32(et.size))) & 31
		}
		lenmem = uintptr(old.len) << shift
		newlenmem = uintptr(cap) << shift
		capmem = roundupsize(uintptr(newcap) << shift)
		overflow = uintptr(newcap) > (maxAlloc >> shift)
		newcap = int(capmem >> shift)
	default:
		lenmem = uintptr(old.len) * et.size
		newlenmem = uintptr(cap) * et.size
		capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
		capmem = roundupsize(capmem)
		newcap = int(capmem / et.size)
	}
	// The check of overflow in addition to capmem > maxAlloc is needed
	// to prevent an overflow which can be used to trigger a segfault
	// on 32bit architectures with this example program:
	//
	//	type T [1<<27 + 1]int64
	//
	//	var d T
	//	var s []T
	//
	//	func main() {
	//		s = append(s, d, d, d, d)
	//		print(len(s), "\n")
	//	}
	if overflow || capmem > maxAlloc {
		panic(errorString("growslice: cap out of range"))
	}
	var p unsafe.Pointer
	if et.ptrdata == 0 {
		p = mallocgc(capmem, nil, false)
		// The append() that calls growslice is going to overwrite from old.len to cap (which will be the new length).
		// Only clear the part that will not be overwritten.
		memclrNoHeapPointers(add(p, newlenmem), capmem-newlenmem)
	} else {
		// Note: can't use rawmem (which avoids zeroing of memory), because then GC can scan uninitialized memory.
		p = mallocgc(capmem, et, true)
		if lenmem > 0 && writeBarrier.enabled {
			// Only shade the pointers in old.array since we know the destination slice p
			// only contains nil pointers because it has been cleared during alloc.
			bulkBarrierPreWriteSrcOnly(uintptr(p), uintptr(old.array), lenmem)
		}
	}
	memmove(p, old.array, lenmem)
	return slice{p, old.len, newcap}
}
string
A string in Go is essentially a read-only array of bytes.
// StringHeader is the runtime representation of a string.
// It cannot be used safely or portably and its representation may
// change in a later release.
// Moreover, the Data field is not sufficient to guarantee the data
// it references will not be garbage collected, so programs must keep
// a separate, correctly typed pointer to the underlying data.
type StringHeader struct {
	Data uintptr
	Len  int
}
// stringHeader is a safe version of StringHeader used within this package.
type stringHeader struct {
	Data unsafe.Pointer
	Len  int
}
map
Common Keywords
make-new
- make:
make also allocates memory, but unlike new it is used only to create channels, maps, and slices, and it returns a value of one of those three types themselves rather than a pointer to them. Because these three are reference types, there is no need to return a pointer. Note that since they are reference types they must be initialized, and not merely to their zero value, which is where make differs from new.
func make(t Type, size ...IntegerType) Type
- new:
new takes a single argument, a type; it allocates memory and returns a pointer to that type's memory. Note that it also zeroes the allocated memory, i.e. sets it to the type's zero value.
// The new built-in function allocates memory. The first argument is a type,
// not a value, and the value returned is a pointer to a newly
// allocated zero value of that type.
func new(Type) *Type
Both allocate memory (on the heap), but make is used only to initialize slices, maps, and channels (to non-zero values), while new allocates memory for any type and zeroes it, so we can choose whichever fits the need.
make returns one of those three reference types itself; new returns a pointer to the type.
for-range
Classic loop
Range loop
Arrays
map
Strings
A string is a read-only slice over an underlying byte array.
During iteration, the bytes at each index are decoded into a rune, so the values obtained when ranging over a string are variables of type rune.
func RangeString() {
	s := "fhjfhjdf在"
	for i, val := range s {
		fmt.Println(i, val)
	}
	fmt.Println(len(s))
}
// output
0 102
1 104
2 106
3 102
4 104
5 106
6 100
7 102
8 22312
11
channel
select
select is a control structure similar to switch. Unlike switch, although select also has multiple cases, the expressions in those cases must all be channel send or receive operations.
1. select can perform non-blocking sends and receives on channels;
2. when multiple channels are ready at the same time, select picks one of the ready cases at random;
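A default case is what makes the channel operations in point 1 non-blocking: it runs whenever no other case is ready. A minimal sketch (tryRecv is an illustrative helper, not a standard API):

```go
package main

import "fmt"

// tryRecv attempts a non-blocking receive: the default case
// runs immediately when no value is available on ch.
func tryRecv(ch chan int) string {
	select {
	case v := <-ch:
		return fmt.Sprintf("received %d", v)
	default:
		return "no value ready"
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(tryRecv(ch)) // no value ready
	ch <- 42
	fmt.Println(tryRecv(ch)) // received 42
}
```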
defer
Commonly used to close file descriptors, close database connections, and unlock resources.
defer is implemented jointly by the compiler and the runtime.
Notes:
1. When deferred calls are invoked, and the execution order when defer is used multiple times;
Stack (LIFO) order, so a defer registered later runs first: each later defer is prepended to the front of the goroutine's _defer list, and the runtime executes the runtime._defer records from front to back.
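The LIFO order can be made visible by recording when each deferred call runs. A sketch (run and its order slice are illustrative names, not part of any API):

```go
package main

import "fmt"

func run() []string {
	var order []string
	func() {
		for i := 0; i < 3; i++ {
			// Each deferred call is prepended to the goroutine's
			// _defer list, so they execute in LIFO order at return.
			defer func(n int) {
				order = append(order, fmt.Sprintf("defer %d", n))
			}(i) // i is copied into n at defer time
		}
		order = append(order, "body")
	}()
	return order
}

func main() {
	fmt.Println(run()) // [body defer 2 defer 1 defer 0]
}
```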
func DeferFunc1(i int) (t int) {
	t = i
	defer func() {
		t += 3
		fmt.Println("t in defer1: ", t)
	}()
	return t
	// t in defer1: 4
	// t in func result: 4
}
func DeferFunc2(i int) int {
	t := i
	defer func() {
		t += 3
		fmt.Println("t in defer2: ", t)
	}()
	return t
}
// t in defer2: 4
// t in func result: 1
The function passed to defer is not executed when the enclosing block's scope ends; it runs only right before the current function or method returns. In DeferFunc1 the return value t is named, so the deferred closure modifies the value that is actually returned (4); in DeferFunc2 the return value is unnamed, so t has already been copied into the return slot before the defer runs, and the mutation is invisible to the caller (1).
2. When defer passes arguments by value, they are evaluated eagerly, which can produce unexpected results;
defer copies the outer arguments referenced by the deferred call immediately, at the point where the defer statement executes.
Fix: pass an anonymous function (closure) to defer.
func VarInDefer1() {
	var i = 1
	defer fmt.Println("result: ", func() int { return i * 2 }())
	i++
}
// output
result: 2
// pass an anonymous function to defer
func VarInDefer() {
	var i = 1
	defer func() {
		fmt.Println("result: ", func() int { return i * 2 }())
	}()
	i++
}
// output
result: 4
return executes before defer:
func DeferTiming() {
	startedAt := time.Now()
	defer func() { fmt.Println("defer time: ", time.Since(startedAt)) }()
	time.Sleep(time.Second)
	fmt.Println("return time: ", time.Since(startedAt))
	// return time: 1.003767335s
	// defer time: 1.003916954s
}
func DeferTiming1() {
	startedAt := time.Now()
	defer fmt.Println("defer time: ", time.Since(startedAt))
	time.Sleep(time.Second)
	fmt.Println("return time: ", time.Since(startedAt))
	// return time: 1.002410955s
	// defer time: 205ns
}
Function arguments are evaluated in advance: when runtime.deferproc creates a new deferred call it copies the function's arguments immediately; they are not evaluated when the deferred call actually executes.
panic-recover
panic only triggers the deferred calls of the current goroutine;
recover takes effect only when called inside a deferred function;
panic may be called again, nested, inside a deferred function;
func DeferPanicTest() {
	defer recover() // no effect: recover must be called inside a deferred function, not be the deferred call itself
	panic("test panic")
}
// panic
func DeferPanicTest1() {
	defer func() {
		fmt.Println("recover......")
		recover()
	}()
	panic("test panic")
}
// recover......
Recover is a built-in function that regains control of a panicking goroutine. Recover is only useful inside deferred functions. During normal execution, a call to recover will return nil and have no other effect. If the current goroutine is panicking, a call to recover will capture the value given to panic and resume normal execution.
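A sketch of the pattern the quote describes: a deferred closure calls recover, captures the value given to panic, and lets the function return normally (safeDiv and its named return err are illustrative, not a standard API):

```go
package main

import "fmt"

func safeDiv(a, b int) (result int, err error) {
	defer func() {
		// recover returns the value passed to panic,
		// or nil when the goroutine is not panicking.
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return a / b, nil // panics at runtime when b == 0
}

func main() {
	r, err := safeDiv(10, 2)
	fmt.Println(r, err) // 5 <nil>
	r, err = safeDiv(1, 0)
	fmt.Println(r, err) // 0 recovered: runtime error: integer divide by zero
}
```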
Functions
Arguments are passed on the stack, pushed from right to left;
Return values are also passed on the stack, in space pre-allocated by the caller;
Function calls always pass by value: the callee copies its arguments before operating on them;