
Go memory model for concurrency

2022-07-29 04:44:00 Rainy days on the sun

A more complete and detailed version of this article is on my personal blog.

1. Preamble

In the early days, CPUs executed machine instructions on a single core. As technology moved on, a single core could no longer keep up with our ever-growing appetite, so multi-core CPUs arrived. Programming languages were not to be left behind: they began exploiting multi-core CPUs and moving toward parallelism.

Go was born straight into the multi-core era. Its designers drew on the strengths of several languages and built a concurrency mechanism into the language itself, which is why Go carries the title of being "naturally highly concurrent".

2. Memory model

2.1 What is a memory model?

In a multi-core, multi-threaded setting, a memory model defines, in a uniform way, how the different CPUs (or threads) interact with memory.

2.2 What memory models are there?

The multithreaded (shared-memory) model, the message-passing model, the sequentially consistent memory model, and so on.

3. Back to the topic

This article is only about Go's concurrency memory model, nothing else (because that is all I understand).

Any talk about Go concurrency has to start with the Goroutine.

4. What is a Goroutine?

The Goroutine is Go's own concurrency construct. The name is a play on "coroutine"; in effect it is a lightweight thread. We all know that every OS thread has a fixed stack size, typically 2MB by default. A Goroutine has a stack too, but it defaults to just 2KB or 4KB. Even in an age of plentiful memory, small is beautiful.

A thread's fixed stack size causes two problems:

  1. For the many threads that need only a little stack space, it wastes a great deal of memory
  2. For the few threads that need a very large stack, it creates a risk of stack overflow

How can this be solved?

Either shrink the fixed stack size to improve space utilization, or enlarge it to allow deeper recursive calls; you cannot have both at once.

This is where the Goroutine stands out. As mentioned above, a Goroutine's stack defaults to 2KB or 4KB, so it occupies very little memory at startup. When deep recursion exhausts the current stack, the runtime grows (and later shrinks) the stack on demand; in the mainstream implementation the stack can reach about 1GB. The problem threads cannot solve, the Goroutine solves, with a proud look on its face. In theory you can run thousands upon thousands of them. Pretty great. And how do you start one? All it takes is the go keyword.
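Here is a rough sketch of just how cheap that is (sync.WaitGroup, which section 6.4 covers, is used here only to wait for everything to finish):

package main

import "sync"

func main() {
	var wg sync.WaitGroup

	// Each of these 10,000 Goroutines starts with only a few KB of stack,
	// so launching all of them is fast and uses little memory.
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func() { // the go keyword is all it takes to start one
			defer wg.Done()
		}()
	}
	wg.Wait()
	println("all 10,000 Goroutines finished")
}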

Go ships with its own scheduler for Goroutines. Scheduling is semi-preemptive: when the current Goroutine blocks, the scheduler steps in and switches to another one. (A bit like banter between guys: you can't do it? then someone else will.)
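A tiny sketch of that idea (time.Sleep here just stands in for any blocking operation):

package main

import "time"

func main() {
	go func() {
		println("running while main is blocked")
	}()

	// Sleeping blocks the main Goroutine; while it is blocked, the scheduler
	// lets the Goroutine above run before the program exits.
	time.Sleep(100 * time.Millisecond)
}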

Enough idle chatter; too much of it does no good~

5. Atomic operations

The problem shows up under concurrency: multiple concurrent entities compete for the same shared data. How do we deal with that?

5.1 Lock

One answer is a mutex. Go's sync package provides two locks, sync.Mutex and sync.RWMutex. Whenever a Goroutine operates on the shared resource, whether updating, deleting, or reading it, it takes the lock first.

package main

import (
	"fmt"
	"sync"
)

var (
	m sync.Mutex
	v int
)

func do(wg *sync.WaitGroup) {
	defer wg.Done()

	for i := 0; i <= 100; i++ {
		m.Lock() // only one Goroutine may update v at a time
		v++
		m.Unlock()
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go do(&wg)
	go do(&wg)
	wg.Wait()

	fmt.Println(v)
}

5.2 The sync/atomic package

Under heavy concurrency and large amounts of data, locking degrades performance. For that case Go provides the sync/atomic package, with built-in atomic operations on numeric shared values. Couldn't be handier~

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var score uint64

func do(wg *sync.WaitGroup) {
	defer wg.Done()

	var i uint64
	for i = 0; i <= 100; i++ {
		atomic.AddUint64(&score, i) // lock-free atomic addition
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)

	go do(&wg)
	go do(&wg)

	wg.Wait()
	fmt.Println(score)
}

If you have ever used sync.Once to implement the singleton pattern, you will find that it, too, is built on atomic.
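As a minimal sketch of that pattern (the config type and loadConfig function are made-up names, used only for illustration):

package main

import (
	"fmt"
	"sync"
)

type config struct {
	addr string
}

var (
	once sync.Once
	c    *config
)

// loadConfig initializes the singleton exactly once, no matter how many
// Goroutines call it concurrently.
func loadConfig() *config {
	once.Do(func() {
		fmt.Println("initializing (this prints only once)")
		c = &config{addr: "localhost:8080"}
	})
	return c
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(loadConfig().addr)
	}
}

And here is the relevant part of the standard library's Once itself (the comments below are from the Go source):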

// A Once must not be copied after first use.
type Once struct {
	// done indicates whether the action has been performed.
	// It is first in the struct because it is used in the hot path.
	// The hot path is inlined at every call site.
	// Placing done first allows more compact instructions on some architectures (amd64/386),
	// and fewer instructions (to calculate offset) on other architectures.
	done uint32
	m    Mutex
}

// Do calls the function f if and only if Do is being called for the
// first time for this instance of Once. In other words, given
// var once Once
// if once.Do(f) is called multiple times, only the first call will invoke f,
// even if f has a different value in each invocation. A new instance of
// Once is required for each function to execute.
//
// Do is intended for initialization that must be run exactly once. Since f
// is niladic, it may be necessary to use a function literal to capture the
// arguments to a function to be invoked by Do:
// config.once.Do(func() { config.init(filename) })
//
// Because no call to Do returns until the one call to f returns, if f causes
// Do to be called, it will deadlock.
//
// If f panics, Do considers it to have returned; future calls of Do return
// without calling f.
//
func (o *Once) Do(f func()) {
	// Note: Here is an incorrect implementation of Do:
	//
	// if atomic.CompareAndSwapUint32(&o.done, 0, 1) {
	// 	f()
	// }
	//
	// Do guarantees that when it returns, f has finished.
	// This implementation would not implement that guarantee:
	// given two simultaneous calls, the winner of the cas would
	// call f, and the second would return immediately, without
	// waiting for the first's call to f to complete.
	// This is why the slow path falls back to a mutex, and why
	// the atomic.StoreUint32 must be delayed until after f returns.

	if atomic.LoadUint32(&o.done) == 0 {
		// Outlined slow-path to allow inlining of the fast-path.
		o.doSlow(f)
	}
}
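To make the compare-and-swap from those comments concrete, here is a small standalone sketch (not the standard library's code): several Goroutines race to flip a flag, and exactly one CompareAndSwapUint32 call succeeds. As the comments above explain, a bare CAS like this is not enough to implement Once, because the losers return before the winner has finished running f.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var done uint32
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Atomically change done from 0 to 1; only the first caller succeeds.
			if atomic.CompareAndSwapUint32(&done, 0, 1) {
				fmt.Println("goroutine", id, "won the CAS")
			}
		}(i)
	}
	wg.Wait()
}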

6. The sequentially consistent memory model

Goroutines run asynchronously. How do we make multiple Goroutines execute in a defined order?

Look at the code below

package main

func main() {
	go func() {
		println("Will I execute?")
	}()
}

Will this program print that line?

Anyone who writes Go knows that the main function runs in a Goroutine of its own. Once the main Goroutine finishes, the whole process exits immediately without waiting for any other Goroutine, so that line is never printed.

So how do we get "Will I execute?" to actually print?

There are many ways; here are the ones that come to mind (probably not all of them, corrections welcome)~

6.1 The first: a blocking for loop

This one is a bit unfriendly: it blocks forever, until the end of time~

package main

func main() {
	go func() {
		println("Will I execute?")
	}()

	for {
	}
}

6.2 The second: a mutex

package main

import "sync"

func main() {
	var mu sync.Mutex

	mu.Lock()
	go func() {
		println("Will I execute?")
		mu.Unlock()
	}()
	mu.Lock() // blocks until the Goroutine above calls Unlock
	println("I will definitely run...")
}

6.3 The third: a channel

Go's built-in channels come in two kinds: unbuffered and buffered. On an unbuffered channel, a send cannot complete until another Goroutine is ready to receive, so the two sides synchronize with each other; in effect, an unbuffered channel is synchronous. For a more detailed treatment of channels, see the relevant documentation.

Using an unbuffered channel:

package main

func main() {
	c := make(chan int)

	go func() {
		println("Will I execute?")
		c <- 1
	}()
	<-c // blocks until the Goroutine above sends
}

6.4 The fourth: sync.WaitGroup

The sync package's WaitGroup lets one Goroutine wait until other Goroutines have finished before moving on. In code:

package main

import "sync"

func main() {
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		println("Will I execute?")
		wg.Done()
	}()
	wg.Wait() // blocks until Done has been called once
}

That is all for this article.

Copyright notice: this article was written by [Rainy days on the sun]; please include a link to the original when reposting.
https://yzsam.com/2022/210/202207290432533623.html