
[Go Language Learning] - Concurrent Programming

2022-06-11 00:17:00 Game programming

Concurrent programming


Reference blog
Concurrency in Go is implemented with user-level threads. In Java you have to maintain your own thread pool and handle scheduling and context switching yourself; Go instead mainly relies on goroutines to manage concurrency. Goroutines are more lightweight than kernel threads, so you can create thousands of them, scheduling is handled by the Go runtime, and channels let goroutines communicate with each other.
Closures are often used in concurrent programming, but note that if you want the value of an outer variable as it was at call time, you must pass it in as a parameter instead of relying on the closure, because the outer variable may already have changed by the time the goroutine runs.

  • goroutine
    > Add the go keyword before a function call to create a goroutine for that function.
    > The goroutine's task ends when its function returns, and when the main function ends, all goroutines end with it.
package main

import "fmt"

// func hello(i int) {
//     fmt.Println("hello", i)
// }

// After the program starts, a main goroutine executes the main function.
func main() {
    // Start a separate goroutine for each task.
    for i := 0; i < 100; i++ {
        // go hello(i)

        // If you do not pass in the parameter i, a closure is formed and the
        // anonymous function looks up i in the enclosing scope each time.
        // Because starting a goroutine takes time, i may already have changed
        // by the time the anonymous function prints it.
        // go func() {
        //     fmt.Println(i)
        // }()
        go func(i int) {
            fmt.Println(i)
        }(i)
    }
    fmt.Println("main")
    // When the main function returns, all goroutines it started end as well.
}
  • WaitGroup
    sync.WaitGroup is used for synchronization: each time you start a new goroutine, add one to the counter; when the concurrently executed function completes, decrement the counter by one; finally, Wait() blocks until the counter reaches 0, so the program exits only after every goroutine has finished. A WaitGroup is essentially a value-type struct, so when passing it to a function you must pass a pointer (see the sketch after the example below).
package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

// Produce random numbers
// func f() {
//     // Seed the generator so each run is different.
//     rand.Seed(time.Now().UnixNano())
//     for i := 0; i < 5; i++ {
//         r1 := rand.Int()    // int
//         r2 := rand.Intn(10) // random number less than 10
//         fmt.Println(r1, r2)
//     }
// }

func f1(i int) {
    // Decrement the counter by one when the function completes.
    defer wg.Done()
    time.Sleep(time.Duration(rand.Intn(300)) * time.Millisecond)
    fmt.Println(i)
}

var wg sync.WaitGroup

func main() {
    // f()
    for i := 0; i < 10; i++ {
        // Increment the counter by one before starting the goroutine.
        wg.Add(1)
        go f1(i)
    }
    // Wait for the counter to drop back to 0.
    wg.Wait()
}
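    The note above says that sync.WaitGroup is a value type and must be passed to functions by pointer; the example sidesteps this with a package-level variable. A minimal sketch of the pointer-passing variant (the worker function here is made up for illustration):

package main

import (
    "fmt"
    "sync"
)

// worker receives a *sync.WaitGroup; passing the WaitGroup by value would copy it,
// and Done() would then decrement the copy instead of the original counter.
func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("worker", id, "done")
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait()
}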
  • Goroutines vs. threads
    Stack space: a goroutine's stack grows and shrinks as needed. An OS thread has a fixed stack (usually 2MB), while a goroutine's stack is not fixed in size (2KB up to about 1GB) and starts very small (2KB initially), so you can create a huge number of goroutines (a small demonstration follows the GOMAXPROCS example below).
    Scheduling model (see the reference blog): GPM is implemented at the level of the Go runtime; it is a scheduling system implemented by the Go language itself, separate from how the operating system schedules OS threads.

    G represents a goroutine object; it stores the goroutine's own information and information about the P it is bound to.
    P manages a queue of goroutines and stores their context (the context of an OS thread is saved by the OS). P schedules the goroutines in its own queue (for example, pausing a goroutine that has held the CPU too long and running the next one). When its own queue is exhausted it pulls from the global queue, and if the global queue is also exhausted it steals tasks from the queues of other Ps.
    M is the Go runtime's abstraction over an operating system kernel thread. An M generally maps one-to-one to a kernel thread, and a goroutine is ultimately executed on an M.

    The relationship between operating system threads and goroutines:

    One operating system thread can run multiple user-level goroutines.
    A Go program can use multiple operating system threads at the same time.
    Goroutines and OS threads have a many-to-many relationship, i.e. m:n (m goroutines are scheduled onto n operating system threads).

    The Go runtime uses GOMAXPROCS to determine how many OS threads execute Go code simultaneously. The default is the number of CPU cores, and it can be changed with runtime.GOMAXPROCS().

package main

import (
    "fmt"
    "runtime"
    "sync"
)

var wg sync.WaitGroup

func a() {
    defer wg.Done()
    for i := 0; i < 10; i++ {
        fmt.Printf("A:%d\n", i)
    }
}

func b() {
    defer wg.Done()
    for i := 0; i < 10; i++ {
        fmt.Printf("B:%d\n", i)
    }
}

func main() {
    // The default value is the number of CPU cores on the machine.
    // runtime.GOMAXPROCS(1) // serial output
    runtime.GOMAXPROCS(2) // parallel output
    wg.Add(2)
    go a()
    go b()
    wg.Wait()
}
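    To make the stack-size claim above concrete (a goroutine starts with roughly 2KB of stack, so huge numbers of them are cheap), here is a minimal sketch that parks 100000 goroutines on a channel and counts them with runtime.NumGoroutine(); the count of 100000 is an arbitrary demonstration value:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    release := make(chan struct{})

    // 100000 is an arbitrary demonstration value.
    for i := 0; i < 100000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            <-release // park here so all goroutines stay alive at the same time
        }()
    }

    // All goroutines are blocked on the channel, so roughly 100001 are alive.
    fmt.Println("goroutines alive:", runtime.NumGoroutine())

    close(release) // closing the channel unblocks every receiver
    wg.Wait()
}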
  • channel
    Go's concurrency model is CSP (Communicating Sequential Processes), which advocates sharing memory by communicating rather than communicating by sharing memory. A channel is a special reference type: it must be initialized with make, the element type it stores must be declared, and it is the mechanism by which one goroutine sends a specific value to another goroutine, following first-in-first-out order. The channel operator is <- and there are three operations. For an unbuffered channel, the send and the receive must happen concurrently, otherwise a deadlock occurs.
    > Send: ch <- 10
    > Receive: x := <-ch
    > Close: close(ch)
package main

import (
    "fmt"
    "sync"
)

var a chan int
var b chan int // the element type of the channel must be specified
var wg sync.WaitGroup

func noBufChannel() {
    a = make(chan int) // initialize an unbuffered channel: a value can only be sent when another goroutine is ready to receive it, otherwise sending directly causes a deadlock
    wg.Add(1)
    // Since the channel has no buffer, the receive must run concurrently with the send.
    go func() {
        defer wg.Done()
        x := <-a
        fmt.Println("background goroutine received from channel a:", x)
    }()
    a <- 10
    fmt.Println("10 sent to channel a...")
    wg.Wait()
    // Close the channel.
    close(a)
}

func BufChannel() {
    // Do not make the buffer too large; if the values are large, send pointers instead.
    b = make(chan int, 16) // initialize a buffered channel: a goroutine can put values into the buffer first and other goroutines receive them later
    b <- 10
    fmt.Println("10 sent to channel b...")
    x := <-b
    fmt.Println("background goroutine received from channel b:", x)
    // Close the channel.
    close(b)
}

func main() {
    noBufChannel()
    BufChannel()
}
  • channel practice
    Use channels in a loop to store and retrieve values in bulk: start one goroutine that generates 100 numbers and puts them into channel 1, start another goroutine that squares those 100 numbers and puts them into channel 2, and finally take them out.
package main

import (
    "fmt"
    "sync"
)

// channel practice
// 1. Start a goroutine that generates 100 numbers and sends them to ch1.
// 2. Start a goroutine that receives from ch1, squares each value and puts it into ch2.
// 3. In main, receive from ch2 and print the values.

var wg sync.WaitGroup

func f1(ch1 chan int) {
    defer wg.Done()
    for i := 0; i < 100; i++ {
        ch1 <- i
    }
    // ch1 is closed here so that f2 does not block (and deadlock) after reading
    // all of ch1's data, and so the receive can report ok == false.
    close(ch1)
}

func f2(ch1, ch2 chan int) {
    defer wg.Done()
    for {
        // After ch1 is closed and drained, the receive returns ok == false;
        // if ch1 were never closed, the receive would block forever and deadlock.
        x, ok := <-ch1
        if !ok {
            break
        }
        ch2 <- x * x
    }
    close(ch2)
}

func main() {
    // a's buffer does not have to hold everything, because values are consumed while they are produced.
    a := make(chan int, 100)
    // b's buffer must hold all the numbers, because nothing reads from b until the workers are done.
    b := make(chan int, 100)
    wg.Add(2)
    go f1(a)
    go f2(a, b)
    wg.Wait()
    for ret := range b {
        fmt.Println(ret)
    }
}
  • close
    close() is not a required operation: a channel is just a value of a channel type and is reclaimed automatically after the program ends. A closed channel has the following characteristics:
    Sending a value to a closed channel causes a panic. Receiving from a closed channel keeps yielding values until the channel is drained. Receiving from a closed channel with no values left yields the zero value of the element type. Closing an already closed channel causes a panic.
package main

import "fmt"

// Closing a channel
func main() {
    ch1 := make(chan int, 2)
    ch1 <- 10
    ch1 <- 20
    // Using for range on a channel that is never closed would deadlock
    // once the buffered values have been read:
    // for ret := range ch1 {
    //     println(ret)
    // }
    close(ch1)
    // A closed channel can no longer be written to, but values can still be received.
    // for range reads all remaining values from the closed channel and then exits.
    for ret := range ch1 {
        println(ret)
    }
    <-ch1
    <-ch1
    // After a closed channel is empty, receiving still works: it yields the zero
    // value of the element type, and the ok flag is false.
    x, ok := <-ch1
    fmt.Println(x, ok)
}
  • One-way channels
    A channel that can only be used for sending or only for receiving. One-way channels are often used as function parameters to guarantee that only the corresponding operation can be performed inside that function. To declare a one-way channel, put the <- arrow on the corresponding side of the chan keyword:
    > chan<- declares a send-only channel: values can only be sent into it, not received from it.
    > <-chan declares a receive-only channel: values can only be received from it, not sent into it.
package main

import "fmt"

func counter(out chan<- int) {
    for i := 0; i < 100; i++ {
        out <- i
    }
    close(out)
}

func squarer(out chan<- int, in <-chan int) {
    for i := range in {
        out <- i * i
    }
    close(out)
}

func printer(in <-chan int) {
    for i := range in {
        fmt.Println(i)
    }
}

func main() {
    ch1 := make(chan int)
    ch2 := make(chan int)
    go counter(ch1)
    go squarer(ch2, ch1)
    printer(ch2)
}
  • Summary of channel exceptions

| Operation | nil channel | non-empty channel | empty channel | full channel | not-full channel |
| --- | --- | --- | --- | --- | --- |
| Receive | blocks | receives a value | blocks | receives a value | receives a value |
| Send | blocks | sends the value | sends the value | blocks | sends the value |
| close | panic | closes successfully; remaining data can still be read, then zero values are returned | closes successfully; returns zero values | closes successfully; remaining data can still be read, then zero values are returned | closes successfully; remaining data can still be read, then zero values are returned |
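    The nil column above is the easiest one to trip over: a channel that is declared but never initialized with make is nil, every send or receive on it blocks forever, and closing it panics. A minimal sketch that probes this through select with a default branch so the program does not hang:

package main

import "fmt"

func main() {
    var ch chan int // declared but not made, so ch is nil

    // Both the send and the receive on a nil channel would block forever,
    // so probe them through select with a default branch instead.
    select {
    case ch <- 1:
        fmt.Println("sent") // never happens
    default:
        fmt.Println("send on a nil channel would block")
    }
    select {
    case v := <-ch:
        fmt.Println("received", v) // never happens
    default:
        fmt.Println("receive on a nil channel would block")
    }

    // close(ch) // closing a nil channel panics: "close of nil channel"
}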
  • Worker pool (goroutine pool)
package main

import (
    "fmt"
    "time"
)

// Form a worker pool of three workers, then process five jobs in parallel.
func worker(id int, jobs <-chan int, results chan<- int) {
    // Once jobs is closed and drained, the loop exits.
    for j := range jobs {
        fmt.Printf("worker:%d start job:%d\n", id, j)
        time.Sleep(time.Second)
        fmt.Printf("worker:%d end job:%d\n", id, j)
        results <- j * 2
    }
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)
    // Start 3 goroutines to form the worker pool.
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }
    // 5 jobs
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    // Close jobs so the worker goroutines do not block and deadlock.
    close(jobs)
    // Collect the 5 results.
    for a := 1; a <= 5; a++ {
        <-results
    }
}
  • Worker pool, improved version
package main

// Form a worker pool of three workers, then process five jobs concurrently:
// generating the numbers, doubling them, and reading the results each run in
// their own goroutines to improve throughput.

import (
    "fmt"
    "sync"
    "time"
)

var wg sync.WaitGroup
var notice sync.WaitGroup

func worker(id int, jobs <-chan int, results chan<- int) {
    // Once jobs is closed and drained, the loop exits.
    for j := range jobs {
        fmt.Printf("worker:%d start job:%d\n", id, j)
        time.Sleep(time.Second)
        fmt.Printf("worker:%d end job:%d\n", id, j)
        results <- j * 2
    }
    wg.Done()
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)
    // Start 3 goroutines to form the worker pool.
    wg.Add(3)
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }
    // 5 jobs
    go func() {
        for j := 1; j <= 5; j++ {
            jobs <- j
        }
        // Close jobs so the worker goroutines do not block and deadlock.
        close(jobs)
    }()
    // Print the results.
    notice.Add(1)
    go func() {
        for x := range results {
            fmt.Println(x)
        }
        notice.Done()
    }()
    // Wait for all workers to finish before closing the results channel.
    wg.Wait()
    close(results)
    // Wait until all results have been printed.
    notice.Wait()
}
  • Worker pool practice
package main

import (
    "fmt"
    "math/rand"
    "time"
)

type result struct {
    value int64
    sum   int64
}

func random(jobChan chan<- int64) {
    for {
        rand.Seed(time.Now().UnixNano())
        jobChan <- rand.Int63()
        time.Sleep(time.Millisecond * 500)
    }
}

// work sums the decimal digits of in.
func work(in int64) (out int64) {
    for in != 0 {
        out += in % 10
        in = in / 10
    }
    return
}

func worker(jobChan <-chan int64, resultsChan chan<- *result) {
    // Receive from jobChan, blocking when it is empty; because random numbers keep
    // being produced into jobChan, the workers never starve into a deadlock.
    // Either an infinite loop or for range works here, since values keep arriving.
    for j := range jobChan {
        // j := <-jobChan
        s := work(j)
        newResult := &result{
            value: j,
            sum:   s,
        }
        resultsChan <- newResult
    }
}

func main() {
    jobChan := make(chan int64, 100)
    // The struct is relatively large, so pointers are sent instead.
    resultsChan := make(chan *result, 100)
    go random(jobChan)
    // Start the worker pool.
    for i := 1; i <= 24; i++ {
        go worker(jobChan, resultsChan)
    }
    // resultsChan is never closed, so this for range loops forever,
    // blocking until the next result arrives rather than deadlocking.
    for ret := range resultsChan {
        // Wait for a value from resultsChan.
        fmt.Printf("value:%d,result:%d\n", ret.value, ret.sum)
    }
}
  • select multiplexing
    select is used when receiving data from several channels: it can respond to operations on multiple channels, and whichever channel operation is ready executes first. If several are ready at the same time, one is chosen at random rather than in top-to-bottom order (a timeout example follows the code below). In summary:
    > It can handle send/receive operations on one or more channels.
    > If more than one case is ready at the same time, select picks one at random.
    > A select{} with no cases blocks forever and can be used to block the main function.
package main

import "fmt"

func main() {
    // With a buffer of only one slot, the channel can only alternate between
    // storing a value and taking it back out.
    ch := make(chan int, 1)
    // With a large enough buffer both select branches can run, and the
    // outcome of each iteration is random.
    // ch := make(chan int, 10)
    for i := 0; i < 10; i++ {
        // select executes whichever case is ready.
        select {
        // If the channel holds a value, print it.
        case x := <-ch:
            // The output is 0,2,4,6,8: the even iterations store the number,
            // and the odd iterations (1,3,5,7,9) take it out and print it.
            fmt.Println(x)
        // If the channel is empty, store the value.
        case ch <- i:
        }
    }
}
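    A common select pattern not shown above is receiving with a timeout: time.After returns a channel that delivers a value after the given duration, so whichever of the two channels is ready first decides the outcome. A minimal sketch, with a deliberately slow producer made up for illustration:

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int)

    // A hypothetical slow producer, for illustration only.
    go func() {
        time.Sleep(2 * time.Second)
        ch <- 42
    }()

    // Whichever channel is ready first wins: the result or the timeout.
    select {
    case v := <-ch:
        fmt.Println("got result:", v)
    case <-time.After(1 * time.Second):
        fmt.Println("timed out waiting for result")
    }
}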
  • Asynchronous logging library
    Each log write is handled asynchronously instead of synchronously and in sequence: put the message to be written into a channel and start a single background goroutine that writes the messages from the channel into the log file. (Do not start more than one such goroutine: concurrent writes can cause problems, and rotating the file closes it, so the other running goroutines would then be writing to a closed file and fail.)
package mylogger

import (
    "fmt"
    "os"
    "path"
    "time"
)

// Write logs to a file.
// Note: LogLevel, parseLogLevel, getLogString, getInfo and the level constants
// (DEBUG ... FATAL) are assumed to be defined in another file of this package.

// FileLogger is the log file structure.
type FileLogger struct {
    Level       LogLevel
    filePath    string // directory where the log file is saved
    fileName    string // file name of the log file
    fileObj     *os.File
    errFileObj  *os.File
    maxFileSize int64 // maximum file size
    timeFlag    int   // log time flag
    timeErrFlag int   // error log time flag
    logChan     chan *logMsg
}

type logMsg struct {
    level     LogLevel
    msg       string
    funcName  string
    fileName  string
    timeStamp string
    line      int
}

// NewFileLogger is the FileLogger constructor.
func NewFileLogger(levelStr, fp, fn string, maxSize int64) *FileLogger {
    LogLevel, err := parseLogLevel(levelStr)
    if err != nil {
        panic(err)
    }
    tf := time.Now().Minute()
    tef := time.Now().Minute()
    fl := &FileLogger{
        Level:       LogLevel,
        filePath:    fp,
        fileName:    fn,
        maxFileSize: maxSize,
        timeFlag:    tf,
        timeErrFlag: tef,
        logChan:     make(chan *logMsg, 50000),
    }
    // Open the file according to the file path and file name.
    err = fl.initFile()
    if err != nil {
        panic(err)
    }
    return fl
}

// initFile opens the log file and the error log file at the given path and file name.
func (f *FileLogger) initFile() error {
    // Join the file path and file name using the operating system's separator.
    fullFileName := path.Join(f.filePath, f.fileName)
    // Open the log file and the error log file.
    fileObj, err := os.OpenFile(fullFileName, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        fmt.Printf("open log file failed,err:%v\n", err)
        return err
    }
    errfileObj, err := os.OpenFile(fullFileName+".err", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        fmt.Printf("open errlog file failed,err:%v\n", err)
        return err
    }
    f.fileObj = fileObj
    f.errFileObj = errfileObj
    // Multiple goroutines cannot be used here, because rotating a file closes it and
    // the other goroutines would then fail when writing to the closed file
    // (a mutex could probably work around this).
    // Start the background goroutine that writes the log.
    go f.writeLogBackground()
    return nil
}

func (f *FileLogger) Close() {
    f.fileObj.Close()
    f.errFileObj.Close()
}

// enable reports whether a message of the given level should be logged.
func (f *FileLogger) enable(loglevel LogLevel) bool {
    return f.Level <= loglevel
}

// checkSize checks the file size.
func (f *FileLogger) checkSize(file *os.File) bool {
    fileInfo, err := file.Stat()
    if err != nil {
        fmt.Printf("get file info failed,err:%v\n", err)
        return false
    }
    // Report whether the current file size has reached the maximum.
    return fileInfo.Size() >= f.maxFileSize
}

// splitFile rotates the file: when it reaches the maximum size, a new file is created.
func (f *FileLogger) splitFile(file *os.File) (*os.File, error) {
    // Get the old file's information and build the new file name.
    nowStr := time.Now().Format("20060102150405000")
    fileInfo, err := file.Stat()
    if err != nil {
        fmt.Printf("get fileInfo failed,err:%v\n", err)
        return nil, err
    }
    // Do not use the file name stored in the FileLogger; use the file's own name
    // so that the normal log and the error log are distinguished correctly.
    logName := path.Join(f.filePath, fileInfo.Name())
    newLogName := fmt.Sprintf("%s.bak%s", logName, nowStr)
    // 1. Close the current log file so it can be renamed.
    file.Close()
    // 2. Rename the full file, appending the timestamp.
    os.Rename(logName, newLogName)
    // 3. Open a new log file.
    fileObj, err := os.OpenFile(logName, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
    if err != nil {
        fmt.Printf("open new log file failed,err:%v\n", err)
        return nil, err
    }
    // 4. Return the newly opened file object; the caller assigns it to f.fileObj.
    return fileObj, nil
}

func (f *FileLogger) writeLogBackground() {
    for {
        // Decide whether to rotate the file based on its size.
        if f.checkSize(f.fileObj) {
            newFile, err := f.splitFile(f.fileObj)
            if err != nil {
                return
            }
            f.fileObj = newFile
        }
        select {
        // A log message can be taken out of the channel.
        case logTmp := <-f.logChan:
            // Format and write the log message.
            fmt.Fprintf(f.fileObj, "[%s] [%s] [%s:%s:%d] %s\n", logTmp.timeStamp, getLogString(logTmp.level), logTmp.fileName, logTmp.funcName, logTmp.line, logTmp.msg)
            // If the level is ERROR or above, also record it in the error log.
            if logTmp.level >= ERROR {
                if f.checkSize(f.errFileObj) {
                    newFile, err := f.splitFile(f.errFileObj)
                    if err != nil {
                        return
                    }
                    f.errFileObj = newFile
                }
                fmt.Fprintf(f.errFileObj, "[%s] [%s] [%s:%s:%d] %s\n", logTmp.timeStamp, getLogString(logTmp.level), logTmp.fileName, logTmp.funcName, logTmp.line, logTmp.msg)
            }
        default:
            // If nothing can be taken out, sleep briefly before checking again to avoid busy-waiting.
            time.Sleep(time.Millisecond * 500)
        }
    }
}

// log rotates by file size; before each write, the size of the file currently being written is checked.
func (f *FileLogger) log(lv LogLevel, format string, a ...interface{}) {
    if f.enable(lv) {
        msg := fmt.Sprintf(format, a...)
        now := time.Now()
        funcName, fileName, lineNo := getInfo(3)
        // Put the log message to be written into the channel.
        // 1. Build a new logMsg object.
        logTmp := &logMsg{
            level:     lv,
            msg:       msg,
            funcName:  funcName,
            fileName:  fileName,
            timeStamp: now.Format("2006-01-02 15:04:05"),
            line:      lineNo,
        }
        // Try to put the log into the channel; if the channel is full, drop the log and
        // keep going, so the main business flow is never blocked.
        select {
        case f.logChan <- logTmp:
        default:
            // Drop the message and carry on.
        }
    }
}

func (f *FileLogger) Debug(format string, a ...interface{}) {
    f.log(DEBUG, format, a...)
}

func (f *FileLogger) Trace(format string, a ...interface{}) {
    f.log(TRACE, format, a...)
}

func (f *FileLogger) Info(format string, a ...interface{}) {
    f.log(INFO, format, a...)
}

func (f *FileLogger) Warning(format string, a ...interface{}) {
    f.log(WARNING, format, a...)
}

func (f *FileLogger) Error(format string, a ...interface{}) {
    f.log(ERROR, format, a...)
}

func (f *FileLogger) Fatal(format string, a ...interface{}) {
    f.log(FATAL, format, a...)
}
  • Mutex
    A lock is essentially a struct and a value type, so when passing it to a function you must pass a pointer.
package main

import (
    "fmt"
    "sync"
)

// Mutex
var x = 0
var wg sync.WaitGroup
var lock sync.Mutex

func add() {
    for i := 0; i < 50000; i++ {
        lock.Lock()
        x = x + 1
        lock.Unlock()
    }
    wg.Done()
}

func main() {
    wg.Add(2)
    go add()
    go add()
    wg.Wait()
    fmt.Println(x)
}
  • Read-write lock
    The read-write lock sync.RWMutex is used when reads greatly outnumber writes. Once one goroutine holds the read lock, other goroutines can still acquire the read lock and read concurrently; acquiring the write lock blocks until it is exclusive, ensuring that only one goroutine can write at a time.
package main

import (
    "fmt"
    "sync"
    "time"
)

// Read-write lock
var (
    x  = 0
    wg sync.WaitGroup
    // lock   sync.Mutex
    rwLock sync.RWMutex
)

func read() {
    defer wg.Done()
    rwLock.RLock()
    fmt.Println(x)
    time.Sleep(time.Millisecond)
    rwLock.RUnlock()
}

func write() {
    defer wg.Done()
    rwLock.Lock()
    x = x + 1
    time.Sleep(time.Millisecond * 5)
    rwLock.Unlock()
}

func main() {
    start := time.Now()
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go write()
    }
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go read()
    }
    wg.Wait()
    fmt.Println(time.Since(start))
}
  • sync.Once
    The Do() method of sync.Once is used when an operation must be executed exactly once under high concurrency (loading a configuration file, closing a shared channel, and so on). Internally it holds a mutex and a boolean: the mutex protects the boolean and the data, and the boolean records whether the initialization has completed. Each call first checks the boolean to see whether the operation has already run; if not, it takes the mutex, runs the function, and releases the lock when done. Note that Do() takes a function with no parameters and no return value, so to run a function that needs arguments, wrap it in an anonymous closure and pass that (a configuration-loading sketch follows the example below).
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup
var once sync.Once

func f1(ch1 chan int) {
    defer wg.Done()
    for i := 0; i < 100; i++ {
        ch1 <- i
    }
    // ch1 is closed here so that f2 does not block (and deadlock) after reading
    // all of ch1's data, and so the receive can report ok == false.
    close(ch1)
}

func f2(ch1, ch2 chan int) {
    defer wg.Done()
    for x := range ch1 {
        ch2 <- x * x
    }
    // Use once to make sure ch2 is closed only one time, preventing a panic
    // when both f2 goroutines reach this point.
    once.Do(func() { close(ch2) })
}

func main() {
    // a's buffer does not have to hold everything, because values are consumed while they are produced.
    a := make(chan int, 100)
    // b's buffer must hold all the numbers, because nothing reads from b until the workers are done.
    b := make(chan int, 100)
    wg.Add(3)
    go f1(a)
    go f2(a, b)
    go f2(a, b)
    wg.Wait()
    for ret := range b {
        fmt.Println(ret)
    }
}
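    The configuration-loading use case mentioned above looks roughly like the following minimal sketch; the config struct and loadConfig function are made up for illustration, and Do guarantees loadConfig runs exactly once no matter how many goroutines call getConfig:

package main

import (
    "fmt"
    "sync"
)

// config and loadConfig are hypothetical, for illustration only.
type config struct {
    Addr string
}

var (
    once sync.Once
    conf *config
)

func loadConfig() {
    fmt.Println("loading config...") // printed exactly once
    conf = &config{Addr: "127.0.0.1:8080"}
}

// getConfig can be called from many goroutines; loadConfig still runs only once.
func getConfig() *config {
    once.Do(loadConfig)
    return conf
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println(getConfig().Addr)
        }()
    }
    wg.Wait()
}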
  • sync.Map
    Go's built-in map is not concurrency-safe: when multiple goroutines store to and load from the same map, the program reports a fatal error. sync.Map is a concurrency-safe map; unlike the built-in map it does not need to be initialized with make and can be used right after being declared, and its keys and values can be of any type. sync.Map also provides built-in operations such as Store (save a value), Load (read a value), LoadOrStore (read the value, storing the given one if absent), Delete (remove a value), and Range (iterate over all entries); a short LoadOrStore/Delete sketch follows the example below.
package main

import (
    "fmt"
    "strconv"
    "sync"
)

// var m = make(map[string]int)
//
// func get(key string) int {
//     return m[key]
// }
//
// func set(key string, value int) {
//     m[key] = value
// }
//
// func main() {
//     wg := sync.WaitGroup{}
//     // Go's built-in map does not support safe concurrent access;
//     // with enough concurrent goroutines it triggers a fatal error.
//     for i := 0; i < 19; i++ {
//         wg.Add(1)
//         go func(n int) {
//             key := strconv.Itoa(n)
//             set(key, n)
//             fmt.Printf("k=:%v,v:=%v\n", key, get(key))
//             wg.Done()
//         }(i)
//     }
//     wg.Wait()
// }

// The map in the sync package can be used without allocating memory with make first.
var m = sync.Map{}

func main() {
    wg := sync.WaitGroup{}
    // sync.Map supports safe concurrent access; the built-in map does not,
    // and with enough concurrent goroutines it triggers a fatal error.
    for i := 0; i < 21; i++ {
        wg.Add(1)
        go func(n int) {
            key := strconv.Itoa(n)
            // Save a value with the Store method.
            m.Store(key, n)
            // Read it back with the Load method.
            value, _ := m.Load(key)
            fmt.Printf("k=:%v,v:=%v\n", key, value)
            wg.Done()
        }(i)
    }
    wg.Wait()
    // Iterate over all entries.
    m.Range(func(key, value interface{}) bool {
        fmt.Println(key, value)
        return true
    })
}
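    LoadOrStore and Delete are mentioned above but not used in the example; a minimal sketch of how they behave:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var m sync.Map

    // LoadOrStore returns the existing value if the key is present, otherwise it
    // stores and returns the given value; loaded reports which of the two happened.
    v, loaded := m.LoadOrStore("count", 1)
    fmt.Println(v, loaded) // 1 false: the value was stored

    v, loaded = m.LoadOrStore("count", 2)
    fmt.Println(v, loaded) // 1 true: the existing value was returned

    // Delete removes the key.
    m.Delete("count")
    _, ok := m.Load("count")
    fmt.Println(ok) // false: the key is gone
}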
  • Atomic operations
    Locks are built on top of atomic operations, which are usually implemented directly by CPU instructions, so an atomic operation is faster than taking a lock (a rough timing comparison follows the example below).
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// Atomic operations
var x int64
var wg sync.WaitGroup

func add() {
    // x++
    // The atomic operation guarantees concurrency safety.
    atomic.AddInt64(&x, 1)
    wg.Done()
}

func main() {
    wg.Add(100000)
    for i := 0; i < 100000; i++ {
        go add()
    }
    wg.Wait()
    fmt.Println(x)

    // Compare and swap.
    var y int64 = 100
    // The first argument is compared with the second; if they are equal,
    // CompareAndSwapInt64 returns true and replaces the value with the third argument.
    ok := atomic.CompareAndSwapInt64(&y, 100, 200)
    fmt.Println(ok, y)
}
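    To back the claim above that atomic operations are cheaper than taking a mutex, here is a rough, single-goroutine timing sketch; the iteration count is arbitrary and the measured gap will vary by machine:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

func main() {
    const n = 1000000 // arbitrary iteration count for the comparison

    var counter int64
    var mu sync.Mutex

    // Increment under a mutex.
    start := time.Now()
    for i := 0; i < n; i++ {
        mu.Lock()
        counter++
        mu.Unlock()
    }
    fmt.Println("mutex: ", time.Since(start))

    // Increment with an atomic add.
    counter = 0
    start = time.Now()
    for i := 0; i < n; i++ {
        atomic.AddInt64(&counter, 1)
    }
    fmt.Println("atomic:", time.Since(start))
}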

Author: KayCh
