Why do the new generation of high-concurrency programming languages like Go and Rust hate shared memory?
2022-06-24 05:53:00 | beyondma
Today I want to discuss high concurrency. The languages of the cloud-native and serverless era, represented by Rust and Go, tend to put the pipeline (channel) mechanism first when designing their high-concurrency programming models, while the traditional sharp tools of concurrency control, such as mutexes and semaphores, are not recommended.
Let's first look at the concepts of concurrency and parallelism. Concurrency means a single processor handles multiple tasks within the same period of time; it is a logical notion. Parallelism means multiple physical units execute different instructions at the same instant; it is a physical notion. Under concurrency, when the currently running task blocks or has to wait, it releases the CPU so that other tasks can be scheduled; under parallelism, different tasks execute at the same time without interfering with each other.
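To make the distinction concrete, here is a minimal Go sketch, not from the original article, that runs the same CPU-bound work first with GOMAXPROCS limited to one core, where the four goroutines can only take turns (concurrency), and then with every core available, where they really do run at the same time (parallelism). The busyWork function and the loop sizes are arbitrary placeholders chosen for illustration.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

var sink int64 // written so the compiler cannot discard the busy loop

// busyWork burns CPU so the gap between one core and many cores is measurable.
func busyWork() int64 {
	var sum int64
	for i := int64(0); i < 100_000_000; i++ {
		sum += i
	}
	return sum
}

// run launches four CPU-bound goroutines with the given number of usable cores
// and reports how long they take to finish.
func run(procs int) time.Duration {
	runtime.GOMAXPROCS(procs)
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&sink, busyWork())
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	fmt.Println("1 core (concurrent only):", run(1))
	fmt.Println(runtime.NumCPU(), "cores (parallel):", run(runtime.NumCPU()))
}

On a machine with at least four cores, the single-core run takes roughly four times as long as the multi-core run, because the goroutines can only interleave; with all cores available they genuinely overlap.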
Traditional semaphores and mutexes were designed to squeeze the most out of a single-core CPU: let a program release the CPU when it blocks, and avoid conflicts by controlling access to shared variables. Controlling the behavior of those shared variables comes down to carefully designing the timing, and controlling timing is, in essence, adding traffic lights and roadblocks to the system. Keep in mind that a high-performance system needs infrastructure like overpasses and underground tunnels, not traffic lights and other control measures; a good concurrent system must be modeled around the concept of flow, rather than placing checkpoints and roadblocks everywhere. Today's processors are multi-core, so programming should also lean toward parallelism. Yet in many of the so-called high-concurrency tutorials online, the traffic-light timing is tuned to perfection, while all the overpasses have been thrown away...
Traffic lights should serve to direct the flow, not merely to restrict it
Next, let's look at three pieces of code: one controlled by semaphores (the traffic lights), one made "concurrent" through mutual exclusion, and one that is plainly serial. The goal of each is simply to count from 0 all the way up to 3,000,000.
Semaphore control
In fact, the semaphore version essentially degenerates back into sequential execution. As we mentioned in an earlier article ("Go, look at your mistakes; Rust has already laid out these pits for you"), Rust's variable lifetime and ownership checking does not let you casually share memory between threads, and even though there are ways to work around it, they are not officially recommended, so we will use Go to walk readers through the examples.
package main

import (
	"fmt"
	"sync"
	"time"
)

var count int
var wg1 sync.WaitGroup
var wg2 sync.WaitGroup
var wg3 sync.WaitGroup
var wg4 sync.WaitGroup

func goroutine1() {
	wg1.Wait()
	len := 1000000
	for i := 0; i < len; i++ {
		count++
	}
	wg2.Done()
}

func goroutine2() {
	wg2.Wait()
	len := 1000000
	for i := 0; i < len; i++ {
		count++
	}
	wg3.Done()
}

func goroutine3() {
	wg3.Wait()
	len := 1000000
	for i := 0; i < len; i++ {
		count++
	}
	wg4.Done()
}

func main() {
	now := time.Now().UnixNano()
	wg1.Add(1)
	wg2.Add(1)
	wg3.Add(1)
	wg4.Add(1)
	go goroutine1()
	go goroutine2()
	go goroutine3()
	wg1.Done()
	wg4.Wait()
	fmt.Println(time.Now().UnixNano() - now)
	fmt.Println(count)
}

Here, three child goroutines operate on the shared variable count in domino fashion under the control of four semaphores (the WaitGroups). Running this code produces the following result:
4984300
3000000
Success: process exit code 0.
Mutex control
Unlike the semaphore scheme, which degenerates completely into sequential execution, a mutex essentially guarantees only that at most one goroutine at a time executes the critical section; the order in which the goroutines run does not matter. The code is as follows:
package main

import (
	"fmt"
	"sync"
	"time"
)

var count int
var wg1 sync.WaitGroup
var mutex sync.Mutex

func goroutine1() {
	mutex.Lock()
	len := 1000000
	for i := 0; i < len; i++ {
		count++
	}
	mutex.Unlock()
	wg1.Done()
}

func main() {
	now := time.Now().UnixNano()
	wg1.Add(3)
	go goroutine1()
	go goroutine1()
	go goroutine1()
	wg1.Wait()
	fmt.Println(time.Now().UnixNano() - now)
	fmt.Println(count)
}

Judged by the actual execution order, the mutex scheme should behave much like the semaphore one, yet the outcome is hardly satisfying: under the control of the mutex, the program's performance drops by about 30%. The results are as follows:
5986800
3000000
Success: process exit code 0.
Serial mode
Finally, the most primitive approach: the serial version of the code is as follows:
package main

import (
	"fmt"
	//"sync"
	"time"
)

var count int

func goroutine1() {
	len := 1000000
	for i := 0; i < len; i++ {
		count++
	}
}

func main() {
	now := time.Now().UnixNano()
	goroutine1()
	goroutine1()
	goroutine1()
	fmt.Println(time.Now().UnixNano() - now)
	fmt.Println(count)
}

You can see that, in terms of efficiency, the straightforward serial version is on a par with the semaphore version. The result is as follows:
4986700
3000000
Success: process exit code 0.
In other words, after a great deal of effort, the end result may be no better than simply running the code serially.
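By contrast, here is a minimal sketch, not part of the original benchmark, of the channel-based "flow" style that Go actually encourages for this kind of task: each goroutine counts in its own local variable and only sends its finished subtotal through a channel, so there is no shared count to guard with locks or semaphores at all. The counter function and the buffered channel size are my own choices for illustration.

package main

import (
	"fmt"
	"time"
)

// counter performs its 1,000,000 increments on a local variable and only
// communicates the final subtotal over the channel.
func counter(out chan<- int) {
	sum := 0
	for i := 0; i < 1000000; i++ {
		sum++
	}
	out <- sum
}

func main() {
	now := time.Now().UnixNano()
	out := make(chan int, 3)
	for i := 0; i < 3; i++ {
		go counter(out)
	}
	total := 0
	for i := 0; i < 3; i++ {
		total += <-out
	}
	fmt.Println(time.Now().UnixNano() - now)
	fmt.Println(total)
}

Because no goroutine ever touches another goroutine's memory, the three counters can genuinely run in parallel on a multi-core machine; the channel only comes into play three times, when the subtotals are merged.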
Rust's Future mechanism
The future mechanism in Rust is somewhat like the promise mechanism in JavaScript: it lets programmers design highly concurrent, asynchronous scenarios while writing what looks like synchronous code. Go does have mechanisms such as defer, but they are nowhere near as powerful as Rust's future. The Future mechanism separates the returned value from the executor that computes it, so programmers no longer have to focus on designing a concrete timing mechanism; they only need to specify the conditions a Future requires to run, and the executor that will run it.
Let's look at the following code.
Note: Cargo.toml
[dependencies]
futures = { version = "0.3.5", features = ["thread-pool"] }

The code is as follows:
use futures::channel::mpsc;
use futures::executor;
use futures::executor::ThreadPool;
use futures::StreamExt;

fn main() {
    let poolExecutor = ThreadPool::new().expect("Failed");
    let (tx, rx) = mpsc::unbounded::<String>();
    let future_values = async {
        let fut_tx_result = async move {
            let hello = String::from("hello world");
            for c in hello.chars() {
                tx.unbounded_send(c.to_string()).expect("Failed to send");
            }
        };
        poolExecutor.spawn_ok(fut_tx_result);
        let future_values = rx
            .map(|v| v)
            .collect();
        future_values.await
    };
    let values: Vec<String> = executor::block_on(future_values);
    println!("Values={:?}", values);
}

In the code above, we use async blocks to define future_values, hand the sending Future over to the poolExecutor thread pool for execution, and finally use await to let the futures run to completion, instead of using semaphores to control the specific timing.
So once you have a solid grasp of the future mechanism, you no longer need to worry about mutexes and semaphores; the concrete scheduling can be safely handed over to the computer to optimize. This not only saves the programmer's time but also brings the full power of the compiler into play, and above all it avoids the low-level mistake of keeping only the traffic lights while throwing away the overpasses.
Although Java also has its own Future implementation, and it offers reflection capabilities that Rust lacks, cold start has always been Java's pain point. So in the current cloud-native era, Go and Rust, and Rust in particular, with its C-like startup speed and runtime efficiency, may well be the kings of the future.