
Concurrent programming summary

2022-06-22 07:52:00 Glory eggplant

        1、 Books
            Thinking in Java
            Patterns of Enterprise Application Architecture
            Java Concurrency in Practice


        2、 Registry center
            Redis
            ZooKeeper
            Spring Cloud

        3、IO
            BIO: blocking IO
            NIO: non-blocking IO
        
        4、Netty

             Key technique used: Future

        5、 Message middleware
            RabbitMQ, RocketMQ, ActiveMQ, Kafka

        6、MQ

             Scene 1
             Two systems exchange data: one produces, one consumes, and the two
             run at different speeds. Over time the faster system causes serious
             memory accumulation in the slower one, ending in a memory overflow.
             Message middleware sits between them as a buffer.

             Scene 2
             Distributed systems
             Producers publish data into the MQ, and consumers subscribe to it.
             This reduces the complexity of point-to-point interfaces
             interleaved between systems.

            RPC: Dubbo

        7、 Thread safety

             When multiple threads access a class, object, or method, the class
             always exhibits correct behavior.

            【1】 For non-static methods, the lock that the synchronized keyword
                 acquires is the object lock: the thread holds the lock of the
                 instance the method belongs to.
            【2】 If synchronized modifies a static method, it is equivalent to
                 locking the class, i.e. exclusive access to the Class object.

            private AtomicInteger num — atomic

            synchronized can lock any object or method, and the locked code
             region is called a mutually exclusive region or critical section.

             When multiple threads call a thread's run method, they are handled
             in a queue; the order here follows CPU scheduling, not the order
             in which the threads were created.
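A minimal sketch of the two synchronized forms described above (the class name ObjectLockDemo is hypothetical, not from the notes): the instance method locks the object, the static method locks the Class object.

```java
public class ObjectLockDemo {
    private int count = 0;
    private static int staticCount = 0;

    // Non-static: the lock acquired is this instance (the object lock).
    public synchronized void increment() {
        count++;
    }

    // Static: the lock acquired is ObjectLockDemo.class (the class lock).
    public static synchronized void incrementStatic() {
        staticCount++;
    }

    public int getCount() { return count; }
    public static int getStaticCount() { return staticCount; }

    public static void main(String[] args) throws InterruptedException {
        ObjectLockDemo demo = new ObjectLockDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.getCount()); // always 20000 thanks to the object lock
    }
}
```

Because the two locks are different objects, a thread inside increment() does not block another thread inside incrementStatic().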
            
        8、 Synchronization and asynchrony of object locks
             Synchronous: synchronized
             The concept of synchronization is sharing; if a resource is not
             shared, there is no need to synchronize.

             Asynchronous
             The concept of asynchrony is independence: the parties do not
             constrain each other. It is like what we learn with HTTP: an Ajax
             request fired from a page lets us continue to browse or operate on
             the page's content, and the two do not affect each other.

             The purpose of synchronization is thread safety, and thread safety
             requires two properties:
                 atomicity
                 visibility

             The lock is held on the object itself; changing the object's
             attributes does not affect the lock.
            
                
        9、 Dirty reads of business data

             If the set-value method takes a long time while the get-value
             method runs unsynchronized, dirty data is produced, i.e. the data
             becomes inconsistent.

             When we lock an object's methods, the integrity of the business
             operation must be considered: add the synchronized keyword to both
             the setValue and getValue methods, guaranteeing the atomicity of
             the business operation so that business errors cannot occur.

             Database ACID

            A  Atomicity
            C  Consistency
            I  Isolation
            D  Durability
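A sketch of the setValue/getValue pairing described above (DirtyReadDemo and its fields are hypothetical names): because both methods hold the same object lock, a reader can never observe a half-updated name/value pair.

```java
public class DirtyReadDemo {
    private String name = "init";
    private String value = "init";

    // The slow writer: sleeps mid-update to simulate a long-running set.
    public synchronized void setValue(String name, String value) {
        this.name = name;
        try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        this.value = value;
    }

    // Without synchronized here, a concurrent reader could see the new
    // name combined with the old value: a dirty read.
    public synchronized String getValue() {
        return name + "=" + value;
    }
}
```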
            
        10、 Lock reentrancy
             When a thread has obtained an object's lock, it can ask for the
             same object's lock again and acquire it.

             Example: a queue of tasks is executed, and each object should wait
             for the previous one to execute correctly before the lock is
             released. If the first object fails with an exception, its
             business logic never completes normally, yet the lock is still
             released, and the subsequent objects execute on top of wrong logic.
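Reentrancy in miniature (ReentrantDemo is a hypothetical name): a thread already holding the object lock may enter another synchronized method on the same object without deadlocking against itself.

```java
public class ReentrantDemo {
    public synchronized int outer() {
        // Re-acquires the lock this thread already holds: allowed, no deadlock.
        return inner() + 1;
    }

    public synchronized int inner() {
        return 1;
    }
}
```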

        11、volatile
             Its main purpose is to make a variable visible across multiple threads.

             In Java, each thread has a working memory area that holds copies
             of the variable values from 【shared main memory】. While a thread
             executes, it operates on these copies in its own working memory.
             To access a shared variable, a thread normally acquires a lock and
             clears its working memory, correctly loading the shared variables
             from the memory shared by all threads into its own working memory;
             when the thread releases the lock, it ensures that the variable
             values in its working memory are written back to shared memory.

            volatile forces threads to read the variable from main memory
             (shared memory) rather than from the thread's working memory,
             which achieves variable visibility across threads, i.e. the
             visibility aspect of thread safety.

            volatile provides visibility across multiple threads, but it
             provides 【no synchronization】 (i.e. no atomicity) and no blocking.
             It is better suited to read-oriented use; to get atomicity, the
             Atomic classes are recommended.
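A small sketch combining both ideas above (VolatileDemo is a hypothetical name): the volatile flag is read from main memory by every thread, while AtomicInteger supplies the atomicity that volatile alone lacks.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileDemo {
    // volatile: every read of the flag goes to main memory, so a stop()
    // from another thread becomes visible to the worker loop.
    private volatile boolean running = true;

    // AtomicInteger: incrementAndGet is atomic, unlike ++ on a plain int.
    private final AtomicInteger counter = new AtomicInteger();

    public void stop() { running = false; }

    public int work() {
        while (running) {
            counter.incrementAndGet();
            if (counter.get() > 1_000) stop();  // flag flip is visible immediately
        }
        return counter.get();
    }
}
```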

        12、 Communication between threads

            The wait method releases the lock; the notify method does not.

            CountDownLatch
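A minimal CountDownLatch sketch (LatchDemo is a hypothetical name): the main thread blocks in await() until both workers have counted the latch down to zero, then safely reads their results.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static int run() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);
        int[] results = new int[2];
        for (int i = 0; i < 2; i++) {
            final int idx = i;
            new Thread(() -> {
                results[idx] = idx + 1;
                latch.countDown();       // signal this worker is done
            }).start();
        }
        latch.await();                   // block until the count reaches zero
        return results[0] + results[1];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());       // 3
    }
}
```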

        13、BlockingQueue: a queue that supports a blocking mechanism
             Both putting and taking data can block. To see this with
             LinkedBlockingQueue, the two key methods are put and take:
            put: if there is no free space, the calling thread blocks until
                 space becomes available
            take: removes the first object in the queue; if the queue is empty,
                 the thread waits until new data arrives
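The put/take behaviour above can be sketched with a capacity-1 LinkedBlockingQueue (BlockingQueueDemo is a hypothetical name): the producer's second put blocks until the consumer's first take frees a slot.

```java
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingQueueDemo {
    public static String run() throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1); // capacity 1
        new Thread(() -> {
            try {
                queue.put("a");
                queue.put("b");   // blocks until the consumer takes "a"
            } catch (InterruptedException ignored) {}
        }).start();
        String first = queue.take();   // "a"
        String second = queue.take();  // blocks until the producer puts "b"
        return first + second;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // ab
    }
}
```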
            
        14、ThreadLocal
             A thread-local variable: a solution for accessing variables
             concurrently across multiple threads. Unlike synchronized and
             similar mechanisms, ThreadLocal provides no locks at all; it
             trades space for time, giving every thread its own independent
             copy of the variable to ensure thread safety.

             In terms of performance, ThreadLocal has no absolute advantage:
             when concurrency is not very high, locking performs better. But as
             a lock-free thread-safety solution, ThreadLocal reduces lock
             contention to some extent in highly concurrent or highly contended
             scenarios.
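The per-thread copy can be demonstrated in a few lines (ThreadLocalDemo is a hypothetical name): a write from another thread never disturbs the current thread's copy.

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy, initialised to 0: space traded for time.
    private static final ThreadLocal<Integer> LOCAL = ThreadLocal.withInitial(() -> 0);

    public static int run() throws InterruptedException {
        LOCAL.set(1);                                  // this thread's copy
        Thread t = new Thread(() -> LOCAL.set(99));    // the other thread's copy
        t.start();
        t.join();
        return LOCAL.get();                            // still 1, untouched
    }
}
```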
            
        15、 Concurrent programming - synchronized and concurrent containers

            【Synchronized containers】: Vector, Hashtable, and the containers
             created by factory methods such as Collections.synchronizedXXX.
             The underlying mechanism is nothing more than the traditional
             synchronized keyword applied to every public method, so only one
             thread at a time can access the container's state. This can hardly
             satisfy high-concurrency requirements.

             Concurrent containers

            Since JDK 5.0, concurrent containers have been provided to replace
             the synchronized ones and improve performance.
            【Synchronized containers】 serialize all access to their state;
             although this makes them thread-safe, it greatly reduces
             concurrency and severely lowers application throughput in a
             multithreaded environment.

            【Concurrent containers】 are designed specifically for concurrency.
             ConcurrentHashMap replaces the traditional hash-based Hashtable
             and adds support for some common compound operations.
             CopyOnWriteArrayList replaces Vector, alongside the concurrent
             CopyOnWriteArraySet, and there are concurrent queues:
             ConcurrentLinkedQueue and LinkedBlockingQueue. The former is a
             high-performance queue; the latter is a blocking queue.

             There are many more concrete Queue implementations, for example
             ArrayBlockingQueue, PriorityBlockingQueue, SynchronousQueue.
            
            ConcurrentHashMap
            ConcurrentSkipListMap (supports concurrent sorted access,
             complementing ConcurrentHashMap)

            ConcurrentHashMap internally uses segments to represent its
             different parts. Each segment is in effect a small Hashtable with
             its own 【lock】, so as long as multiple modifications occur on
             different segments, they can proceed concurrently. The map is
             divided into 16 segments, which means it supports concurrent
             modification by at most 16 threads. (This segment-based design is
             the one used through JDK 7.)

             This is also a way of reducing lock contention in multithreaded
             scenarios by reducing lock granularity. In addition, most shared
             variables in its code are declared with the volatile keyword, so
             that modifications become visible at the first opportunity, which
             gives very good performance.
            
            
            CopyOnWrite containers
            The JDK has two COW containers: CopyOnWriteArrayList and
             CopyOnWriteArraySet.

             Intuitively: when we add an element to such a container, we do not
             add it directly to the current container. Instead the current
             container is copied, the element is added to the new copy, and
             then the container's reference is pointed at the new copy. The
             benefit is that the container can be read concurrently without any
             locking, because the container being read is never modified.
             CopyOnWrite containers embody the idea of read/write separation:
             reads and writes happen on different containers.

             They are suitable for read-heavy, write-light workloads.
            
            
            Queue

            ConcurrentLinkedQueue
             A queue for high-concurrency scenarios. It achieves high
             performance under high concurrency through lock-free techniques;
             ConcurrentLinkedQueue usually outperforms BlockingQueue.
             It is an unbounded, thread-safe queue based on linked nodes, and
             its elements follow the first-in, first-out principle.

             The head is the element that has been in the queue the longest;
             the tail is the most recently added. The queue does not allow
             null elements.

            ConcurrentLinkedQueue methods
                add() and offer() both add elements; in ConcurrentLinkedQueue
                 there is little difference between them. In a blocking queue,
                 offer can take a waiting time, and if that time is exceeded,
                 the addition fails.
                poll() and peek() both return the head element; the former
                 removes it, the latter does not.
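The poll/peek distinction in a few lines (ClqDemo is a hypothetical name): peek leaves the head in place, poll removes it.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    public static String run() {
        ConcurrentLinkedQueue<String> q = new ConcurrentLinkedQueue<>();
        q.offer("a");
        q.offer("b");
        String peeked = q.peek();          // "a", still in the queue
        String polled = q.poll();          // "a", now removed
        return peeked + polled + q.peek(); // "b" is the new head
    }
}
```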
                
             Blocking queues
            ArrayBlockingQueue
                 A blocking queue implemented on top of an array.
                 Internally it maintains a fixed-length array that buffers the
                 data objects in the queue; the length must be specified.
                 There is no read/write separation, which means producers and
                 consumers cannot run fully in parallel. It is first in,
                 first out, and is also called a bounded queue.

            LinkedBlockingQueue
                 A blocking queue based on a linked list.
                 Internally it maintains a data buffer queue made of linked
                 nodes. The reason it handles concurrent data efficiently is
                 that its internal implementation uses separate locks (two
                 locks, separating reads from writes), so producer and consumer
                 operations can run fully in parallel.
                 It is an unbounded queue.
            
            SynchronousQueue
                 An unbuffered queue: data produced by the producer is handed
                 directly to a consumer that takes and consumes it.
                【If you only add without anyone consuming, add() fails with a
                 "Queue full" error.】
                 One side puts, the other side takes: a thread must already be
                 blocked waiting for an element, and only then can an element
                 be thrown in and handed directly to that thread. So a thread
                 needs to be waiting for the element first.

                 It is suitable for scenarios with small amounts of data.
                
            PriorityBlockingQueue
                 A priority-based blocking queue.
                 Objects placed into the queue must implement the Comparable
                 interface. Internal thread synchronization is controlled with
                 a ReentrantLock. It is an unbounded queue.

                 It does not sort elements when add is called; ordering is
                 only guaranteed 【when take removes them from the container】.
            
            DelayQueue
                 A queue with a delay time: an element can only be taken from
                 the queue once its specified delay has expired.
                 Elements in a DelayQueue must implement the Delayed interface.
                 The queue has no size limit. Applications include removing
                 timed-out cache entries, task timeout handling, closing idle
                 connections, and so on.

                 Example application: timed Internet access in an Internet cafe.
                       Each online user implements the Delayed interface;
                       logging the user off is a take that blocks and waits
                       until the time is up.
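A sketch of the Internet-cafe example (DelayDemo and NetUser are hypothetical names): take() blocks until the element's delay has expired, then hands it over.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayDemo {
    static class NetUser implements Delayed {
        final String name;
        final long expireAt;   // absolute expiry time in milliseconds

        NetUser(String name, long delayMs) {
            this.name = name;
            this.expireAt = System.currentTimeMillis() + delayMs;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            // Remaining delay; <= 0 means the element is ready to be taken.
            return unit.convert(expireAt - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed o) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                o.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static String run() throws InterruptedException {
        DelayQueue<NetUser> queue = new DelayQueue<>();
        queue.put(new NetUser("alice", 50)); // "log off" alice after 50 ms
        return queue.take().name;            // blocks until the delay expires
    }
}
```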
            
            
            The Future pattern
                 Similar to an Ajax asynchronous request.
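A minimal Future sketch (FutureDemo is a hypothetical name): submit the task, keep doing other work, and collect the result later, much like firing an Ajax request and continuing to use the page.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static int run() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(() -> 21 * 2); // runs asynchronously
        // ... the caller could do other work here ...
        int result = future.get();   // blocks only when the value is needed
        pool.shutdown();
        return result;
    }
}
```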

             The producer-consumer pattern

                 Several producer threads and several consumer threads
                 communicate with each other through a shared memory buffer.

                MQ
                
             The multi-task execution framework Executors
                 For better control of multithreading, the JDK provides the
                 Executors thread framework in the java.util.concurrent
                 package. Executors plays the role of a thread factory: it lets
                 you create thread pools for specific purposes.

                 Methods
                    newFixedThreadPool()
                     Returns a thread pool with a fixed number of threads; the
                     thread count never changes. When a task is submitted, an
                     idle thread executes it immediately; if there is none, the
                     task waits in a task queue until a thread becomes free.

                     It uses the unbounded queue LinkedBlockingQueue, which
                     carries some risk under heavy load: with too many tasks,
                     the queue keeps growing and can easily cause a memory
                     overflow.

                    newSingleThreadExecutor()
                     Creates a single-thread pool: if the thread is idle the
                     task executes; otherwise the task waits in the task queue.

                    newCachedThreadPool()
                     Returns a thread pool that adjusts the number of threads
                     to the actual load, with no limit on the maximum number of
                     threads. If there is a task, a thread is created; if not,
                     no thread is created, and an idle thread is reclaimed
                     automatically after 60 seconds.

                     The pool starts with no threads and uses a
                     SynchronousQueue: each incoming task gets a thread created
                     to execute it. Threads have a 60-second idle timeout.

                    newScheduledThreadPool()
                     Returns a ScheduledExecutorService object; the pool size
                     can be specified. It is similar to Java's Timer, but each
                     thread can act as a timer. It uses a DelayedWorkQueue,
                     which releases tasks as their delay expires.
                    
                    The Spring family: what do you know?

                    JPA - Spring Data - JdbcTemplate
                    Spring MVC
                    Spring Batch
                    Spring Security - Shiro
                    Spring Boot
                    Spring Cloud - distributed SOA services
                    Spring Cache - Redis, MongoDB
                    JMS - ActiveMQ, RabbitMQ
                    Spring Mail
                    
                     All of the thread-pool factory methods above ultimately
                     call:
                    return new ThreadPoolExecutor(...)
                     They merely pass different parameters to create different
                     kinds of thread pools:

                    public ThreadPoolExecutor(
                        int corePoolSize,
                        int maximumPoolSize,
                        long keepAliveTime,
                        TimeUnit unit,
                        BlockingQueue<Runnable> workQueue,
                        ThreadFactory threadFactory,
                        RejectedExecutionHandler handler // rejection policy
                    )

                    ExecutorService pool = Executors.newSingleThreadExecutor();
                    pool.execute(task);
                    
                    
                    
                     Custom thread pools
                     With a bounded queue, based on ThreadPoolExecutor: when a
                     new task needs to run, if the actual number of threads in
                     the pool is below corePoolSize, a thread is created first.
                     If corePoolSize is exceeded, the task is placed in the
                     queue. If the queue is full, then, provided the total
                     number of threads does not exceed maximumPoolSize, a new
                     thread is created; if the number of threads would exceed
                     maximumPoolSize, the rejection policy (or other custom
                     handling) is executed.

                     With an unbounded task queue (LinkedBlockingQueue),
                     unlike a bounded queue, enqueueing never fails unless the
                     system runs out of resources. When a new task arrives and
                     the number of threads is below corePoolSize, a new thread
                     is created to execute it; once corePoolSize is reached,
                     the count no longer grows. If new tasks keep arriving and
                     no thread is idle, the tasks go straight into the queue to
                     wait. If tasks are created much faster than they are
                     processed, the unbounded queue keeps growing rapidly until
                     system memory is exhausted.
                    
                    JDK rejection policies
                    AbortPolicy: throws an exception directly, preventing the
                     system from continuing normally
                    CallerRunsPolicy: as long as the thread pool is not shut
                     down, runs the rejected task directly in the caller's
                     thread
                    DiscardOldestPolicy: discards the oldest queued request and
                     tries to submit the current task again
                    DiscardPolicy: silently discards the task that cannot be
                     handled

                     If a custom rejection policy is needed, implement the
                     RejectedExecutionHandler interface.
                    
                    Executors: the JDK multi-task execution framework

Copyright notice
This article was written by [Glory eggplant]; please include the original link when reposting. Thank you.
https://yzsam.com/2022/02/202202220531569067.html