
An Analysis of Mobile Rendering Principles

2022-06-10 20:24:00 ByteDance Technical Team


Rendering on a computer or mobile phone is a very complex process. This article introduces some rendering fundamentals, then explains the principles of mobile rendering in the context of the iOS and Android technology stacks, and finally gives a detailed analysis of off-screen rendering and rounded-corner optimization on iOS.

7b20eb90ff32aa37e1d26bb754f85d00.png

Rendering Basics

Raw data source for screen drawing

Bitmap

66dd565c054fa2449f1744f80b9ce1f5.png

The raw data needed to draw an image on the screen is called a bitmap. A bitmap (Bitmap) is a data structure composed of n*m pixels, with the color of each pixel represented by an RGB combination or a grayscale value. By bit depth, bitmaps can be divided into 1-, 4-, 8-, 16-, 24-, and 32-bit images. The more bits of information each pixel uses, the more colors are available, the richer and more realistic the color reproduction, and the larger the corresponding amount of data.
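As a rough back-of-the-envelope illustration (a minimal sketch, not from the original article; the helper name is made up): a 32-bit RGBA bitmap uses 4 bytes per pixel, so its memory footprint is roughly width × height × 4 measured in physical pixels.

import UIKit

// Hypothetical helper: estimates the memory footprint of an uncompressed
// 32-bit RGBA bitmap backing a layer, assuming 4 bytes per pixel.
func estimatedBitmapBytes(for size: CGSize, scale: CGFloat) -> Int {
    let widthInPixels = Int(size.width * scale)
    let heightInPixels = Int(size.height * scale)
    return widthInPixels * heightInPixels * 4   // RGBA, 8 bits per channel
}

// A full-screen 390 x 844 pt layer on a 3x device needs roughly 11 MB.
let bytes = estimatedBitmapBytes(for: CGSize(width: 390, height: 844), scale: 3)
print("\(bytes) bytes ≈ \(bytes / 1_024 / 1_024) MB")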

Physical and logical pixels

84cd4846155da1938a1c78ce37c47358.png

Bitmaps generally store physical pixels, while the application layer generally works in logical pixels; there is a fixed correspondence between the two. For example, on iOS the relationship between logical points and physical pixels is as follows (a small conversion sketch follows the list):

  1. On a 1x screen, 1 pt corresponds to 1 physical pixel

  2. On a 2x screen, 1 pt corresponds to 2 physical pixels (per dimension)

  3. On a 3x screen, 1 pt corresponds to 3 physical pixels (per dimension)
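A minimal sketch of that conversion (assuming UIKit and the main screen's scale factor; illustrative only):

import UIKit

// Converts a logical size in points to physical pixels using the screen's
// scale factor (1.0, 2.0, or 3.0 on current iOS devices).
func pixelSize(forPointSize size: CGSize, screen: UIScreen = .main) -> CGSize {
    let scale = screen.scale
    return CGSize(width: size.width * scale, height: size.height * scale)
}

// On a 3x device, a 100 x 100 pt view is backed by a 300 x 300 px bitmap.
print(pixelSize(forPointSize: CGSize(width: 100, height: 100)))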

Drawing the bitmap to the display

As mentioned above, the raw data needed to draw an image on the screen is called a bitmap. The next question is: given the bitmap data, how is the image actually drawn onto the screen? As shown in the figure below, the electron gun scans line by line from top to bottom; when the scan is complete, the display presents one frame, and the electron gun returns to its initial position for the next scan. To keep the display in sync with the video controller, the display uses a hardware clock to generate a series of timing signals. When the electron gun starts scanning a new line, the display emits a horizontal synchronization signal; when a frame has been drawn and the electron gun has returned to its starting position, just before the next frame is drawn, the display emits a vertical synchronization signal. The display usually refreshes at a fixed rate, and this refresh rate is the frequency of the vertical synchronization signal.

5299eb859486f4d6be0dc6399e029abc.png

How the CPU, GPU, and display work together

The previous part described how the video controller displays bitmap data on the physical screen. So how is the bitmap data obtained? It is produced by the CPU and GPU working together. The figure below shows a typical CPU / GPU / display workflow: the CPU computes the content to display and submits it to the GPU; the GPU renders it and stores the result in the frame buffer; the pixel information then has to be shown on the physical screen, so the video controller (Video Controller) reads the information in the frame buffer and transmits it to the display (Monitor). The complete process is shown in the figure below:

56312b0a6becf7df10aebf60100b0148.png

The difference between the CPU and the GPU

8d5f602cd1a4ff8b07d1265ddb35825d.png

When talking about how the CPU, GPU, and display work together, the difference between the CPU and the GPU has to be mentioned.

The CPU is the central processing unit, suited to a small amount of complex logic; the GPU is the graphics processing unit, suited to highly concurrent, simple computations.

The GPU has a large number of compute units and very deep pipelines, but its control logic is simple and it largely dispenses with caches, which makes it suitable for high-throughput work that is not latency-sensitive. The CPU, by contrast, not only spends a lot of die area on caches but also has particularly complex control logic; its raw compute capability is only a small part of the chip. Graphics rendering involves a great deal of matrix arithmetic, and matrix operations can be broken into simple operations that run in parallel, so rendering is especially well suited to the GPU.

In conclusion: the GPU does a huge amount of computation, but each piece is simple and has to be repeated many times, like a job that requires doing hundreds of additions, subtractions, multiplications, and divisions. The CPU is more like an old professor who can work out integrals and derivatives, and is suited to a small amount of complex logical computation.
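A tiny illustration of why matrix work parallelizes so well (a sketch using Apple's simd module; not code from the original article): each component of a matrix-vector product is an independent dot product, and the GPU runs this kind of arithmetic on thousands of vertices at once.

import simd

// A 4x4 transform applied to one vertex. Each output component is an
// independent dot product -- exactly the kind of work a GPU performs
// in parallel across huge numbers of vertices.
var translation = matrix_identity_float4x4
translation.columns.3 = SIMD4<Float>(10, 20, 0, 1)   // translate by (10, 20, 0)

let vertex = SIMD4<Float>(1, 2, 3, 1)
let transformed = translation * vertex                // (11.0, 22.0, 3.0, 1.0)
print(transformed)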

General-purpose rendering pipeline

The whole process of rendering an image is usually called the rendering pipeline, and it is completed by the CPU and GPU working together. A rendering pass can generally be divided into four conceptual stages: the application stage (Application Stage), the geometry stage (Geometry Stage), the rasterization stage (Rasterizer Stage), and the pixel processing stage (Pixel Processing). Real-Time Rendering, 4th Edition explains the details of real-time rendering thoroughly; readers interested in rendering theory can refer to this book, often called the "bible of real-time rendering". The stages are briefly introduced below.

6c2e5bb17dd7135d5bb3a17ef5dc0ed0.png

Application stage (Application Stage)

In short, this is the stage where the image is processed inside the application. It is simply a program running on the CPU, with no GPU involvement yet. In this stage the CPU is responsible for handling user interaction, doing application-level layout work, and finally outputting primitive information (points, lines, and triangles) to the next stage.

You may wonder whether primitives as simple as points, lines, and triangles can express rich three-dimensional shapes. The dolphin below, with its strong sense of depth, gives a positive answer: simple triangles with different shading can present a three-dimensional figure.

515f52f449624d22306ef6c62cea91de.png

Geometric stage (Geometry Stage)

1. Vertex shader (Vertex Shader)

A vertex shader performs basic processing on vertex attributes, such as converting vertex positions into perspective, adding lighting information, and adding texture coordinates. The information the CPU hands to the GPU is like everything visible from a god's-eye view; the GPU then converts it to the human (camera) point of view, the picture a person would actually observe, which is what is output to the display. So this step performs coordinate transformations centered on the viewer's perspective.

2. Shape assembly (Shape Assembly). This stage takes all the vertices output by the vertex shader as input and assembles them into primitives of the specified type. Primitives (Primitive) include points, lines, and triangles. This stage is also called primitive assembly (Primitive Assembly).

3. Geometry shader (Geometry Shader). This stage can add extra vertices beyond the original ones and convert the original primitives into new primitives, in order to build more complex models.

Rasterization stage (Rasterizer Stage)

The rasterization stage converts the primitives (primitives) produced by the three geometry sub-stages above into a series of pixels.

ed6646635ee6a8764b3be8dc89d147a2.png

As shown in the figure above, there is a sample point at the center of each pixel, and rasterization decides coverage by this center point: if the center point lies inside a primitive, the pixel that owns that center point belongs to the primitive. In short, this stage converts continuous geometry into discrete pixels.
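A minimal sketch of this center-point coverage test for a single triangle (plain Swift with edge functions; the function names are illustrative, not from the article):

import CoreGraphics

// Signed area spanned by (b - a) and (p - a); its sign tells on which
// side of the edge a -> b the point p lies.
func edge(_ a: CGPoint, _ b: CGPoint, _ p: CGPoint) -> CGFloat {
    (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x)
}

// A pixel is covered if its center lies inside the triangle, i.e. it is on
// the same side of all three edges (counter-clockwise triangles assumed).
func pixelCovered(center p: CGPoint, by a: CGPoint, _ b: CGPoint, _ c: CGPoint) -> Bool {
    edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0
}

let covered = pixelCovered(center: CGPoint(x: 2.5, y: 2.5),
                           by: CGPoint(x: 0, y: 0),
                           CGPoint(x: 8, y: 0),
                           CGPoint(x: 0, y: 8))
print(covered)   // true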

Pixel processing stage (Pixel Processing)

1. Fragment Shader (Fragment Shader)

After the rasterization stage above, we know which pixels each primitive covers; what remains is to fill every pixel with the correct color, compute the final image information through a series of processing steps, and output it to the display. Interpolation happens here, much like inbetweening in animation. For example, to connect a series of scattered points into a smooth curve, many points between adjacent known points may be missing, and they have to be filled in by interpolation: every point on the final smooth curve other than the known points is interpolated. Likewise, once the color values of a triangle's three vertices are given, the colors of all the other fragments are computed by interpolation, which is what produces a gradient effect.
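A small sketch of this kind of interpolation (illustrative names, not the article's code): given the colors at a triangle's three vertices, a fragment's color is a weighted average of the three, with the weights given by barycentric coordinates.

import CoreGraphics

struct Color { var r, g, b: CGFloat }

// Barycentric weights of point p with respect to triangle (a, b, c).
func barycentric(_ p: CGPoint, _ a: CGPoint, _ b: CGPoint, _ c: CGPoint) -> (CGFloat, CGFloat, CGFloat) {
    let area = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x)
    let wA = ((b.x - p.x) * (c.y - p.y) - (b.y - p.y) * (c.x - p.x)) / area
    let wB = ((c.x - p.x) * (a.y - p.y) - (c.y - p.y) * (a.x - p.x)) / area
    return (wA, wB, 1 - wA - wB)
}

// Interpolates the three vertex colors at fragment position p -> a gradient.
func fragmentColor(at p: CGPoint, in a: CGPoint, _ b: CGPoint, _ c: CGPoint,
                   colors ca: Color, _ cb: Color, _ cc: Color) -> Color {
    let (wA, wB, wC) = barycentric(p, a, b, c)
    return Color(r: wA * ca.r + wB * cb.r + wC * cc.r,
                 g: wA * ca.g + wB * cb.g + wC * cc.g,
                 b: wA * ca.b + wB * cb.b + wC * cc.b)
}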

2. Testing and blending (Tests and Blending)

This stage checks each fragment's depth value (the z coordinate) to determine whether the pixel is in front of or behind pixels from other layers, and decides whether it should be discarded. In addition, this stage checks the alpha value (the alpha value defines a pixel's transparency) in order to blend the layers. (In one sentence: it checks layer depth and transparency, and blends the layers.)

R = S + D * (1 - Sa)

Meaning:
R: Result, the final pixel color.
S: Source, the source pixel (the pixel of the upper layer).
D: Destination, the destination pixel (the pixel of the lower layer).
Sa: the alpha (transparency) of the source pixel.

result = color of S (upper) + color of D (lower) * (1 - transparency of S (upper))

This is the standard "source over" blend, written for premultiplied alpha (each color channel has already been multiplied by its own alpha). A small code sketch of the blend follows below.

After this long pipeline we finally obtain the raw data source needed for screen rendering, the bitmap data, and the video controller then displays this bitmap data on the physical screen.
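A minimal sketch of that "source over" blend for a single premultiplied-alpha pixel (plain Swift, illustrative only):

// One RGBA pixel with premultiplied alpha: each color channel has already
// been multiplied by the pixel's own alpha.
struct Pixel {
    var r, g, b, a: Double
}

// R = S + D * (1 - Sa): the "source over" compositing operator.
func over(source s: Pixel, destination d: Pixel) -> Pixel {
    let k = 1 - s.a
    return Pixel(r: s.r + d.r * k,
                 g: s.g + d.g * k,
                 b: s.b + d.b * k,
                 a: s.a + d.a * k)
}

// A 50%-opaque red layer composited over an opaque white background.
let red = Pixel(r: 0.5, g: 0.0, b: 0.0, a: 0.5)   // premultiplied
let white = Pixel(r: 1.0, g: 1.0, b: 1.0, a: 1.0)
print(over(source: red, destination: white))       // r 1.0, g 0.5, b 0.5, a 1.0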

iOS Rendering principle

Rendering technology stack

Having covered the rendering basics above, this section introduces the principles and concepts related to rendering on iOS. The figure below shows the iOS graphics rendering technology stack. There are three related core system frameworks: Core Graphics, Core Animation, and Core Image, which are mainly used to draw visual content. They all call the GPU through OpenGL ES or Metal to do the actual rendering, then generate the final bitmap data and store it in the frame buffer, and the video controller displays the frame buffer's data on the physical screen.

f5f6456d8a50e79ded74021766e12fc1.png

UIKit

UIKit is the framework iOS developers use most often; interfaces can be drawn by setting the layout and related attributes of UIKit components. However, UIKit itself does not have the ability to draw to the screen. The framework is mainly responsible for responding to user interaction events (UIView inherits from UIResponder), and events are passed along the responder chain.

Core Animation

Core Animation is mainly responsible for compositing the different pieces of visual content on the screen. These pieces of content are decomposed into independent layers, namely the CALayer objects we routinely work with in daily development, and the layers are stored in a layer tree. CALayer is mainly responsible for page rendering and is the foundation of everything the user can see on the screen.

Core Graphics

Core Graphics is mainly used for drawing at run time. Developers can use this framework to handle path-based drawing, transformations, color management, off-screen rendering, patterns, gradients, shadows, and so on.
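A minimal sketch of run-time drawing with Core Graphics, via UIGraphicsImageRenderer (an illustrative example, not the article's code):

import UIKit

// Draws a 100 x 100 pt image at run time: a filled rounded rectangle
// with a stroked circle on top, produced entirely on the CPU.
func drawBadge() -> UIImage {
    let size = CGSize(width: 100, height: 100)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        let cg = context.cgContext
        cg.setFillColor(UIColor.systemBlue.cgColor)
        UIBezierPath(roundedRect: CGRect(origin: .zero, size: size),
                     cornerRadius: 16).fill()
        cg.setStrokeColor(UIColor.white.cgColor)
        cg.setLineWidth(4)
        cg.strokeEllipse(in: CGRect(x: 30, y: 30, width: 40, height: 40))
    }
}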

Core Image

Core Image is, in this respect, the opposite of Core Graphics: Core Graphics creates images at run time, while Core Image processes images that were created before run time (for example, applying filters to existing pictures).
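A minimal sketch of processing an existing image with Core Image (a Gaussian blur; illustrative, not from the article):

import UIKit
import CoreImage
import CoreImage.CIFilterBuiltins

// Applies a Gaussian blur to an existing UIImage using Core Image.
func blurred(_ image: UIImage, radius: Double = 8) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }
    let filter = CIFilter.gaussianBlur()
    filter.inputImage = input
    filter.radius = Float(radius)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()   // GPU-backed by default
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}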

OpenGL ES and Metal

OpenGL ES and Metal are standard graphics interfaces, and the concrete implementation behind each interface is provided by the corresponding GPU vendor. OpenGL ES is a cross-platform third-party standard, while Metal is Apple's own set of interfaces, implemented by Apple itself. Many developers have never used Metal directly, but they use it indirectly through core system frameworks such as Core Animation and Core Image.

The relationship between Core Animation and UIKit

As mentioned in the rendering stack above, Core Animation is the basic framework for graphics rendering and animation on iOS and OS X, and it is mainly used to animate views and other visual elements of an application. Core Animation's approach is to hand most of the actual drawing work over to the GPU for accelerated rendering, which keeps the load off the CPU while still achieving smooth animation. The core class of Core Animation is CALayer, and the core class of UIKit is UIView; the relationship between these two classes is described in detail below.

The relationship between UIView and CALayer

b4ae1f85a84815a8266fe62ec2b005fd.png

As shown in the figure above, UIView and CALayer have a one-to-one relationship: every UIView has a corresponding CALayer; one is responsible for layout and interaction, the other for page rendering.

Their core relationship is as follows:

  1. CALayer is a property of UIView; it is responsible for rendering and animation and provides the presentation of visual content.

  2. UIView provides a wrapper around CALayer's functionality and is responsible for handling interaction events.

To give a more vivid example: UIView is the drawing board and CALayer is the canvas. When you create a drawing board, a canvas is automatically bound to it. The board responds to your actions, for example you can move the board around, while the canvas renders the actual graphics. Their responsibilities are clear: one handles interaction, the other handles rendering.
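A small sketch of this one-to-one relationship in code (the class name is made up; a sketch only): every UIView exposes its backing layer, and a view can even swap in a different CALayer subclass by overriding layerClass.

import UIKit

final class GradientView: UIView {
    // Back this view with a CAGradientLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CAGradientLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        // The view handles layout and touches; the layer decides what is drawn.
        let gradient = layer as! CAGradientLayer
        gradient.colors = [UIColor.systemBlue.cgColor, UIColor.systemTeal.cgColor]
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Interaction stays on the UIView side (UIView inherits from UIResponder).
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("the view handled a touch; its layer keeps rendering the gradient")
    }
}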

Why should we separate CALayer and UIView?

User interaction on the iOS platform is fundamentally different from user interaction on macOS, but the rendering logic is generic: on iOS we use UIKit and UIView, while on macOS we use AppKit and NSView. Separating out the presentation logic allows it to be reused across both platforms.

CALayer's contents property holds the bitmap rendered by the device's rendering pipeline (usually called the backing store), which is exactly the raw data source needed for screen drawing. When the device screen refreshes, the generated bitmap is read from the CALayer and displayed on the screen.

@interface CALayer : NSObject <NSSecureCoding, CAMediaTiming>
/** Layer content properties and methods. **/

/* An object providing the contents of the layer, typically a CGImageRef,
 * but may be something else. (For example, NSImage objects are
 * supported on Mac OS X 10.6 and later.) Default value is nil.
 * Animatable. */

@property(nullable, strong) id contents;

@end
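A minimal sketch of handing a layer its bitmap directly through contents (setting a CGImage is the typical case noted in the header above; the asset name here is a placeholder):

import UIKit

// Give a layer its backing bitmap directly instead of letting UIKit draw it.
let layer = CALayer()
layer.frame = CGRect(x: 0, y: 0, width: 120, height: 120)
layer.contentsScale = UIScreen.main.scale        // match the logical-to-physical scale
if let image = UIImage(named: "avatar") {        // "avatar" is a placeholder asset name
    layer.contents = image.cgImage               // contents expects a CGImage
    layer.contentsGravity = .resizeAspectFill
}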

The Core Animation pipeline

As early as the WWDC session Advanced Graphics and Animations for iOS Apps (WWDC14 session 419, a session about the fundamentals of UIKit and Core Animation), Apple described the rendering pipeline of the Core Animation framework. The specific process is shown in the figure below:

7cd59c6cca9ead35817f25cbc242ce4a.png

In this pipeline the app itself is not responsible for rendering; rendering is done by a separate process, the Render Server process. The stages of the pipeline are described below.

4b5b47d9520bba96ccaa7798296d4db3.png
  • Application stage

    • View creation

    • Layout calculation

    • The layers are packaged and sent to the Render Server on the next RunLoop

    • The app handles user input; in the process it may need to update the view tree, and when the view tree is updated the layer tree is updated with it

    • The app then uses the CPU to finish computing the content to be displayed

  • Render Server & GPU

    • This stage mainly executes Metal, Core Graphics, and other drawing-related work, and calls the GPU to finish rendering the image

    • The GPU stores the rendered bitmap data in the Frame Buffer

  • Display

    • The video controller displays the frame buffer's bitmap data on the physical screen, frame by frame

If you run the steps above one after another, their total execution time exceeds 16.67 ms. So to sustain a 60 FPS refresh rate, these steps are executed in parallel in a pipelined fashion, as shown in the figure below: each stage continuously delivers its output to the next stage. This is how the requirement of producing one frame of data every 16.67 ms can be met.

3e2b4e924a68b605d7f7ddcbf9a66449.png
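A minimal sketch for observing that per-frame budget from the app side (CADisplayLink fires in step with the display's refresh; illustrative code, not from the article):

import UIKit

final class FramePacingObserver {
    private var link: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        self.link = link
    }

    @objc private func tick(_ link: CADisplayLink) {
        if lastTimestamp > 0 {
            let interval = link.timestamp - lastTimestamp
            // On a 60 Hz panel a healthy frame arrives roughly every 16.67 ms;
            // noticeably larger intervals mean frames were dropped.
            if interval > (1.0 / 60.0) * 1.5 {
                print(String(format: "possible dropped frame: %.2f ms", interval * 1000))
            }
        }
        lastTimestamp = link.timestamp
    }

    func stop() { link?.invalidate() }
}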

Android rendering principle

The Android upper-layer display system

In Android, one of Activity's important responsibilities is managing the interface lifecycle, which brings with it the management of view windows. This involves two major Android services: AMS (ActivityManagerService) and WMS (WindowManagerService).

In Android, each view corresponds to a canvas. The view tree corresponds to a canvas tree, and SurfaceFlinger controls the composition of multiple canvases. The final composited bitmap data is output and displayed on the phone screen.

d6751256270299b7218326dfb155af4e.png
4ade24e8ed771ff6d1ff91865278e365.png

Application layer layout

View and ViewGroup

View is the base class of all Android controls. View has a very important subclass, ViewGroup, which serves as a container for other views. All Android UI components are built on top of View and ViewGroup, following a "composite" design idea: since ViewGroup is a subclass of View, a ViewGroup can itself also be used as a View. An Android app's graphical user interface corresponds to a view tree, and the view tree corresponds to a canvas tree. This is somewhat like UIView and CALayer on iOS: one is responsible for application-level layout, the other for the underlying rendering.

9cb462b3065cab28dd7484a37ceb819c.png

Rendering and display at the system level

The application layer's views correspond to canvases, and on the system-process side a canvas becomes a layer. SurfaceFlinger mainly provides rendering and composition services for layers. SurfaceFlinger is a resident binder service that starts along with the init process. The figure below describes in detail how upper-layer views are transformed into lower-level layers, and how SurfaceFlinger composites multiple layers.

7228e0ccff064b8e73130f1ee8abbfa1.png

iOS off-screen rendering

Principle and definition of off-screen rendering

First, the principle of off-screen rendering. The normal rendering flow is: the CPU and GPU cooperate, continuously putting the bitmap data produced by rendering into the framebuffer (frame buffer), while the video controller continuously fetches content from the framebuffer and displays it in real time.

09952b715a88f457a57888b9fa6818e8.png

The off-screen rendering flow looks like this:

ae2da6274b985c953ff7598dd64cd348.png

Unlike the usual case, where the GPU puts the rendered content directly into the framebuffer, off-screen rendering requires creating an additional off-screen rendering buffer, putting the pre-rendered content there first, and then, when the time is right, further compositing and rendering the contents of the Offscreen Buffer and writing the result into the framebuffer.

Why store the data in an off-screen rendering buffer first? There are two kinds of reasons, one passive and one active:

  1. Some special effects need an extra Offscreen Buffer to hold intermediate rendering state (passive)

  2. For efficiency, content can be rendered ahead of time and saved in an Offscreen Buffer so it can be reused (active)

Passive off-screen rendering

Common scenarios that trigger passive off-screen rendering

Transparency, shadows, and rounded corners are often called the "three treasures" of UI, but on iOS these effects frequently cause passive off-screen rendering in day-to-day development. Below are some common scenarios that trigger passive off-screen rendering.

3a36cc01c04650b60661f2d557ba30df.png

Why off-screen rendering is triggered

02fb9be60d5ee7c6b2d0a2e791c529bb.png

When talking about off-screen rendering, the painter's algorithm has to be mentioned. Its overall idea is to draw layer by layer: first draw the scenery that is far away, then cover the far parts with scenery drawn closer up. A "layer" here maps onto a layer in the iOS rendering technology stack.

295fc7294faf83b650ff57fc0e23f48c.png

Normally, for each layer, the Render Server follows the painter's algorithm and outputs to the frame buffer in order, with later layers covering earlier ones, to obtain the final displayed result; for the layer tree this means outputting the layers to the frame buffer in depth-first order.

Although the GPU, acting as the "painter", can output to the canvas layer by layer, it cannot go back and modify part of a layer once that layer has been rendered, because the pixel data of the earlier layers has already been composited together during rendering. This is very similar to merging layers in Photoshop: once multiple layers are merged, you can no longer modify any single layer on its own. So the group of layers has to be drawn one by one into an off-screen buffer first, the four corners are then clipped, and the result is blended with the layers rendered earlier.

a27e68a518c4c82a91daf1ddcb20e653.png
15625306c484a13c82f5918df3394e17.png

The performance impact of off-screen rendering on the GPU

At the mention of off-screen rendering, our intuition is that it hurts performance. To sustain a 60 fps refresh rate, the GPU's operations are highly pipelined. Normally all of the computation is streaming its output toward the frame buffer; when some special effect suddenly triggers off-screen rendering, the GPU has to switch contexts and output data to another piece of memory, and many of the intermediate results in the pipeline can only be discarded. This kind of frequent context switching has a very large impact on GPU rendering performance.

How can unnecessary off-screen rendering be avoided?

  • For some rounded corners, you can create four arc-shaped layers in the background color to cover the four corners, visually creating the rounded-corner effect

  • For a view that needs only a rounded border and has no backgroundColor, it is safe to use cornerRadius on its own

  • For shadows, use shadowPath to avoid off-screen rendering (see the sketch after this list)

  • For specially shaped views, use a layer mask and enable shouldRasterize to cache the rendering result
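A minimal sketch of the shadowPath point above (illustrative values): giving Core Animation an explicit path means it no longer has to derive the shadow's shape from the layer's alpha channel, which is what would otherwise force an off-screen pass.

import UIKit

let card = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 120))
card.backgroundColor = .white
card.layer.cornerRadius = 12

// Shadow configuration.
card.layer.shadowColor = UIColor.black.cgColor
card.layer.shadowOpacity = 0.2
card.layer.shadowOffset = CGSize(width: 0, height: 4)
card.layer.shadowRadius = 8

// Tell Core Animation the shadow's exact shape up front, so it does not
// have to compute it from the layer's contents in an off-screen pass.
card.layer.shadowPath = UIBezierPath(roundedRect: card.bounds,
                                     cornerRadius: 12).cgPath

Note that the shadowPath has to be updated whenever the view's bounds change (for example in layoutSubviews).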

Optimization strategies for rounded-corner implementations

Using CALayer's cornerRadius together with clipsToBounds (masksToBounds) triggers off-screen rendering (offscreen rendering). While scrolling, the clipping has to be performed on each of the 60 frames per second, even if the content does not change, and the GPU has to switch contexts between frames to composite the whole frame and then clip it. These performance costs fall directly on the Render Server, the independent rendering process, and can cause dropped frames. To improve rendering performance, we can choose other ways to implement rounded corners. The following are the conditions to consider when choosing a concrete rounded-corner implementation.

Conditions to consider when implementing rounded corners

  1. Whether there is movement underneath the corner (movement underneath the corner).

  2. Whether there is movement through the corner (movement through the corner).

  3. Whether the four rounded corners are on the same layer, and whether they intersect any other sublayers.

19c47721b7f719b094e2c76b4af411fe.png

Concrete rounded-corner implementations

2c4df8e6b35b71a21569da94c08dd5bd.png

How to choose a rounded-corner implementation based on these conditions

The conditions to consider and the different rounded-corner implementations were listed above; the flow chart below maps conditions to implementations and gives the best rounded-corner choice for each case (a code sketch of one common scheme follows the chart).

62b05636c8acef97d40494a2092c202a.png
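As one example of such a scheme, here is a minimal sketch of pre-rendering rounded corners on the CPU with Core Graphics (the helper name is made up): the image is clipped once when it is produced, so no per-frame masking or off-screen pass is needed while scrolling.

import UIKit

// Produces a copy of the image whose corners are already rounded, so the
// displaying layer needs neither cornerRadius nor masksToBounds.
func roundedImage(_ image: UIImage, size: CGSize, cornerRadius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: size)
        UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius).addClip()
        image.draw(in: rect)
    }
}

// Usage: clip once (ideally off the main thread for large images),
// then display the result in a plain UIImageView.
// imageView.image = roundedImage(avatar, size: imageView.bounds.size, cornerRadius: 8)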

Summary

This article mainly introduced the principles behind mobile rendering. It began with the basics: the raw data source required for rendering, the bitmap, and how the CPU and GPU work together to produce bitmap data. It then introduced the relevant principles of mobile rendering in the context of the iOS and Android technology stacks. Finally, it analyzed off-screen rendering on iOS in depth and explained several existing approaches to rounded-corner optimization.

Reference articles

  1. iOS image rendering principles: http://chuquan.me/2018/09/25/ios-graphics-render-principle/

  2. A full analysis of iOS rendering: https://juejin.cn/post/6844904162765832206

  3. The iOS rendering process: https://www.jianshu.com/p/464c08d87f75

  4. On layout-algorithm performance, starting from Auto Layout: https://draveness.me/layout-performance/

  5. How does Auto Layout perform automatic layout, and how does it perform?: https://juejin.cn/post/6844904055790108680

  6. An analysis of the iOS interface rendering process: https://www.jianshu.com/p/39b91ecaaac8

  7. iOS: a brief look at the GPU and the app rendering process: https://juejin.cn/post/6844904106419552269

  8. What is the difference between a CPU and a GPU?: https://www.zhihu.com/question/19903344

  9. iOS advanced: layers and rendering: https://bytedance.feishu.cn/wiki/wikcnWq4HdGQygFEolgKgAVv9Oh

  10. One article to understand what a rendering pipeline is: https://segmentfault.com/a/1190000020767062

  11. GPU Rendering Pipeline: an introduction to the GPU rendering pipeline: https://zhuanlan.zhihu.com/p/61949898

  12. An in-depth study of off-screen rendering on iOS: https://zhuanlan.zhihu.com/p/72653360

  13. Texture: corner rounding: https://texturegroup.org/docs/corner-rounding.html

  14. Android rendering frameworks and the principle of Android layer rendering: https://juejin.cn/post/7021840737431978020
