Comparison between WebGL and WebGPU [3] - vertex buffer

2022-07-05 23:18:00 Song in four seasons


1. VBO in WebGL

1.1. Creating a WebGLBuffer

WebGL uses TypedArrays to transfer data, and WebGPU does the same.

The following code shows the conventional WebGL 1.0 process of creating, filling and configuring a VertexBuffer.

const positions = [
  0, 0,
  0, 0.5,
  0.7, 0,
]

/*
  ... create and link the shader program `program` ...
*/

// get the location of the vertex attribute in the shader
const positionAttributeLocation = gl.getAttribLocation(program, "a_position")

//#region create a WebGLBuffer, bind it, and write the data immediately
const positionBuffer = gl.createBuffer()
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer)
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW)
//#endregion

//#region enable the attribute, bind the buffer again, and tell WebGL how to read the VertexBuffer
gl.enableVertexAttribArray(positionAttributeLocation)
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer)
// 2 floats (x, y) per vertex, not normalized, tightly packed, starting at offset 0
const size = 2, type = gl.FLOAT, normalize = false, stride = 0, offset = 0
gl.vertexAttribPointer(positionAttributeLocation, size, type, normalize, stride, offset)
//#endregion

Through the gl object's createBuffer, bindBuffer and bufferData methods, WebGL creates a buffer, binds it as the buffer currently in use while declaring its purpose, and uploads the CPU-side TypedArray data together with a usage hint. The gl object's enableVertexAttribArray and vertexAttribPointer methods then enable the attribute slot and tell the shader how to read vertex data out of the VertexBuffer.
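
For context, here is a minimal hedged sketch of the draw call that consumes this configuration, assuming the viewport and the `program` created in the omitted setup above:

// issue a draw that reads 3 vertices from the configured attribute
gl.useProgram(program)
gl.drawArrays(gl.TRIANGLES, 0, 3) // primitive type, first vertex, vertex count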

1.2. Vertex shader

A very simple vertex shader:

precision mediump float;
attribute vec2 a_position;

void main() {
  gl_Position = vec4(a_position, 0.0, 1.0);
}

If you use a newer version of the syntax (for example, GLSL ES 3.00 in WebGL 2.0), you could write it like this:

#version 300 es
precision mediump float;
layout(location = 0) in vec2 a_position;

void main() {
  gl_Position = vec4(a_position, 0.0, 1.0);
}

2. WebGPU

2.1. Creating a GPUBuffer and transferring data

const verticesData = new Float32Array([
  // x, y coordinates    // RGBA color
  -0.5, 0.0,     1.0, 0.0, 0.0, 1.0, // ← vertex 1
   0.0, 0.5,     0.0, 1.0, 0.0, 1.0, // ← vertex 2
   0.5, 0.0,     0.0, 0.0, 1.0, 1.0  // ← vertex 3
])
const verticesBuffer = device.createBuffer({
  size: verticesData.byteLength,
  usage: GPUBufferUsage.VERTEX,
  mappedAtCreation: true // map immediately on creation so the CPU side can read and write the data
})

// have the GPUBuffer map a piece of CPU-side memory, i.e. an ArrayBuffer; the Float32Array view is still all zeros at this point
const verticesBufferArray = new Float32Array(verticesBuffer.getMappedRange())

// copy the data into this Float32Array
verticesBufferArray.set(verticesData)
// unmap the GPUBuffer so that the memory behind verticesBufferArray becomes accessible to the GPU
verticesBuffer.unmap()

To create a VertexBuffer in WebGPU, you call the device object's createBuffer method, which returns a GPUBuffer object; all you need to specify is the GPUBuffer's usage and its size. So how do you write into this buffer? That is where the concept of "mapping" comes in.

Mapping, simply put, gives either the CPU or the GPU exclusive access to the buffer. The createBuffer parameter mappedAtCreation used here means the buffer is mapped the moment it is created.

I have a dedicated article about buffer mapping and unmapping in WebGPU, so I will not go into much detail here.

In the code above, verticesBuffer.getMappedRange() returns an ArrayBuffer, which is then filled with data via the set operation. Once the data is written, unmap must be called to release the mapping so that the GPU can access the buffer afterwards.
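
As an aside, if a buffer is created without mappedAtCreation, the data can also be uploaded through the queue instead of being written via mapping. A minimal sketch, assuming the buffer is given the extra GPUBufferUsage.COPY_DST usage:

const buffer = device.createBuffer({
  size: verticesData.byteLength,
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST // COPY_DST is required for writeBuffer
})
// writeBuffer copies the TypedArray into the GPUBuffer without any explicit map/unmap
device.queue.writeBuffer(buffer, 0, verticesData)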

2.2. Passing the vertex buffer's layout to the vertex shader

The vertex shading stage is part of the render pipeline (GPURenderPipeline), and the pipeline needs to know the layout of the vertex buffer it will read.

When creating the render pipeline, its vertex state (which references the vertex GPUShaderModule) has a buffers attribute, an array describing the layout of the vertex data accessed by the vertex shader:

const renderPipeline = device.createRenderPipeline({
  // ...
  vertex: {
    module: vsShaderModule,
    entryPoint: 'main',
    buffers: [
      {
        // 2 floats for xy + 4 floats for rgba per vertex
        arrayStride: 6 * verticesData.BYTES_PER_ELEMENT,
        attributes: [
          {
            // 2 float32s: the xy coordinate
            shaderLocation: 0,
            offset: 0,
            format: 'float32x2'
          }, {
            // 4 float32s: the rgba color value
            shaderLocation: 1,
            offset: 2 * verticesData.BYTES_PER_ELEMENT,
            format: 'float32x4'
          }
        ]
      }
    ]
  }
})

For details, see the requirements of the device object's createRenderPipeline method (in particular the GPUVertexState and GPUVertexBufferLayout types) in the official API documentation.

2.3. Setting the vertex buffer in the render pass

A render pass encoder (GPURenderPassEncoder) encodes the whole process of a single render pass, and one of its steps is setting the vertex buffer for this pass. This part is simpler:

// ...
renderPassEncoder.setVertexBuffer(0, verticesBuffer)
// ...
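
To show where this call sits, here is a minimal hedged sketch of encoding one full pass; context and commandEncoder are assumed names from typical setup code, while renderPipeline and verticesBuffer come from the earlier snippets:

const commandEncoder = device.createCommandEncoder()
const renderPassEncoder = commandEncoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: 'clear',
    storeOp: 'store'
  }]
})
renderPassEncoder.setPipeline(renderPipeline)
renderPassEncoder.setVertexBuffer(0, verticesBuffer) // slot 0 corresponds to buffers[0] in the pipeline
renderPassEncoder.draw(3) // 3 vertices
renderPassEncoder.end()
device.queue.submit([commandEncoder.finish()])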

2.4. Vertex shader

struct PositionColorInput {
  @location(0) in_position_2d: vec2<f32>,
  @location(1) in_color_rgba: vec4<f32>,
}

struct PositionColorOutput {
  @builtin(position) coords_output: vec4<f32>,
  @location(0) color_output: vec4<f32>,
}

@vertex
fn main(input: PositionColorInput) -> PositionColorOutput {
  var output: PositionColorOutput;
  output.color_output = input.in_color_rgba;
  output.coords_output = vec4<f32>(input.in_position_2d, 0.0, 1.0);
  return output;
}

WGSL shader code lets you choose the vertex shader's entry-function name, the structure of the parameter it receives, and the structure of what it outputs to the next stage (that is, its return value).

As you can see, to receive the vertex attributes passed in from the WebGPU API, the xy coordinate in_position_2d and the color value in_color_rgba in the custom struct PositionColorInput each need an attribute called location, and the value in its parentheses must match the corresponding shaderLocation in the vertex buffer layout passed to the pipeline.

As for the output, the code corresponds to the struct PositionColorOutput. The value consumed by the next stage (the fragment shading stage) uses the builtin named position, and there is also a custom vec4, color_output, which becomes the rasterization-interpolated color in the fragment shader. These two outputs play a role similar to varying (or out) in GLSL.
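
To make the hand-off concrete, here is a minimal sketch of a fragment shader module that receives this @location(0) output; the variable name fsShaderModule is an assumption:

const fsShaderModule = device.createShaderModule({
  code: /* wgsl */ `
    @fragment
    fn main(@location(0) color_output: vec4<f32>) -> @location(0) vec4<f32> {
      // the rasterizer-interpolated color from the vertex stage is returned as the pixel color
      return color_output;
    }
  `
})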

2.5. Allocation, transfer and destruction of buffer data in system memory and video memory

When a GPUBuffer is created without mappedAtCreation: true, neither system memory nor video memory has been allocated yet.

From my testing, once a mapping request is executed and the mapping succeeds, system memory usage grows by the size of the corresponding GPUBuffer; at that point the ArrayBuffer has been created and occupies space.

So when is video memory allocated? My guess is that it happens at device.queue.submit(), when the command buffer, carrying the various passes and buffers, is handed to the GPU and executed; I hope someone more knowledgeable can verify this guess.

As for destruction, I tested CPU memory usage with the destroy method and found that it was not reclaimed within two minutes, so the reclamation of the mapped ArrayBuffer remains to be tested.

3. Comparison

The gl.vertexAttribPointer() method plays the same role as the buffers attribute in device.createRenderPipeline(): it tells the pipeline how a single vertex is laid out inside the vertex buffer.

gl.createBuffer() and device.createBuffer() are similar: both create a buffer object on the CPU side without actually passing in any data.

Data transfer is not the same: WebGL can only operate on one bound VertexBuffer at a time, hence the gl.bindBuffer() / gl.bufferData() sequence of calls, whereas WebGPU goes through mapping and unmapping.

Most importantly, in WebGPU you must call renderPassEncoder.setVertexBuffer() to explicitly specify which VertexBuffer to use before the renderPassEncoder records the draw command.

As for the shader code, compare and study it yourself; the differences are mostly syntactic.

4. VertexArrayObject

I have also written a dedicated article about VAO, 《The VAO that Disappeared in WebGPU》, so I will not expand on it here; interested readers can find it in my blog list.

WebGPU no longer needs VAO, because its mechanism is different from WebGL's. VAO itself is a concept from the OpenGL family: it spares WebGL the cost of switching vertex-related state by caching the configuration of a VBO for you, so that you do not have to call gl.bindBuffer(), gl.bufferData(), gl.vertexAttribPointer() and so on all over again.
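
As a refresher, here is a minimal WebGL 2.0 sketch of what a VAO caches, reusing the buffer and attribute names from section 1:

const vao = gl.createVertexArray()
gl.bindVertexArray(vao)
// all of the following attribute state is recorded into the bound VAO
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer)
gl.enableVertexAttribArray(positionAttributeLocation)
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0)
gl.bindVertexArray(null)

// later, a single bind restores the whole configuration before drawing
gl.bindVertexArray(vao)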

WebGPU's assembly-style design naturally covers what a VAO does. The VAO's responsibilities have moved into GPURenderPipeline: its creation parameter GPURenderPipelineDescriptor.vertex.buffers is of type GPUVertexBufferLayout[], and each GPUVertexBufferLayout object takes over part of a VAO's functions.

Copyright notice
This article was written by [Song in four seasons]. Please include the original link when reposting. Thank you.
https://yzsam.com/2022/02/202202140332005535.html