Vulkan official example walkthrough: shadow mapping (rasterization)
2022-06-11 04:15:00 【A little caviar】
Preface
- Renders shadows for a directional light. The scene depth is first stored from the light's point of view (POV); in a second pass these values are compared against each fragment's light-space depth to check whether it is shadowed. A depth bias is used to avoid shadow artifacts, and a PCF filter is applied to smooth the shadow edges.
- The core principle is the same as in OpenGL; only the Vulkan API is used here.
- The idea behind shadow mapping is very simple: we render the scene from the light's position, so everything the light can see is lit, and everything it cannot see must be in shadow. Suppose there is a floor with a large box between it and the light source. Looking along the light direction, the light can see the box but not the part of the floor behind it, so that part is in shadow.
- This example additionally renders a debug view of the depth map seen from the light's point of view.
One. CPP file
1. Classes and variables
class VulkanExample : public VulkanExampleBase
{
public:
bool displayShadowMap = false;
bool filterPCF = true;
// Keep depth range as small as possible
// for better shadow map precision
float zNear = 1.0f;
float zFar = 96.0f;
// Depth bias (and slope) are used to avoid shadowing artifacts
// Constant depth bias factor (always applied)
float depthBiasConstant = 1.25f;
// Slope depth bias factor, applied depending on polygon's slope
float depthBiasSlope = 1.75f;
glm::vec3 lightPos = glm::vec3();// Light source location
float lightFOV = 45.0f;
std::vector<vkglTF::Model> scenes;
std::vector<std::string> sceneNames;
int32_t sceneIndex = 0;
struct {
vks::Buffer scene;
vks::Buffer offscreen;
} uniformBuffers;
// Set up UBO
struct {
glm::mat4 projection;
glm::mat4 view;
glm::mat4 model;
glm::mat4 depthBiasMVP;
glm::vec4 lightPos;
// Used for depth map visualization
float zNear;
float zFar;
} uboVSscene;
// UBO for the offscreen pass rendered from the light's point of view
struct {
glm::mat4 depthMVP;
} uboOffscreenVS;
// Multiple render pipelines are used for rendering
struct {
VkPipeline offscreen;
VkPipeline sceneShadow;
VkPipeline sceneShadowPCF;
VkPipeline debug;
} pipelines;
VkPipelineLayout pipelineLayout;
struct {
VkDescriptorSet offscreen;
VkDescriptorSet scene;
VkDescriptorSet debug;
} descriptorSets;
VkDescriptorSetLayout descriptorSetLayout;
// Framebuffer for offscreen rendering
struct FrameBufferAttachment {
VkImage image;
VkDeviceMemory mem;
VkImageView view;
};
struct OffscreenPass {
int32_t width, height;
VkFramebuffer frameBuffer;
FrameBufferAttachment depth;
VkRenderPass renderPass;
VkSampler depthSampler;
VkDescriptorImageInfo descriptor;
} offscreenPass;
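These members are tied together at startup by the example's prepare() override, which the excerpt does not include. Here is a rough sketch of the expected call order, following the usual VulkanExampleBase pattern; helper names such as loadAssets() and setupDescriptorPool() are assumptions, the other functions are shown below.

// Hypothetical sketch of the sample's prepare() hook (order and some helper names assumed):
// resources must exist before the command buffers that use them are recorded.
void prepare()
{
    VulkanExampleBase::prepare();   // swapchain, default render pass, etc.
    loadAssets();                   // load the glTF scenes (assumed helper)
    prepareOffscreenFramebuffer();  // shadow map image, view, sampler and its render pass
    prepareUniformBuffers();        // scene + offscreen UBOs
    setupDescriptorSetLayout();
    preparePipelines();
    setupDescriptorPool();          // assumed helper, sketched after section 5)
    setupDescriptorSets();
    buildCommandBuffers();
    prepared = true;
}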
2. Important functions
1) Set up a separate render pass for the offscreen framebuffer. This is necessary because the offscreen framebuffer attachment uses a format different from the one used by the example's main render pass.
void prepareOffscreenRenderpass()
{
VkAttachmentDescription attachmentDescription{};
attachmentDescription.format = DEPTH_FORMAT;
attachmentDescription.samples = VK_SAMPLE_COUNT_1_BIT;
attachmentDescription.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR; // Clear depth at beginning of the render pass
attachmentDescription.storeOp = VK_ATTACHMENT_STORE_OP_STORE; // We will read from depth, so it's important to store the depth attachment results
attachmentDescription.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
attachmentDescription.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
attachmentDescription.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED; // We don't care about initial layout of the attachment
attachmentDescription.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL;// Attachment will be transitioned to shader read at render pass end
VkAttachmentReference depthReference = {};
depthReference.attachment = 0;
depthReference.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL; // Attachment will be used as depth/stencil during render pass
VkSubpassDescription subpass = {};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 0; // No color attachments
subpass.pDepthStencilAttachment = &depthReference; // Reference to our depth attachment
// Use subpass dependencies for layout transitions
std::array<VkSubpassDependency, 2> dependencies;
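// First dependency: wait for any previous fragment shader reads of the depth image before this pass writes depth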
dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
dependencies[0].dstSubpass = 0;
dependencies[0].srcStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].dstStageMask = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dependencies[0].srcAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
dependencies[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
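// Second dependency: make the depth writes visible to later fragment shader reads (sampling the shadow map)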
dependencies[1].srcSubpass = 0;
dependencies[1].dstSubpass = VK_SUBPASS_EXTERNAL;
dependencies[1].srcStageMask = VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT;
dependencies[1].dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[1].srcAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
dependencies[1].dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
VkRenderPassCreateInfo renderPassCreateInfo = vks::initializers::renderPassCreateInfo();
renderPassCreateInfo.attachmentCount = 1;
renderPassCreateInfo.pAttachments = &attachmentDescription;
renderPassCreateInfo.subpassCount = 1;
renderPassCreateInfo.pSubpasses = &subpass;
renderPassCreateInfo.dependencyCount = static_cast<uint32_t>(dependencies.size());
renderPassCreateInfo.pDependencies = dependencies.data();
VK_CHECK_RESULT(vkCreateRenderPass(device, &renderPassCreateInfo, nullptr, &offscreenPass.renderPass));
}
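DEPTH_FORMAT (used above) and SHADOWMAP_DIM / DEFAULT_SHADOWMAP_FILTER (used in the next function) are constants defined at the top of the sample's .cpp file and are not part of this excerpt. Roughly, they look like the following; treat the exact values as assumptions (a 16-bit depth format is usually enough for a shadow map, and the resolution trades quality against memory):

// Assumed definitions of the constants used in this example; the exact values may differ in your copy.
#define SHADOWMAP_DIM 2048                         // shadow map resolution: 2048 x 2048 texels
#define DEPTH_FORMAT VK_FORMAT_D16_UNORM           // 16-bit depth is sufficient for shadow mapping
#define DEFAULT_SHADOWMAP_FILTER VK_FILTER_LINEAR  // linear filtering of the depth map, if the format supports it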
2) Set up the offscreen framebuffer used to render the scene from the viewpoint of the light source. The depth attachment of this framebuffer is then sampled in the fragment shader of the shadowed scene pass.
void prepareOffscreenFramebuffer()
{
offscreenPass.width = SHADOWMAP_DIM;
offscreenPass.height = SHADOWMAP_DIM;
// For shadow mapping we only need a depth attachment
VkImageCreateInfo image = vks::initializers::imageCreateInfo();
image.imageType = VK_IMAGE_TYPE_2D;
image.extent.width = offscreenPass.width;
image.extent.height = offscreenPass.height;
image.extent.depth = 1;
image.mipLevels = 1;
image.arrayLayers = 1;
image.samples = VK_SAMPLE_COUNT_1_BIT;
image.tiling = VK_IMAGE_TILING_OPTIMAL;
image.format = DEPTH_FORMAT; // Depth stencil attachment
image.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT; // We will sample directly from the depth attachment for the shadow mapping
VK_CHECK_RESULT(vkCreateImage(device, &image, nullptr, &offscreenPass.depth.image));
VkMemoryAllocateInfo memAlloc = vks::initializers::memoryAllocateInfo();
VkMemoryRequirements memReqs;
vkGetImageMemoryRequirements(device, offscreenPass.depth.image, &memReqs);
memAlloc.allocationSize = memReqs.size;
memAlloc.memoryTypeIndex = vulkanDevice->getMemoryType(memReqs.memoryTypeBits, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
VK_CHECK_RESULT(vkAllocateMemory(device, &memAlloc, nullptr, &offscreenPass.depth.mem));
VK_CHECK_RESULT(vkBindImageMemory(device, offscreenPass.depth.image, offscreenPass.depth.mem, 0));
VkImageViewCreateInfo depthStencilView = vks::initializers::imageViewCreateInfo();
depthStencilView.viewType = VK_IMAGE_VIEW_TYPE_2D;
depthStencilView.format = DEPTH_FORMAT;
depthStencilView.subresourceRange = {};
depthStencilView.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
depthStencilView.subresourceRange.baseMipLevel = 0;
depthStencilView.subresourceRange.levelCount = 1;
depthStencilView.subresourceRange.baseArrayLayer = 0;
depthStencilView.subresourceRange.layerCount = 1;
depthStencilView.image = offscreenPass.depth.image;
VK_CHECK_RESULT(vkCreateImageView(device, &depthStencilView, nullptr, &offscreenPass.depth.view));
// Create sampler to sample from the depth attachment
// Used to sample in the fragment shader for shadowed rendering
VkFilter shadowmap_filter = vks::tools::formatIsFilterable(physicalDevice, DEPTH_FORMAT, VK_IMAGE_TILING_OPTIMAL) ?
DEFAULT_SHADOWMAP_FILTER :
VK_FILTER_NEAREST;
VkSamplerCreateInfo sampler = vks::initializers::samplerCreateInfo();
sampler.magFilter = shadowmap_filter;
sampler.minFilter = shadowmap_filter;
sampler.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
sampler.addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
sampler.addressModeV = sampler.addressModeU;
sampler.addressModeW = sampler.addressModeU;
sampler.mipLodBias = 0.0f;
sampler.maxAnisotropy = 1.0f;
sampler.minLod = 0.0f;
sampler.maxLod = 1.0f;
sampler.borderColor = VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE;
VK_CHECK_RESULT(vkCreateSampler(device, &sampler, nullptr, &offscreenPass.depthSampler));
prepareOffscreenRenderpass();
// Create frame buffer
VkFramebufferCreateInfo fbufCreateInfo = vks::initializers::framebufferCreateInfo();
fbufCreateInfo.renderPass = offscreenPass.renderPass;
fbufCreateInfo.attachmentCount = 1;
fbufCreateInfo.pAttachments = &offscreenPass.depth.view;
fbufCreateInfo.width = offscreenPass.width;
fbufCreateInfo.height = offscreenPass.height;
fbufCreateInfo.layers = 1;
VK_CHECK_RESULT(vkCreateFramebuffer(device, &fbufCreateInfo, nullptr, &offscreenPass.frameBuffer));
}
3) Command buffer creation in this example: note that two render passes are used. The first generates the shadow map, and the second renders the scene with the shadow map applied (and optionally visualizes the depth map from the light's point of view).
void buildCommandBuffers()
{
VkCommandBufferBeginInfo cmdBufInfo = vks::initializers::commandBufferBeginInfo();
VkClearValue clearValues[2];
VkViewport viewport;
VkRect2D scissor;
for (int32_t i = 0; i < drawCmdBuffers.size(); ++i)
{
VK_CHECK_RESULT(vkBeginCommandBuffer(drawCmdBuffers[i], &cmdBufInfo));
/* First render pass: generate the shadow map by rendering the scene from the light's point of view */
{
clearValues[0].depthStencil = { 1.0f, 0 };
VkRenderPassBeginInfo renderPassBeginInfo = vks::initializers::renderPassBeginInfo();
renderPassBeginInfo.renderPass = offscreenPass.renderPass;
renderPassBeginInfo.framebuffer = offscreenPass.frameBuffer;
renderPassBeginInfo.renderArea.extent.width = offscreenPass.width;
renderPassBeginInfo.renderArea.extent.height = offscreenPass.height;
renderPassBeginInfo.clearValueCount = 1;
renderPassBeginInfo.pClearValues = clearValues;
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
viewport = vks::initializers::viewport((float)offscreenPass.width, (float)offscreenPass.height, 0.0f, 1.0f);
vkCmdSetViewport(drawCmdBuffers[i], 0, 1, &viewport);
scissor = vks::initializers::rect2D(offscreenPass.width, offscreenPass.height, 0, 0);
vkCmdSetScissor(drawCmdBuffers[i], 0, 1, &scissor);
// Set depth bias (aka "Polygon offset")
// Required to avoid shadow mapping artifacts
vkCmdSetDepthBias(
drawCmdBuffers[i],
depthBiasConstant,
0.0f,
depthBiasSlope);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.offscreen);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSets.offscreen, 0, nullptr);
scenes[sceneIndex].draw(drawCmdBuffers[i]);
vkCmdEndRenderPass(drawCmdBuffers[i]);
}
// Note: Explicit synchronization is not required between the two render passes, as this is done implicitly via the subpass dependencies
/* Second render pass: scene rendering with the shadow map applied */
{
// Note that, unlike the offscreen pass, this render pass has both a color and a depth attachment
clearValues[0].color = defaultClearColor;
clearValues[1].depthStencil = { 1.0f, 0 };
VkRenderPassBeginInfo renderPassBeginInfo = vks::initializers::renderPassBeginInfo();
renderPassBeginInfo.renderPass = renderPass;
renderPassBeginInfo.framebuffer = frameBuffers[i];
renderPassBeginInfo.renderArea.extent.width = width;
renderPassBeginInfo.renderArea.extent.height = height;
renderPassBeginInfo.clearValueCount = 2;
renderPassBeginInfo.pClearValues = clearValues;
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
viewport = vks::initializers::viewport((float)width, (float)height, 0.0f, 1.0f);
vkCmdSetViewport(drawCmdBuffers[i], 0, 1, &viewport);
scissor = vks::initializers::rect2D(width, height, 0, 0);
vkCmdSetScissor(drawCmdBuffers[i], 0, 1, &scissor);
// Visualize shadow map
if (displayShadowMap) {
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSets.debug, 0, nullptr);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.debug);
vkCmdDraw(drawCmdBuffers[i], 3, 1, 0, 0);
}
// 3D scene
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descriptorSets.scene, 0, nullptr);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, (filterPCF) ? pipelines.sceneShadowPCF : pipelines.sceneShadow);
scenes[sceneIndex].draw(drawCmdBuffers[i]);
drawUI(drawCmdBuffers[i]);
vkCmdEndRenderPass(drawCmdBuffers[i]);
}
VK_CHECK_RESULT(vkEndCommandBuffer(drawCmdBuffers[i]));
}
}
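The command buffers recorded above are submitted once per frame by the framework's render loop, which is not part of this excerpt. A hedged sketch of what the per-frame hooks typically look like in this sample series (member names such as submitInfo, queue and currentBuffer come from VulkanExampleBase; details assumed):

// Hedged sketch of the per-frame submission and uniform updates (not shown in the excerpt).
void draw()
{
    VulkanExampleBase::prepareFrame();                            // acquire the next swapchain image
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &drawCmdBuffers[currentBuffer];  // command buffer recorded in buildCommandBuffers()
    VK_CHECK_RESULT(vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE));
    VulkanExampleBase::submitFrame();                             // present
}

virtual void render()
{
    if (!prepared)
        return;
    draw();
    if (!paused || camera.updated)
    {
        updateLight();                   // animate the light position
        updateUniformBufferOffscreen();  // depth MVP from the light's point of view
        updateUniformBuffers();          // scene matrices + depthBiasMVP
    }
}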
4) Descriptor layout
void setupDescriptorSetLayout()
{
// Shared pipeline layout for all pipelines used in this sample
std::vector<VkDescriptorSetLayoutBinding> setLayoutBindings = {
// Binding 0 : Vertex shader uniform buffer
vks::initializers::descriptorSetLayoutBinding(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT, 0),
// Binding 1 : Fragment shader image sampler (shadow map)
vks::initializers::descriptorSetLayoutBinding(VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, VK_SHADER_STAGE_FRAGMENT_BIT, 1)
};
VkDescriptorSetLayoutCreateInfo descriptorLayout = vks::initializers::descriptorSetLayoutCreateInfo(setLayoutBindings);
VK_CHECK_RESULT(vkCreateDescriptorSetLayout(device, &descriptorLayout, nullptr, &descriptorSetLayout));
VkPipelineLayoutCreateInfo pPipelineLayoutCreateInfo = vks::initializers::pipelineLayoutCreateInfo(&descriptorSetLayout, 1);
VK_CHECK_RESULT(vkCreatePipelineLayout(device, &pPipelineLayoutCreateInfo, nullptr, &pipelineLayout));
}
5) Descriptor sets
void setupDescriptorSets()
{
std::vector<VkWriteDescriptorSet> writeDescriptorSets;
// Image descriptor for the shadow map attachment
VkDescriptorImageInfo shadowMapDescriptor =
vks::initializers::descriptorImageInfo(
offscreenPass.depthSampler,
offscreenPass.depth.view,
VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL);
// Debug display
VkDescriptorSetAllocateInfo allocInfo = vks::initializers::descriptorSetAllocateInfo(descriptorPool, &descriptorSetLayout, 1);
VK_CHECK_RESULT(vkAllocateDescriptorSets(device, &allocInfo, &descriptorSets.debug));
writeDescriptorSets = {
// Binding 0 : Parameters uniform buffer
vks::initializers::writeDescriptorSet(descriptorSets.debug, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 0, &uniformBuffers.scene.descriptor),
// Binding 1 : Fragment shader texture sampler
vks::initializers::writeDescriptorSet(descriptorSets.debug, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, &shadowMapDescriptor)
};
vkUpdateDescriptorSets(device, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
// Offscreen shadow map generation
VK_CHECK_RESULT(vkAllocateDescriptorSets(device, &allocInfo, &descriptorSets.offscreen));
writeDescriptorSets = {
// Binding 0 : Vertex shader uniform buffer
vks::initializers::writeDescriptorSet(descriptorSets.offscreen, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 0, &uniformBuffers.offscreen.descriptor),
};
vkUpdateDescriptorSets(device, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
// Scene rendering with shadow map applied
VK_CHECK_RESULT(vkAllocateDescriptorSets(device, &allocInfo, &descriptorSets.scene));
writeDescriptorSets = {
// Binding 0 : Vertex shader uniform buffer
vks::initializers::writeDescriptorSet(descriptorSets.scene, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 0, &uniformBuffers.scene.descriptor),
// Binding 1 : Fragment shader shadow sampler
vks::initializers::writeDescriptorSet(descriptorSets.scene, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 1, &shadowMapDescriptor)
};
vkUpdateDescriptorSets(device, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
}
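Note that all three sets are allocated from descriptorPool, which is created in a setupDescriptorPool() helper not shown in the excerpt. A minimal sketch of such a pool, sized for the three sets above (the exact counts are an assumption):

// Hedged sketch of the pool the three descriptor sets above are allocated from.
void setupDescriptorPool()
{
    std::vector<VkDescriptorPoolSize> poolSizes = {
        vks::initializers::descriptorPoolSize(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 3),        // debug, offscreen, scene
        vks::initializers::descriptorPoolSize(VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 2) // debug, scene
    };
    VkDescriptorPoolCreateInfo descriptorPoolInfo = vks::initializers::descriptorPoolCreateInfo(poolSizes, 3); // 3 sets
    VK_CHECK_RESULT(vkCreateDescriptorPool(device, &descriptorPoolInfo, nullptr, &descriptorPool));
}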
6) Pipelines
void preparePipelines()
{
VkPipelineInputAssemblyStateCreateInfo inputAssemblyStateCI = vks::initializers::pipelineInputAssemblyStateCreateInfo(VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST, 0, VK_FALSE);
VkPipelineRasterizationStateCreateInfo rasterizationStateCI = vks::initializers::pipelineRasterizationStateCreateInfo(VK_POLYGON_MODE_FILL, VK_CULL_MODE_BACK_BIT, VK_FRONT_FACE_COUNTER_CLOCKWISE, 0);
VkPipelineColorBlendAttachmentState blendAttachmentState = vks::initializers::pipelineColorBlendAttachmentState(0xf, VK_FALSE);
VkPipelineColorBlendStateCreateInfo colorBlendStateCI = vks::initializers::pipelineColorBlendStateCreateInfo(1, &blendAttachmentState);
VkPipelineDepthStencilStateCreateInfo depthStencilStateCI = vks::initializers::pipelineDepthStencilStateCreateInfo(VK_TRUE, VK_TRUE, VK_COMPARE_OP_LESS_OR_EQUAL);
VkPipelineViewportStateCreateInfo viewportStateCI = vks::initializers::pipelineViewportStateCreateInfo(1, 1, 0);
VkPipelineMultisampleStateCreateInfo multisampleStateCI = vks::initializers::pipelineMultisampleStateCreateInfo(VK_SAMPLE_COUNT_1_BIT, 0);
std::vector<VkDynamicState> dynamicStateEnables = { VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR };
VkPipelineDynamicStateCreateInfo dynamicStateCI = vks::initializers::pipelineDynamicStateCreateInfo(dynamicStateEnables.data(), dynamicStateEnables.size(), 0);
std::array<VkPipelineShaderStageCreateInfo, 2> shaderStages;
VkGraphicsPipelineCreateInfo pipelineCI = vks::initializers::pipelineCreateInfo(pipelineLayout, renderPass, 0);
pipelineCI.pInputAssemblyState = &inputAssemblyStateCI;
pipelineCI.pRasterizationState = &rasterizationStateCI;
pipelineCI.pColorBlendState = &colorBlendStateCI;
pipelineCI.pMultisampleState = &multisampleStateCI;
pipelineCI.pViewportState = &viewportStateCI;
pipelineCI.pDepthStencilState = &depthStencilStateCI;
pipelineCI.pDynamicState = &dynamicStateCI;
pipelineCI.stageCount = shaderStages.size();
pipelineCI.pStages = shaderStages.data();
// Shadow mapping debug quad display
rasterizationStateCI.cullMode = VK_CULL_MODE_NONE;
shaderStages[0] = loadShader(getShadersPath() + "shadowmapping/quad.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
shaderStages[1] = loadShader(getShadersPath() + "shadowmapping/quad.frag.spv", VK_SHADER_STAGE_FRAGMENT_BIT);
// Empty vertex input state
VkPipelineVertexInputStateCreateInfo emptyInputState = vks::initializers::pipelineVertexInputStateCreateInfo();
pipelineCI.pVertexInputState = &emptyInputState;
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCI, nullptr, &pipelines.debug));
// Scene rendering with shadows applied
pipelineCI.pVertexInputState = vkglTF::Vertex::getPipelineVertexInputState({
vkglTF::VertexComponent::Position, vkglTF::VertexComponent::UV, vkglTF::VertexComponent::Color, vkglTF::VertexComponent::Normal});
rasterizationStateCI.cullMode = VK_CULL_MODE_BACK_BIT;
shaderStages[0] = loadShader(getShadersPath() + "shadowmapping/scene.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
shaderStages[1] = loadShader(getShadersPath() + "shadowmapping/scene.frag.spv", VK_SHADER_STAGE_FRAGMENT_BIT);
// Use a specialization constant to select between shadow sampling with and without PCF filtering
uint32_t enablePCF = 0;
VkSpecializationMapEntry specializationMapEntry = vks::initializers::specializationMapEntry(0, 0, sizeof(uint32_t));
VkSpecializationInfo specializationInfo = vks::initializers::specializationInfo(1, &specializationMapEntry, sizeof(uint32_t), &enablePCF);
shaderStages[1].pSpecializationInfo = &specializationInfo;
// No filtering
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCI, nullptr, &pipelines.sceneShadow));
// PCF filtering
enablePCF = 1;
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCI, nullptr, &pipelines.sceneShadowPCF));
// Offscreen pipeline (vertex shader only)
shaderStages[0] = loadShader(getShadersPath() + "shadowmapping/offscreen.vert.spv", VK_SHADER_STAGE_VERTEX_BIT);
pipelineCI.stageCount = 1;
// No blend attachment states (no color attachments used)
colorBlendStateCI.attachmentCount = 0;
// Depth compare op for the depth-only pass
depthStencilStateCI.depthCompareOp = VK_COMPARE_OP_LESS_OR_EQUAL;
// Enable depth bias
rasterizationStateCI.depthBiasEnable = VK_TRUE;
// Add depth bias to dynamic state, so we can change it at runtime
dynamicStateEnables.push_back(VK_DYNAMIC_STATE_DEPTH_BIAS);
dynamicStateCI =
vks::initializers::pipelineDynamicStateCreateInfo(
dynamicStateEnables.data(),
dynamicStateEnables.size(),
0);
pipelineCI.renderPass = offscreenPass.renderPass;
VK_CHECK_RESULT(vkCreateGraphicsPipelines(device, pipelineCache, 1, &pipelineCI, nullptr, &pipelines.offscreen));
}
7) Uniform buffers
// Prepare and initialize uniform buffer containing shader uniforms
void prepareUniformBuffers()
{
// Offscreen vertex shader uniform buffer block
VK_CHECK_RESULT(vulkanDevice->createBuffer(
VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
&uniformBuffers.offscreen,
sizeof(uboOffscreenVS)));
// Scene vertex shader uniform buffer block
VK_CHECK_RESULT(vulkanDevice->createBuffer(
VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
&uniformBuffers.scene,
sizeof(uboVSscene)));
// Map persistent
VK_CHECK_RESULT(uniformBuffers.offscreen.map());
VK_CHECK_RESULT(uniformBuffers.scene.map());
updateLight();
updateUniformBufferOffscreen();
updateUniformBuffers();
}
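updateLight(), updateUniformBufferOffscreen() and updateUniformBuffers() fill these buffers but are not included in the excerpt. The key point is the light-space matrix: the offscreen pass renders depth with depthMVP, and the scene pass receives the same matrix as depthBiasMVP (the lightSpace matrix in the shader) to project each fragment into the shadow map. A hedged sketch based on the members declared earlier; the light animation and camera handling are assumptions:

// Hedged sketch of the uniform update functions (not part of the excerpt above).
void updateLight()
{
    // Animate the light source position over time (exact path assumed)
    lightPos.x = cos(glm::radians(timer * 360.0f)) * 40.0f;
    lightPos.y = -50.0f + sin(glm::radians(timer * 360.0f)) * 20.0f;
    lightPos.z = 25.0f + sin(glm::radians(timer * 360.0f)) * 5.0f;
}

void updateUniformBufferOffscreen()
{
    // Matrix from the light's point of view: perspective * lookAt * model
    glm::mat4 depthProjectionMatrix = glm::perspective(glm::radians(lightFOV), 1.0f, zNear, zFar);
    glm::mat4 depthViewMatrix = glm::lookAt(lightPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 depthModelMatrix = glm::mat4(1.0f);
    uboOffscreenVS.depthMVP = depthProjectionMatrix * depthViewMatrix * depthModelMatrix;
    memcpy(uniformBuffers.offscreen.mapped, &uboOffscreenVS, sizeof(uboOffscreenVS));
}

void updateUniformBuffers()
{
    uboVSscene.projection = camera.matrices.perspective;
    uboVSscene.view = camera.matrices.view;
    uboVSscene.model = glm::mat4(1.0f);
    uboVSscene.lightPos = glm::vec4(lightPos, 1.0f);
    // The same light-space matrix is handed to the scene shader as the "lightSpace" matrix
    uboVSscene.depthBiasMVP = uboOffscreenVS.depthMVP;
    uboVSscene.zNear = zNear;
    uboVSscene.zFar = zFar;
    memcpy(uniformBuffers.scene.mapped, &uboVSscene, sizeof(uboVSscene));
}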
Two. Shaders
1. offscreen: shadow map rendering
vert
#version 450
layout (location = 0) in vec3 inPos;
layout (binding = 0) uniform UBO
{
mat4 depthMVP;
} ubo;
out gl_PerVertex
{
vec4 gl_Position;
};
void main()
{
gl_Position = ubo.depthMVP * vec4(inPos, 1.0);
}
frag

Note: the offscreen pipeline is created with stageCount = 1 (vertex stage only, see preparePipelines above), so this fragment shader is not actually bound; only the depth value produced by the fixed-function stages is written to the shadow map.
#version 450
layout(location = 0) out vec4 color;
void main()
{
color = vec4(1.0, 0.0, 0.0, 1.0);
}
2. scene: scene rendering with the shadow map applied
vert
#version 450
layout (location = 0) in vec3 inPos;
layout (location = 1) in vec2 inUV;
layout (location = 2) in vec3 inColor;
layout (location = 3) in vec3 inNormal;
layout (binding = 0) uniform UBO
{
mat4 projection;
mat4 view;
mat4 model;
mat4 lightSpace;
vec4 lightPos;
float zNear;
float zFar;
} ubo;
layout (location = 0) out vec3 outNormal;
layout (location = 1) out vec3 outColor;
layout (location = 2) out vec3 outViewVec;
layout (location = 3) out vec3 outLightVec;
layout (location = 4) out vec4 outShadowCoord;
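// biasMat remaps the light-space clip x/y from [-1, 1] to [0, 1] so the shadow
// coordinate can be used directly as a texture UV (z is already in [0, 1] in Vulkan)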
const mat4 biasMat = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.5, 0.5, 0.0, 1.0 );
void main()
{
outColor = inColor;
outNormal = inNormal;
gl_Position = ubo.projection * ubo.view * ubo.model * vec4(inPos.xyz, 1.0);
vec4 pos = ubo.model * vec4(inPos, 1.0);
outNormal = mat3(ubo.model) * inNormal;
outLightVec = normalize(ubo.lightPos.xyz - inPos);
outViewVec = -pos.xyz;
outShadowCoord = ( biasMat * ubo.lightSpace * ubo.model ) * vec4(inPos, 1.0);
}
frag
#version 450
layout (binding = 1) uniform sampler2D shadowMap;
layout (location = 0) in vec3 inNormal;
layout (location = 1) in vec3 inColor;
layout (location = 2) in vec3 inViewVec;
layout (location = 3) in vec3 inLightVec;
layout (location = 4) in vec4 inShadowCoord;
layout (constant_id = 0) const int enablePCF = 0;
layout (location = 0) out vec4 outFragColor;
#define ambient 0.1
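// Returns 1.0 if the fragment is lit, or the ambient factor if the depth stored in the
// shadow map is closer to the light than the fragment's own light-space depth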
float textureProj(vec4 shadowCoord, vec2 off)
{
float shadow = 1.0;
if ( shadowCoord.z > -1.0 && shadowCoord.z < 1.0 )
{
float dist = texture( shadowMap, shadowCoord.st + off ).r;
if ( shadowCoord.w > 0.0 && dist < shadowCoord.z )
{
shadow = ambient;
}
}
return shadow;
}
float filterPCF(vec4 sc)
{
ivec2 texDim = textureSize(shadowMap, 0);
float scale = 1.5;
float dx = scale * 1.0 / float(texDim.x);
float dy = scale * 1.0 / float(texDim.y);
float shadowFactor = 0.0;
int count = 0;
int range = 1;
for (int x = -range; x <= range; x++)
{
for (int y = -range; y <= range; y++)
{
shadowFactor += textureProj(sc, vec2(dx*x, dy*y));
count++;
}
}
return shadowFactor / count;
}
void main()
{
float shadow = (enablePCF == 1) ? filterPCF(inShadowCoord / inShadowCoord.w) : textureProj(inShadowCoord / inShadowCoord.w, vec2(0.0));
vec3 N = normalize(inNormal);
vec3 L = normalize(inLightVec);
vec3 V = normalize(inViewVec);
vec3 R = normalize(-reflect(L, N));
vec3 diffuse = max(dot(N, L), ambient) * inColor;
outFragColor = vec4(diffuse * shadow, 1.0);
}
3. quad: shadow map depth visualization
vert
#version 450
layout (location = 0) out vec2 outUV;
void main()
{
outUV = vec2((gl_VertexIndex << 1) & 2, gl_VertexIndex & 2);
gl_Position = vec4(outUV * 2.0f - 1.0f, 0.0f, 1.0f);
}
frag
#version 450
layout (binding = 1) uniform sampler2D samplerColor;
layout (location = 0) in vec2 inUV;
layout (location = 0) out vec4 outFragColor;
layout (binding = 0) uniform UBO
{
mat4 projection;
mat4 view;
mat4 model;
mat4 lightSpace;
vec4 lightPos;
float zNear;
float zFar;
} ubo;
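// Convert the non-linear depth buffer value back to a linear value using zNear/zFar, for visualization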
float LinearizeDepth(float depth)
{
float n = ubo.zNear;
float f = ubo.zFar;
float z = depth;
return (2.0 * n) / (f + n - z * (f - n));
}
void main()
{
float depth = texture(samplerColor, inUV).r;
outFragColor = vec4(vec3(1.0-LinearizeDepth(depth)), 1.0);
}