
IBL of directx11 advanced tutorial PBR (3)

2022-06-12 06:01:00 Senior brother Dai Dai

IBL

IBL is short for Image-Based Lighting: the image surrounding the entire scene is treated as one large light source that affects the shading of every object. In rendering this is usually the skybox; a skybox used for IBL is typically an HDR (RGB float16) cube map, because real-world sky radiance values are not confined to the [0, 1] range.

 

Like any other light source, IBL follows the PBR equation and BRDF covered in the previous article; in other words, IBL shading contributes both a specular term and a diffuse term.

Let's first analyze the diffuse term of IBL.

 

IBL-Diffuse

Above is the lighting equation, which contains both the specular term and the diffuse term; let's separate out the diffuse term.

 

Each texel of the HDR cube map can be regarded as a small light source. For a given direction, we take that direction as the Z axis and integrate over the positive hemisphere (the negative hemisphere faces away and contributes no light), accumulating the lighting integrand sample by sample to solve the convolution. The result is the diffuse lighting contribution for that direction, commonly called the diffuse irradiance.

Here, convolution means the common technique of approximating the integral by accumulating discrete samples.
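The equation images are not reproduced here; in the LearnOpenGL formulation (reference [1]), the diffuse integral and the discrete double sum that the shader below evaluates are:

```latex
L_o(p,\phi_o,\theta_o)
  = k_d\,\frac{c}{\pi}
    \int_{\phi=0}^{2\pi}\int_{\theta=0}^{\frac{\pi}{2}}
      L_i(p,\phi_i,\theta_i)\,\cos\theta\,\sin\theta\;d\theta\,d\phi
  \;\approx\;
  k_d\,c\,\frac{\pi}{n_1 n_2}
    \sum_{j=0}^{n_1-1}\sum_{k=0}^{n_2-1}
      L_i(p,\phi_j,\theta_k)\,\cos\theta_k\,\sin\theta_k
```

The pre-pass precomputes the double sum with the π/(n₁n₂) factor applied, leaving k_d and the albedo c to the final PBR shader.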

 

 

Process

The convolution pre-pass actually renders a cube in each of six directions; for each direction it convolves the HDR cube map with the lighting integral to compute the accumulated diffuse lighting, and finally renders the results into a cube map. Put simply, this step turns one HDR cube map into a corresponding diffuse irradiance cube map.

	GDirectxCore->TurnOnRenderSkyBoxDSS();
	GDirectxCore->TurnOnCullFront();
	renderCubeMap->ClearRenderTarget(1.0f, 1.0f, 1.0f, 1.0f);
	XMMATRIX projMatrix = cubeCamera->GetProjMatrix();

	for (int index = 0; index < 6; ++index)
	{
		renderCubeMap->ClearDepthBuffer();
		renderCubeMap->SetRenderTarget(index);
		GShaderManager->cubeMapToIrradianceShader->SetMatrix("View", cubeCamera->GetViewMatrix(index));
		GShaderManager->cubeMapToIrradianceShader->SetMatrix("Proj", projMatrix);
		GShaderManager->cubeMapToIrradianceShader->SetTexture("HdrCubeMap", hdrCubeMap->GetTexture());
		GShaderManager->cubeMapToIrradianceShader->SetTextureSampler("ClampLinear", GTextureSamplerBilinearClamp);
		GShaderManager->cubeMapToIrradianceShader->Apply();
		cubeGameObject->RenderMesh();
	}

	// The top mip level of the chain has been rendered; call GenerateMips to generate the remaining mip levels
	GDirectxCore->GenerateMips(renderCubeMap->GetSRV());

	GDirectxCore->RecoverDefaultDSS();
	GDirectxCore->RecoverDefualtRS();

CubeMapToIrradiance.fx

TextureCube HdrCubeMap:register(t0);
SamplerState ClampLinear:register(s0);

static const float PI = 3.1415926;

cbuffer CBMatrix:register(b0)
{
	matrix View;
	matrix Proj;
};

struct VertexIn
{
	float3 Pos:POSITION;
	float3 Color:COLOR;
	float3 Normal:NORMAL;
	float3 Tangent:TANGENT;
	float2 Tex:TEXCOORD;
};


struct VertexOut
{
	float4 Pos:SV_POSITION;
	float3 SkyPos:TEXCOORD0;
};


VertexOut VS(VertexIn ina)
{
	VertexOut outa;
	outa.SkyPos = ina.Pos;
	outa.Pos = float4(ina.Pos, 1.0f);
	outa.Pos = mul(outa.Pos, View);
	outa.Pos = mul(outa.Pos, Proj);
	return outa;
}

float4 PS(VertexOut outa) : SV_Target
{
	float3 N = normalize(outa.SkyPos);
	float3 irradiance = float3(0.0, 0.0, 0.0);
	float3 up = float3(0.0, 1.0, 0.0);
	float3 right = cross(up, N);
	up = cross(N, right);

	float sampleDelta = 0.025;
	float nrSamples = 0.0;
	for (float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
	{
		for (float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
		{
			// spherical to cartesian (in tangent space)
			float3 tangentSample = float3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
			float3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * N;
			
			irradiance += HdrCubeMap.Sample(ClampLinear, sampleVec).xyz * cos(theta) * sin(theta);
			nrSamples += 1.0;
		}
	}

	irradiance = PI * irradiance * (1.0 / float(nrSamples));
	return float4(irradiance, 1.0);
}
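As a quick CPU sanity check on the convolution above (not part of the project code; the function name is mine): for a constant environment of radiance 1 the result must converge to 1.0, since the hemisphere integral of cos θ sin θ is π and the shader multiplies the sample average by π.

```cpp
#include <cmath>

// CPU port of the PS loop above, with the environment lookup replaced by a
// constant radiance. For radiance == 1 the result should converge to 1.0.
float ConvolveConstantEnvironment(float radiance, float sampleDelta = 0.025f)
{
    const float PI = 3.14159265f;
    float irradiance = 0.0f;
    float nrSamples = 0.0f;
    for (float phi = 0.0f; phi < 2.0f * PI; phi += sampleDelta)
    {
        for (float theta = 0.0f; theta < 0.5f * PI; theta += sampleDelta)
        {
            // HdrCubeMap.Sample(...) is replaced by the constant 'radiance'
            irradiance += radiance * std::cos(theta) * std::sin(theta);
            nrSamples += 1.0f;
        }
    }
    return PI * irradiance / nrSamples;
}
```

The check also confirms the convolution is linear in the input radiance, as an integral must be.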
In the final PBR shader, the diffuse term is then obtained by sampling the irradiance map with the world-space normal:

	float3 irradiance = IrradianceTex.Sample(clampLinearSample, worldNormal).xyz;
	float3 iblDiffuse = irradiance * albedo * kd / PI;

The background of the first picture below is the HDR cube map; the background of the second is the irradiance cube map.

 

 

 

IBL-Specular

IBL, like any other light source, also has a specular term; look again at the lighting equation above.

Separate out the specular term:

First, consider whether the specular term can be pre-convolved into a cube map, the way we precomputed the diffuse irradiance cube map above.

In fact, looking at the integral above, something is wrong: the specular term of the BRDF contains the view direction ViewDir, which may point in any direction. If we applied the same convolution precomputation, we would have to convolve over every incident light direction (as for the diffuse irradiance) and additionally over every ViewDir; this double convolution means precomputing countless irradiance cube maps just to cover the extra ViewDir dimension. In short, the full specular BRDF equation is very hard to convolve directly.

Unreal Engine graphics programmer Brian Karis proposed the split sum approximation to solve this. It splits the specular part of the BRDF into two parts that can each be convolved separately, which are then combined in the final PBR shader. The first part, similar to the diffuse irradiance above, is a convolution of the HDR cube map and can be called the pre-filtered HDR cube map. The second part bakes the remaining BRDF parameters into a lookup table (LUT) and can be called the convolved BRDF.

Look at the following decomposition of the equation:

 

The two integrals of the split sum approximation are as follows:
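Since the original images are not reproduced here, the two sums can be sketched following Karis's SIGGRAPH 2013 course notes (reference [3]):

```latex
\int_{\Omega} L_i(p,\omega_i)\,f_r(p,\omega_i,\omega_o)\,(n\cdot\omega_i)\,d\omega_i
\;\approx\;
\underbrace{\frac{1}{N}\sum_{k=1}^{N} L_i(\mathbf{l}_k)}_{\text{pre-filtered environment map}}
\cdot
\underbrace{\frac{1}{N}\sum_{k=1}^{N}
  \frac{f_r(\mathbf{l}_k,\mathbf{v})\,(n\cdot\mathbf{l}_k)}{p(\mathbf{l}_k,\mathbf{v})}}_{\text{environment BRDF}}
```

The first factor is precomputed into the mip chain of a cube map, the second into a 2D LUT.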

 

Pre-filter HDR CubeMap

This step is similar to computing the diffuse irradiance above, but note that IBL specular differs from IBL diffuse in one respect: the diffuse term is independent of roughness, accumulating light over the whole positive hemisphere around a single direction, whereas the specular term is closely tied to surface roughness. Take a look at the figure below:

Different surface roughness produces specular lobes of different sizes. The rougher the surface, the bigger the specular lobe; by the reversibility of light, the reflected specular light in a given direction then gathers contributions from more incident directions, so the convolved cube map looks blurry, much like the diffuse irradiance cube map. The smoother the surface, the fewer incident directions contribute to the reflected light in a given direction, so the convolved cube map stays sharp and looks close to the original cube map.

There is a problem when pre-convolving the HDR cube map: the view direction ViewDir of the specular term is unknown. Epic Games simplifies this by assuming the reflection direction of the light (the sampling direction) and ViewDir are the same.

The result is not as good as true off-angle reflection, but it solves the unknown-view-direction problem to an acceptable degree.

Here we use the cube map's mip chain to store the pre-filtered cube maps for multiple roughness values.

Monte Carlo integration handles the randomness of the incident direction, and the size of the specular lobe (the restricted range of incident directions) is handled by GGX importance sampling.

The Monte Carlo integration here uses the Hammersley sequence:

// ----------------------------------------------------------------------------
// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
// efficient Van der Corput sequence calculation.
float RadicalInverse_Vdc(uint bits)
{
	bits = (bits << 16u) | (bits >> 16u);
	bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
	bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
	bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
	bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
	return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

float2 Hammersley(uint i, uint N)
{
	return float2(float(i) / float(N), RadicalInverse_Vdc(i));
}
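The bit-reversal is easy to verify on the CPU: the 32-bit reversals of 1, 2, 3 are 0x80000000, 0x40000000, 0xC0000000, i.e. the fractions 0.5, 0.25, 0.75. A direct C++ port (HLSL `uint`/`float2` replaced with `uint32_t`/`std::pair`; otherwise unchanged):

```cpp
#include <cstdint>
#include <utility>

// C++ port of RadicalInverse_Vdc above: mirrors the 32-bit pattern of 'bits'
// and scales by 2^-32 to get a value in [0, 1).
float RadicalInverse_Vdc(uint32_t bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10f; // / 0x100000000
}

// i-th point of an N-point Hammersley set on the unit square.
std::pair<float, float> Hammersley(uint32_t i, uint32_t N)
{
    return { float(i) / float(N), RadicalInverse_Vdc(i) };
}
```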

Importance sampling follows the GGX distribution:

float3 ImportanceSampleGGX(float2 Xi, float3 N, float roughness)
{
	float a = roughness * roughness;
	float phi = 2.0 * PI * Xi.x;
	float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a*a - 1.0) * Xi.y));
	float sinTheta = sqrt(1.0 - cosTheta * cosTheta);

	float3 H;
	H.x = cos(phi) * sinTheta;
	H.y = sin(phi) * sinTheta;
	H.z = cosTheta;

	float3 up = abs(N.z) < 0.999 ? float3(0.0, 0.0, 1.0) : float3(1.0, 0.0, 0.0);
	float3 tangent = normalize(cross(up, N));
	float3 bitangent = cross(N, tangent);

	float3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
	return normalize(sampleVec);
}
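The θ mapping inside ImportanceSampleGGX can be checked in isolation. With the UE4 remapping a = roughness², the sampled cos θ = √((1 − ξ₂)/(1 + (a² − 1)ξ₂)): at roughness 0 every sample collapses onto the normal (a mirror), and at roughness 1 it spreads out as √(1 − ξ₂). A small C++ sketch of just that mapping (the helper name is mine, not part of the project):

```cpp
#include <cmath>

// The theta mapping used by ImportanceSampleGGX above, isolated for testing.
// a = roughness^2 (Disney/UE4 remapping); xiY is the second Hammersley
// coordinate in [0, 1).
float GGXSampleCosTheta(float roughness, float xiY)
{
    float a = roughness * roughness;
    return std::sqrt((1.0f - xiY) / (1.0f + (a * a - 1.0f) * xiY));
}
```

Rougher surfaces yield smaller cos θ on average, i.e. half-vectors spread further from the normal, which is exactly the widening specular lobe described above.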

Finally, perform the same render-to-cube-map as for the diffuse irradiance above, but this time divide roughness into 5 levels and render each level into the corresponding mip of the target cube map.

	const int MaxMipLevel = 5;
	const int MaxCubeMapFaces = 6;
	ID3D11RenderTargetView* rtvs[MaxCubeMapFaces];
	const float color[4] = { 0.0, 0.0, 0.0, 1.0f };
	XMMATRIX projMatrix = cubeCamera->GetProjMatrix();
	// First, fill in the 2D texture description and create the 2D render-target texture

	//Texture2D
	D3D11_TEXTURE2D_DESC cubeMapTextureDesc;
	ZeroMemory(&cubeMapTextureDesc, sizeof(cubeMapTextureDesc));
	cubeMapTextureDesc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
	cubeMapTextureDesc.Width = textureWidth;
	cubeMapTextureDesc.Height = textureHeight;
	cubeMapTextureDesc.MipLevels = 0;
	cubeMapTextureDesc.ArraySize = MaxCubeMapFaces;
	cubeMapTextureDesc.Usage = D3D11_USAGE_DEFAULT;
	cubeMapTextureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
	cubeMapTextureDesc.CPUAccessFlags = 0;
	cubeMapTextureDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE | D3D11_RESOURCE_MISC_GENERATE_MIPS;
	cubeMapTextureDesc.SampleDesc.Count = 1;
	cubeMapTextureDesc.SampleDesc.Quality = 0;
	HR(g_pDevice->CreateTexture2D(&cubeMapTextureDesc, NULL, &cubeMapTexture));

	//SRV
	D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
	shaderResourceViewDesc.Format = cubeMapTextureDesc.Format;
	shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
	shaderResourceViewDesc.TextureCube.MostDetailedMip = 0;
	shaderResourceViewDesc.TextureCube.MipLevels = MaxMipLevel; // -1 would expose the full chain down to 1x1
	HR(g_pDevice->CreateShaderResourceView(cubeMapTexture, &shaderResourceViewDesc, &srv));

	GDirectxCore->TurnOnRenderSkyBoxDSS();
	GDirectxCore->TurnOnCullFront();
	for (int mip = 0; mip < MaxMipLevel; ++mip)
	{
		D3D11_RENDER_TARGET_VIEW_DESC envMapRTVDesc;
		ZeroMemory(&envMapRTVDesc, sizeof(envMapRTVDesc));
		envMapRTVDesc.Format = cubeMapTextureDesc.Format;
		envMapRTVDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
		envMapRTVDesc.Texture2DArray.MipSlice = mip;
		envMapRTVDesc.Texture2DArray.ArraySize = 1;

		int mipWidth = textureWidth * pow(0.5, mip);
		int mipHeight = textureHeight * pow(0.5, mip);

		D3D11_VIEWPORT envMapviewport;
		envMapviewport.Width = mipWidth;
		envMapviewport.Height = mipHeight;
		envMapviewport.MinDepth = 0.0f;
		envMapviewport.MaxDepth = 1.0f;
		envMapviewport.TopLeftX = 0.0f;
		envMapviewport.TopLeftY = 0.0f;
		float roughness =(float)mip / (float)(MaxMipLevel - 1);

		for (int index = 0; index < MaxCubeMapFaces; ++index)
		{
			envMapRTVDesc.Texture2DArray.FirstArraySlice = index;
			g_pDevice->CreateRenderTargetView(cubeMapTexture, &envMapRTVDesc, &rtvs[index]);
			g_pDeviceContext->ClearRenderTargetView(rtvs[index], color);
			g_pDeviceContext->OMSetRenderTargets(1, &rtvs[index], 0);
			g_pDeviceContext->RSSetViewports(1, &envMapviewport);
			GShaderManager->prefilterCubeMapShader->SetMatrix("View", cubeCamera->GetViewMatrix(index));
			GShaderManager->prefilterCubeMapShader->SetMatrix("Proj", projMatrix);
			GShaderManager->prefilterCubeMapShader->SetFloat("Roughness", roughness);
			GShaderManager->prefilterCubeMapShader->SetTexture("HdrCubeMap", hdrCubeMap->GetTexture());
			GShaderManager->prefilterCubeMapShader->SetTextureSampler("TrilinearFliterClamp", GTrilinearFliterClamp);
			GShaderManager->prefilterCubeMapShader->Apply();
			cubeGameObject->RenderMesh();
		}

		for (int index = 0; index < MaxCubeMapFaces; ++index)
		{
			ReleaseCOM(rtvs[index]);
		}
	}

	GDirectxCore->RecoverDefaultDSS();
	GDirectxCore->RecoverDefualtRS();
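The mip and roughness bookkeeping in the loop above can be verified standalone: each mip halves the face size, and roughness = mip / (MaxMipLevel - 1) spreads the five levels evenly over [0, 1]. A sketch with hypothetical helper names:

```cpp
#include <cmath>

// Face size of a given mip, as computed in the prefilter loop above.
int MipFaceSize(int baseSize, int mip)
{
    return int(baseSize * std::pow(0.5, mip));
}

// Roughness assigned to a mip: mip 0 -> 0.0 (sharpest), mip 4 -> 1.0 (roughest)
// when maxMipLevel is 5.
float MipRoughness(int mip, int maxMipLevel)
{
    return float(mip) / float(maxMipLevel - 1);
}
```

For a 512-pixel base face, the five mips are 512, 256, 128, 64, 32 pixels at roughness 0.0, 0.25, 0.5, 0.75, 1.0.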

PreFilterHdrCubeMap.fx

float4 PS(VertexOut outa) : SV_Target
{
	float3 N = normalize(outa.SkyPos);
	float3 R = N;
	float3 V = R;

	const uint SAMPLE_COUNT = 1024;
	float3 prefilteredColor = float3(0.0, 0.0, 0.0);
	float totalWeight = 0.0;

	for (uint i = 0; i < SAMPLE_COUNT; ++i)
	{
		// generates a sample vector that's biased towards the preferred alignment direction (importance sampling).
		float2 Xi = Hammersley(i, SAMPLE_COUNT);
		float3 H = ImportanceSampleGGX(Xi, N, Roughness);
		float3 L = normalize(2.0 * dot(V, H) * H - V);

		float NdotL = max(dot(N, L), 0.0);
		if (NdotL > 0.0)
		{
			float D = DistributionGGX(N, H, Roughness);
			float NdotH = max(dot(N, H), 0.0);
			float HdotV = max(dot(H, V), 0.0);
			float pdf = D * NdotH / (4.0 * HdotV) + 0.0001;

			float resolution = 512.0; // resolution of source cubemap (per face)
			float saTexel = 4.0 * PI / (6.0 * resolution * resolution);
			float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);

			float mipLevel = Roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel);
			prefilteredColor += HdrCubeMap.SampleLevel(TrilinearFliterClamp, L, mipLevel).rgb * NdotL;
			totalWeight += NdotL;
		}
	}

	prefilteredColor /= totalWeight;
	return float4(prefilteredColor, 1.0);
}

ConvolutedBRDF

Here is the convolution of the second part of the formula above:

 

Some simplification is applied here (below, F(wo, h) is the Fresnel function): because the reflectance fr(p, wi, wo) already contains the F term, the F(wo, h) in the denominator can be cancelled, giving the following:

Performing the convolution on this simplified BRDF formula yields a 2D lookup table (LUT) with n·v and roughness as the U and V axes, holding the value of the simplified IBL specular BRDF formula.
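Substituting the Schlick Fresnel approximation F ≈ F₀ + (1 − F₀)(1 − ωo·h)⁵ and factoring F₀ out splits the integral into a scale and a bias on F₀, which is what the A and B accumulations in IntegrateBRDF below compute:

```latex
\int_{\Omega} f_r(p,\omega_i,\omega_o)\,(n\cdot\omega_i)\,d\omega_i
= F_0\underbrace{\int_{\Omega}\frac{f_r}{F}\Bigl(1-(1-\omega_o\cdot h)^5\Bigr)(n\cdot\omega_i)\,d\omega_i}_{A}
+ \underbrace{\int_{\Omega}\frac{f_r}{F}\,(1-\omega_o\cdot h)^5\,(n\cdot\omega_i)\,d\omega_i}_{B}
```

The LUT stores (A, B) per (n·v, roughness), and the final combine evaluates F₀·A + B.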

ConvolutedBRDFShader.fx. Note that the k coefficient of the geometry-occlusion function for IBL differs from the one used for point lights and directional lights:

float GeometrySchlickGGXForIBL(float NdotV, float roughness)
{
	// note that we use a different k for IBL
	float a = roughness;
	float k = (a * a) / 2.0;

	float nom = NdotV;
	float denom = NdotV * (1.0 - k) + k;

	return nom / denom;
}

float GeometrySmithForIBL(float3 N, float3 V, float3 L, float roughness)
{
	float NdotV = max(dot(N, V), 0.0);
	float NdotL = max(dot(N, L), 0.0);
	float ggx2 = GeometrySchlickGGXForIBL(NdotV, roughness);
	float ggx1 = GeometrySchlickGGXForIBL(NdotL, roughness);

	return ggx1 * ggx2;
}

float2 IntegrateBRDF(float NDotV, float roughness)
{
	float3 V;
	V.x = sqrt(1.0 - NDotV * NDotV);
	V.y = 0.0;
	V.z = NDotV;

	float A = 0.0;
	float B = 0.0;

	float3 N = float3(0.0, 0.0, 1.0);
	const uint SAMPLE_COUNT = 1024;

	for (uint i = 0; i < SAMPLE_COUNT; ++i)
	{
		// generates a sample vector that's biased towards the
		// preferred alignment direction (importance sampling).
		float2 Xi = Hammersley(i, SAMPLE_COUNT);
		float3 H = ImportanceSampleGGX(Xi, N, roughness);
		float3 L = normalize(2.0 * dot(V, H) * H - V);

		float NdotL = max(L.z, 0.0);
		float NdotH = max(H.z, 0.0);
		float VdotH = max(dot(V, H), 0.0);

		if (NdotL > 0)
		{
			float G = GeometrySmithForIBL(N, V, L, roughness);
			float G_Vis = (G * VdotH) / (NdotH * NDotV);

			//V = Wo
			float FC = pow(1.0 - VdotH, 5.0);

			A += (1.0 - FC) * G_Vis;
			B += FC * G_Vis;
		}
	}

	A /= float(SAMPLE_COUNT);
	B /= float(SAMPLE_COUNT);

	return float2(A, B);
}
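IntegrateBRDF can be ported to the CPU almost line for line to spot-check the LUT. At roughness 0 every importance sample returns H = N, so for n·v = 0.5 the result is exactly A = 1 − (1 − v·h)⁵ = 0.96875 and B = 0.03125. A C++ sketch (float3 is a minimal stand-in for the HLSL type; names otherwise mirror the shader):

```cpp
#include <cmath>
#include <cstdint>
#include <algorithm>

// Minimal CPU port of IntegrateBRDF above. Tangent frame is fixed to
// N = (0, 0, 1), which is what the shader uses.
struct float3 { float x, y, z; };

static float dot3(const float3& a, const float3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static float RadicalInverse_Vdc(uint32_t bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10f;
}

static float3 ImportanceSampleGGX(float xiX, float xiY, float roughness)
{
    const float PI = 3.14159265f;
    float a = roughness * roughness;
    float phi = 2.0f * PI * xiX;
    float cosTheta = std::sqrt((1.0f - xiY) / (1.0f + (a * a - 1.0f) * xiY));
    float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
    return { std::cos(phi) * sinTheta, std::sin(phi) * sinTheta, cosTheta };
}

static float GeometrySchlickGGXForIBL(float NdotV, float roughness)
{
    float k = (roughness * roughness) / 2.0f; // IBL k, as in the shader above
    return NdotV / (NdotV * (1.0f - k) + k);
}

// Returns (A, B): the scale and bias applied to F0 in the final combine.
void IntegrateBRDF(float NdotV, float roughness, float& A, float& B)
{
    float3 V = { std::sqrt(1.0f - NdotV * NdotV), 0.0f, NdotV };
    A = 0.0f; B = 0.0f;
    const uint32_t SAMPLE_COUNT = 1024;
    for (uint32_t i = 0; i < SAMPLE_COUNT; ++i)
    {
        float3 H = ImportanceSampleGGX(float(i) / float(SAMPLE_COUNT),
                                       RadicalInverse_Vdc(i), roughness);
        float d = 2.0f * dot3(V, H);
        float3 L = { d * H.x - V.x, d * H.y - V.y, d * H.z - V.z };
        float len = std::sqrt(dot3(L, L));
        L = { L.x / len, L.y / len, L.z / len };

        float NdotL = std::max(L.z, 0.0f);
        float NdotH = std::max(H.z, 0.0f);
        float VdotH = std::max(dot3(V, H), 0.0f);
        if (NdotL > 0.0f)
        {
            float G = GeometrySchlickGGXForIBL(NdotV, roughness)
                    * GeometrySchlickGGXForIBL(NdotL, roughness);
            float G_Vis = (G * VdotH) / (NdotH * NdotV);
            float FC = std::pow(1.0f - VdotH, 5.0f);
            A += (1.0f - FC) * G_Vis;
            B += FC * G_Vis;
        }
    }
    A /= float(SAMPLE_COUNT);
    B /= float(SAMPLE_COUNT);
}
```

For rougher inputs the exact values depend on the sample set, but A stays in (0, 1), B stays small, and A + B stays within the energy bound.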

struct VertexIn
{
	float3 Pos:POSITION;
	float2 Tex:TEXCOORD;
};

struct VertexOut
{
	float4 Pos:SV_POSITION;
	float2 Tex:TEXCOORD0;
};

VertexOut VS(VertexIn ina)
{
	VertexOut outa;
	outa.Pos = float4(ina.Pos.xy, 1.0, 1.0);
	outa.Tex = ina.Tex;
	return outa;
}

float4 PS(VertexOut outa) : SV_Target
{
	float2 color2 =  IntegrateBRDF(outa.Tex.x, outa.Tex.y);
	return float4(color2, 0.0, 0.0);
}

Finally, the pre-filtered cube map and the convolved BRDF LUT are combined in the PBR shader, as follows:

	const float MAX_REF_LOD = 4.0;
	float3 prefliterColor = PrefliterCubeMap.SampleLevel(TrilinearFliterClamp, R, MAX_REF_LOD * roughness).rgb;
	float2 brdf = BrdfLut.Sample(clampLinearSample, float2(nDotv, roughness)).xy;
	float3 iblSpecular = prefliterColor * (ks * brdf.x + brdf.y);

 

The final rendering result:

 

Project source code:

https://github.com/2047241149/SDEngine

 

References

[1]  https://learnopengl.com/PBR/IBL/Diffuse-irradiance

[2]  https://learnopengl.com/PBR/IBL/Specular-IBL

[3] https://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_slides.pdf

[4] https://matheowis.github.io/HDRI-to-CubeMap/

[5] https://github.com/shadercoder/PhysicallyBasedRendering

 

 

 
