Order-Independent Transparency (OIT) in OSG: Weighted Blended and Per-Pixel Linked-List Implementations


 

I recently spent some time studying order-independent transparency (OIT) for translucent geometry and used it to improve the translucent rendering in my OSG-based display engine. Going through the literature, the methods with the best correctness and efficiency are the weight-function blended approach and the GPU per-pixel linked-list approach. There is also a moment-based mathematical method whose results are even better than weighted blending, but it is complex, only DirectX sample code is available, and I could not follow it, so I dropped it. Below I list the problems of the two methods I did implement, as a reference for anyone else interested in OIT; I am no expert :). First, a render of one of my favourite character models; the sheer clothing is drawn with weighted blended OIT. (My wife's only complaint was that the skin on these models looks too oily to be real; skin shading is a hard research topic of its own.)

 

(Figure: WB OIT rendering results)

A close-up of the lace: results like this are what keep me motivated to keep learning.

(Figure: WB OIT rendering result, close-up)

 

And one more screenshot with a car: fast cars and pretty models...

 

(Figure: WB OIT rendering result with a car model)

Below is a brief walk through the algorithm's formulas. Strictly speaking you can implement it by just plugging the formulas in without understanding why they work, but without that understanding you will burn time and energy whenever something goes wrong; luckily the formulas are simple and clear. For more detail, look up the paper Weighted Blended Order-Independent Transparency by Morgan McGuire, the author of the G3D Innovation Engine and a researcher at NVIDIA.

  •  Weighted Blended OIT

   Evolution of the algorithm:

First version: Meshkin's 2007 paper, which first proposed sort-independent alpha blending. The formula is:

C_f = \sum_{i=1}^{n} C_i\,\alpha_i + C_0\Big(1 - \sum_{i=1}^{n}\alpha_i\Big)

Simple and clear: C_0 is the background (destination) color. The formula just sums the alpha-weighted source colors and adds the destination color times (1 minus the sum of the source alphas) as the final blended color. It is not arbitrary: although it is not general, it works best when the fragment colors are similar and the alpha values are small. The derivation, i.e. how the order-dependent factors are dropped, is in the paper; it laid the groundwork for the later weighted-average OIT methods.
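To see exactly what gets dropped, here is a small two-fragment expansion (my own, not taken from the paper) comparing exact back-to-front compositing with Meshkin's sum:

\begin{aligned}
C_\text{exact} &= \alpha_1 C_1 + (1-\alpha_1)\,\alpha_2 C_2 + (1-\alpha_1)(1-\alpha_2)\,C_0\\
 &= \underbrace{\alpha_1 C_1 + \alpha_2 C_2 + (1 - \alpha_1 - \alpha_2)\,C_0}_{\text{Meshkin's formula}} \;+\; \alpha_1\alpha_2\,(C_0 - C_2)
\end{aligned}

The discarded term \alpha_1\alpha_2(C_0 - C_2) is exactly what makes the result order independent, and it is negligible when the alphas are small or the colors are close, which matches the conditions stated above.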

Second version: the weighted average method of Bavoil and Myers, which can be seen as an improvement on the previous one. The formula is:

C_f = \frac{\sum_{i=1}^{n} C_i\,\alpha_i}{\sum_{i=1}^{n} \alpha_i}\left(1 - \Big(1 - \tfrac{1}{n}\sum_{i=1}^{n}\alpha_i\Big)^{n}\right) + C_0\Big(1 - \tfrac{1}{n}\sum_{i=1}^{n}\alpha_i\Big)^{n}

This formula improves on Meshkin's by weighting the sum, which raises correctness considerably and makes it more general. However, when α is 0, fragments that should contribute no color still take part in the weighted average, washing the color out, so the result always looks somewhat transparent. It also has another defect (shared by the later version as well): if an opaque entity is drawn with this formula, the C_0 term goes to 0, but the first part becomes the sum of C_i divided by the sum of α_i, i.e. an average of the colors, so an entity that should be opaque still ends up looking transparent.
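A quick numeric illustration of that last defect, plugging my own values into the formula above: one fully opaque red fragment sits in front of a fully opaque green one, over a black background.

\begin{aligned}
C_1 &= (1,0,0),\ \alpha_1 = 1 \qquad C_2 = (0,1,0),\ \alpha_2 = 1 \qquad C_0 = (0,0,0),\ n = 2\\
C_f &= \frac{C_1\alpha_1 + C_2\alpha_2}{\alpha_1 + \alpha_2}\Big(1 - \big(1 - \tfrac{2}{2}\big)^2\Big) + C_0\big(1 - \tfrac{2}{2}\big)^2 = \tfrac{1}{2}(C_1 + C_2) = (0.5,\ 0.5,\ 0)
\end{aligned}

The hidden green surface contributes half of the final color even though the red surface in front of it is completely opaque.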

Third version: proposed by Morgan McGuire in 2013, improving on the formula above. It fixes the α = 0 case; the opaque-entity defect described above is still not solved. The formula is:

C_f = \frac{\sum_{i=1}^{n} C_i\,\alpha_i\,w(z_i,\alpha_i)}{\sum_{i=1}^{n} \alpha_i\,w(z_i,\alpha_i)}\left(1 - \prod_{i=1}^{n}(1-\alpha_i)\right) + C_0\prod_{i=1}^{n}(1-\alpha_i)

The summation of α in the background term is replaced by a product of (1 - α_i), which fixes the handling of fully transparent fragments, and the formula takes both fragment depth and α into account through the weight function w(). Intuitively, a nearer translucent surface covers the ones behind it, so the final color should stay closest to the frontmost surface. The paper proposes several weight functions that work well; the one used in my shader code below is:

w(z, \alpha) = \mathrm{clamp}\!\Big(\big(\min(1,\,10\,\alpha) + 0.01\big)^{3}\cdot 10^{8}\cdot (1 - 0.9\,z)^{3},\ \ 10^{-2},\ \ 3\cdot 10^{3}\Big)

where z is the window-space depth (gl_FragCoord.z).

With the formulas in hand the implementation is straightforward: the weighted-average and weighted blended OIT methods come down to accumulating sums and then averaging.
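Concretely, in the implementation below a single accumulation pass over all translucent geometry writes two render targets, and the fixed-function blend does the summation; reading my code against the formula, the buffers end up holding:

\begin{aligned}
\text{\_accumTexture.rgb} &= \sum_i C_i\,\alpha_i\,w(z_i,\alpha_i)\\
\text{\_accumAlphaTexture.r} &= \sum_i \alpha_i\,w(z_i,\alpha_i)\\
\text{\_accumTexture.a} &= \prod_i (1-\alpha_i)
\end{aligned}

The RGB channels and the second target use additive blending (ONE, ONE), while the alpha channel of the first target is blended with (ZERO, ONE_MINUS_SRC_ALPHA) and cleared to 1.0, so it accumulates the revealage product. The draw pass then divides to form the weighted-average color and mixes it with the opaque background using that revealage.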

My implementation code is below; I will not provide a demo program, since the code is easy to copy and adapt. It is implemented with OSG post-processing passes.

void WB_OITRenderPass::initialize()
{
	_pass = new osg::Group();
	_pass->setName("WB_OIT");

	_accumTexture = createTexture2D(getFrameBufferWidth(), getFrameBufferHeight(), GL_RGBA16F_ARB, GL_RGBA, GL_FLOAT);
	_accumTexture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
	_accumTexture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
	_accumTexture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
	_accumTexture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);

	_accumAlphaTexture = createTexture2D(getFrameBufferWidth(), getFrameBufferHeight(), GL_R16F, GL_RED, GL_FLOAT);
	_accumAlphaTexture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
	_accumAlphaTexture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
	_accumAlphaTexture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
	_accumAlphaTexture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);

	// Accum pass.
	_accumPass = createRTTCamera(getFrameBufferWidth(), getFrameBufferHeight(), false, GL_COLOR_BUFFER_BIT);
	_accumPass->setName("WB_OIT_AccumPass");
	_accumPass->attach(osg::Camera::COLOR_BUFFER0, _accumTexture);
	_accumPass->attach(osg::Camera::COLOR_BUFFER1, _accumAlphaTexture);
	_accumPass->attach(osg::Camera::DEPTH_BUFFER, getContext()->_depthBuffer);
	_accumPass->addChild(getContext()->getPipeline()->getSceneRoot());
	_accumPass->setCullCallback(new PassCallback(getContext()->getPipeline()));
	_accumPass->setClearColor(osg::Vec4(0.0, 0.0, 0.0, 1.0));

	osg::StateSet* ss = setShaderProgram(_accumPass, "trans_accum", osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
	ss->setMode(GL_CULL_FACE,  osg::StateAttribute::OFF|osg::StateAttribute::OVERRIDE);

	osg::Depth* depth = new osg::Depth;
	depth->setFunction(osg::Depth::LEQUAL);
	depth->setWriteMask(false);
	ss->setAttributeAndModes(depth, osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
	
	// RGB in both attached buffers accumulates additively (ONE, ONE); the alpha channel of
	// color buffer 0 accumulates the revealage product via (ZERO, ONE_MINUS_SRC_ALPHA),
	// starting from the clear alpha of 1.0 set above.
	osg::BlendFunc* bf = new osg::BlendFunc(osg::BlendFunc::ONE, osg::BlendFunc::ONE, osg::BlendFunc::ZERO, osg::BlendFunc::ONE_MINUS_SRC_ALPHA);
	ss->setAttributeAndModes(bf, osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);

	// Draw pass
	_drawPass = createRTTCamera(getFrameBufferWidth(), getFrameBufferHeight(), true, GL_DEPTH_BUFFER_BIT|GL_COLOR_BUFFER_BIT);
	_drawPass->attach(osg::Camera::COLOR_BUFFER, getContext()->_outputTextureWithLum);
	_drawPass->attach(osg::Camera::DEPTH_BUFFER, getContext()->_tempDepthBuffer);
	_drawPass->setName("WB_OIT_DrawPass");

	ss = setShaderProgram(_drawPass, "trans_draw");

	ss->setTextureAttributeAndModes(0, _accumTexture);
	ss->addUniform(new osg::Uniform("Accumulate", 0));

	ss->setTextureAttributeAndModes(1, _accumAlphaTexture);
	ss->addUniform(new osg::Uniform("AccumulateAlpha", 1));

	ss->setTextureAttributeAndModes(2, getContext()->_outputTexture);
	ss->addUniform(new osg::Uniform("Opacity", 2));
	
	_pass->addChild(_accumPass);
	_pass->addChild(_drawPass);

	getContext()->addPass(this);
}
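The helpers createTexture2D(), createRTTCamera(), setShaderProgram() and PassCallback belong to my pipeline framework and are not shown here. As a rough idea of what they do, here is a minimal sketch of createTexture2D() (an assumption about what such a helper does, not the exact code):

#include <osg/Texture2D>

// Hypothetical sketch of the createTexture2D() helper used above: it only fills in
// the size and format fields; wrap and filter modes are set by the caller.
static osg::Texture2D* createTexture2D(int width, int height,
                                        GLint internalFormat,
                                        GLenum sourceFormat,
                                        GLenum sourceType)
{
    osg::Texture2D* texture = new osg::Texture2D;
    texture->setTextureSize(width, height);
    texture->setInternalFormat(internalFormat);
    texture->setSourceFormat(sourceFormat);
    texture->setSourceType(sourceType);
    return texture;
}

createRTTCamera() presumably wraps an FBO pre-render osg::Camera with the given viewport and clear mask in the same spirit.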

Shader code. First, the vertex shader for the accumulation pass:

#version 420 core
 
in vec4 osg_Vertex;
in vec3 osg_Normal;
in vec4 osg_MultiTexCoord0;

uniform mat4 osg_ViewMatrix;
uniform mat4 osg_ViewMatrixInverse;
uniform mat4 osg_ModelViewMatrix;
uniform mat4 osg_ModelViewProjectionMatrix;

out vec2 TexCoords;
out vec3 WorldPos;
out vec3 WorldNormal;

void main()
{
    TexCoords   = osg_MultiTexCoord0.xy;  

    mat4 worldMatrix = osg_ViewMatrixInverse * osg_ModelViewMatrix;
    WorldPos = (worldMatrix * osg_Vertex).xyz;
          
    mat3 normalMatrix = mat3(worldMatrix);
    WorldNormal = normalize(normalMatrix * osg_Normal); 
          
    gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex;
}

The fragment shader for the accumulation pass:

#version 420 core
 
#extension GL_ARB_shader_image_load_store : enable
layout (early_fragment_tests) in;
 
#include "chunk_math.glsl"
#include "forward_pbr_shading_parameters.frag"
#include "chunk_shadowmap.frag"
#include "chunk_light.frag"
#include "tone_mapping.frag"

 float weight(float z, float a) 
 {
	return clamp(pow(min(1.0, a * 10.0) + 0.01, 3.0) * 1e8 * pow(1.0 - z * 0.9, 3.0), 1e-2, 3e3);
 }
 
vec4 shading()
{

	vec4  _albedo    = getAlbedo();
	float _roughness = getRoughness();
	float _metallic  = getMetallic();
	float _ao        = getAo();
	vec3 camPos      = getCameraPosition();
	vec3 worldNormal = normalize(getWorldNormal());
	vec4 worldPos    = getWorldPosition();
	
	float _ssao      = 1.0;
	float _shadow    = 0.0;
	
	#if !defined(MATERIAL_IS_TRANSPARENT) && !defined(MAP_IS_TRANSPARENT)
		if(transparency < 0.0001)
		{
			_ssao   = getSSAO();
			_shadow = getShadow(vec4(WorldPos,1.0), WorldNormal);
		}
	#endif
	
	vec3 F0 = 0.16 * reflectance * reflectance * (1.0 - metallic) + _albedo.rgb * metallic;
	vec3 diffuse = _albedo.rgb * (1.0 - metallic);
	
	vec3 Lo = vec3(0.0);
    vec3 ambient = vec3(0.0);
	
#ifdef NUMBER_LIGHTS
	for (int i = 0; i < NUMBER_LIGHTS; ++i)
	{
		Light light = Lights[i];
		Lo += CalcPointOrDirectionalLight( light, camPos, worldPos.xyz, worldNormal, F0, diffuse, metallic, roughness, ambient );
	}
#endif

	//Ambient lighting 
	#ifdef IRRADIANCEMAP
		vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);
		vec3 kD = 1.0 - kS;
		kD *= 1.0 - _metallic;	  
		vec3 irradiance = texture(irradianceMap, N).rgb;
		diffuse = irradiance * _albedo.rgb;   // reuse 'diffuse'; _albedo is vec4, so take .rgb
		ambient = (kD * diffuse) * _ao;
	#else
		ambient = ambient * _albedo.rgb * _ao;
	#endif
	
	Lo = Lo*( 1.0 - _shadow);
	vec3 color = ambient* _ssao + Lo;
	
	#ifdef IRRADIANCEMAP
          //vec3 I = normalize(WorldPos - camPos);
          //vec3 R = reflect(I, normalize(WorldNormal));
          //color = texture(irradianceMap, R).rgb;
		  //color = texture(irradianceMap,TexCoords2).rgb;
	#endif
	
	color = tonemap(color);
	
	vec4 outputColor = vec4(color, _albedo.a);
	
	return outputColor;
}

layout (location = 0) out vec4 out_accumColor;
layout (location = 1) out float out_accumAlpha;

void main()
{		
     vec4 color = shading();
     color.rgb *= color.a;
     float w = weight(gl_FragCoord.z, color.a);
     out_accumColor = vec4(color.rgb * w, color.a);
     out_accumAlpha = color.a * w;
}
The vertex shader for the composite (draw) pass:

#version 420 core

in vec4 osg_MultiTexCoord0;
in vec4 osg_Vertex;

uniform mat4 osg_ModelViewProjectionMatrix;

out vec2 TexCoord;

void main()
{
	TexCoord    = osg_MultiTexCoord0.xy;
	gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex;
}
The fragment shader for the composite (draw) pass:

#version 420 core

#include "chunk_math.glsl"

in vec2 TexCoord;

uniform sampler2D Accumulate;
uniform sampler2D AccumulateAlpha;
uniform sampler2D Opacity;

layout (location = 0) out vec4 fragColor;

void main()
{    	 
	 ivec2 fragCoord = ivec2(gl_FragCoord.xy);
	 
     vec4 accum = texelFetch(Accumulate, fragCoord, 0);
	 float r = accum.a;
     accum.a = texelFetch(AccumulateAlpha, fragCoord, 0).r;
	 
     vec4 color = vec4(accum.rgb / clamp(accum.a, 0.0001, 50000.0), r);	
	 color.rgb = pow(color.rgb, vec3(1.0/2.2));
	 
	 vec4 opaqueColor = texelFetch(Opacity, fragCoord, 0).rgba;
	 vec3 outputColor = mix(color.rgb, opaqueColor.rgb, color.a);

     // luminance() computes a brightness value used later by FXAA anti-aliasing; for a quick test you can simply write 1.0 into the alpha channel.
	 fragColor = vec4(outputColor, luminance(outputColor));
}

 
   

Conclusion: Weighted Blended OIT looks acceptable and is far more plausible than rendering without OIT, but since the blend is derived by dropping the order-dependent terms from the exact formula, it has some defects:

1) Opaque entities are rendered as if they were transparent (the defect described above), and if a texture has an alpha channel you cannot get a clean cut-out effect in its transparent regions.

2) The result is not strictly correct, only plausible overall.

 

  • OIT with per-pixel linked lists on the GPU

Straight to the source code; the OpenGL Programming Guide already explains the principle very clearly, and the problems with this method are discussed at the end:

 

Program initialization. The most valuable part of this code is the use of OSG's atomic counter and TBO storage. Fortunately, although OSG has declined in popularity, its API updates have kept pace reasonably well; it supports compute and geometry shaders, atomic operations, TBOs, and so on.

void PPLL_OITPass::initialize()
{
	_OITRoot = new osg::Group();

	_FinalOIT = createTexture2D(getViewportWidth(), getViewportHeight(), GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE);
	_FinalOIT->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
	_FinalOIT->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
	_FinalOIT->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR_MIPMAP_LINEAR);
	_FinalOIT->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);

	//Head pointer texture.
	_head_pointer_texture = createTexture2D(getViewportWidth(), getViewportHeight(), GL_R32UI, GL_RED_INTEGER_EXT, GL_UNSIGNED_INT);
	_head_pointer_texture->setWrap(osg::Texture2D::WRAP_S, osg::Texture2D::CLAMP_TO_EDGE);
	_head_pointer_texture->setWrap(osg::Texture2D::WRAP_T, osg::Texture2D::CLAMP_TO_EDGE);
	_head_pointer_texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
	_head_pointer_texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
	_head_pointer_image = new osg::BindImageTexture(0,
		_head_pointer_texture,
		osg::BindImageTexture::READ_WRITE, GL_R32UI,
		0,
		false,
		0);

    // Create the atomic counter.
	osg::ref_ptr<osg::UIntArray> atomicCounterArray = new osg::UIntArray;
	atomicCounterArray->push_back(0);
	osg::ref_ptr<osg::AtomicCounterBufferObject> acbo = new osg::AtomicCounterBufferObject;
	acbo->setUsage(GL_STREAM_COPY);
	atomicCounterArray->setBufferObject(acbo.get());
	osg::ref_ptr<osg::AtomicCounterBufferBinding> acbb = new osg::AtomicCounterBufferBinding(0, atomicCounterArray.get(), 0, sizeof(GLuint));
	acbb->setUpdateCallback(new ResetAtomicCounter);

	// Create the linked-list storage (a texture buffer object).
#define OIT_LAYERS  3
	int linked_list_buffer_item_size = 2048 * 2048 * OIT_LAYERS;
	osg::ref_ptr<osg::UIntArray> linked_list_buffer = new osg::UIntArray;
	osg::ref_ptr<osg::PixelDataBufferObject> pdbo = new osg::PixelDataBufferObject();
	pdbo->setUsage(GL_DYNAMIC_COPY);
	pdbo->setTarget(GL_TEXTURE_BUFFER);
	pdbo->setDataSize(linked_list_buffer_item_size * sizeof(GLuint) * 4);
	linked_list_buffer->setBufferObject(pdbo);

	osg::ref_ptr<osg::TextureBuffer> tbo = new osg::TextureBuffer;
	tbo->setBufferData(linked_list_buffer);
	tbo->setInternalFormat(GL_RGBA32UI_EXT);
	osg::BindImageTexture* linked_list_image = new osg::BindImageTexture(1,
		tbo.get(),
		osg::BindImageTexture::WRITE_ONLY, GL_RGBA32UI_EXT,
		0,
		false,
		0);

    // Before each frame, an initial pass clears the head-pointer texture. The Red Book initializes it by binding a PBO directly; here I simply do one extra off-screen render pass.
	_OITClearHeadPointerPass = createRTTCamera(getViewportWidth(), getViewportHeight(), true, GL_DEPTH_BUFFER_BIT);
	_OITClearHeadPointerPass->attach(osg::Camera::DEPTH_BUFFER, getContext()->_tempDepthBuffer);
	osg::StateSet* ss = setShaderProgram(_OITClearHeadPointerPass, "clear_head_pointer");
	ss->setAttribute(_head_pointer_image);

    // Pass that builds the per-pixel linked lists.
	_OITPass = createRTTCamera(getViewportWidth(), getViewportHeight(), false, 0);
	_OITPass->setName("OIT_Trans_accum");
	_OITPass->attach(osg::Camera::DEPTH_BUFFER, getContext()->_depthBuffer);
	_OITPass->addChild(getContext()->getPipeline()->getSceneRoot());
	_OITPass->setCullCallback(new PassCallback(getContext()->getPipeline()));
	_OITPass->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR);

	ss = setShaderProgram(_OITPass, "build_lists", osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
	ss->setAttribute(_head_pointer_image);
	ss->setAttribute(linked_list_image);
	ss->setAttribute(acbb);
	ss->setMode(GL_BLEND, osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
	ss->setMode(GL_CULL_FACE, osg::StateAttribute::OFF | osg::StateAttribute::OVERRIDE);
	ss->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
	ss->addUniform(new osg::Uniform("itemCount", linked_list_buffer_item_size));

    // Sort and blend the lists to produce the final image.
	_OITDrawPass = createRTTCamera(getViewportWidth(), getViewportHeight(), true);
	_OITDrawPass->attach(osg::Camera::DEPTH_BUFFER, getContext()->_tempDepthBuffer);
	_OITDrawPass->attach(osg::Camera::COLOR_BUFFER0, _FinalOIT);
	ss = setShaderProgram(_OITDrawPass, "resolve_lists");
	ss->setAttribute(_head_pointer_image);
	ss->setAttribute(linked_list_image);
	ss->setTextureAttributeAndModes(0, getContext()->_outputTexture);
	ss->addUniform(new osg::Uniform("Final", 0));

	_OITRoot->addChild(_OITClearHeadPointerPass);
	_OITRoot->addChild(_OITPass);
	_OITRoot->addChild(_OITDrawPass);

	getContext()->addPass(this);
}

void PPLL_OITPass::uninitialize()
{
	getContext()->removePass(this);
}
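The ResetAtomicCounter update callback attached to the atomic counter binding in initialize() is not shown either. A minimal sketch of what it needs to do follows; note this variant takes the counter array as a constructor argument, while the version used above takes none (it presumably reaches the array through the bound attribute instead):

#include <osg/Array>
#include <osg/StateAttributeCallback>
#include <osg/ref_ptr>

// Hypothetical sketch of a ResetAtomicCounter callback: clears the fragment counter
// each frame so linked-list slots are allocated from the start of the buffer again.
struct ResetAtomicCounter : public osg::StateAttributeCallback
{
    ResetAtomicCounter(osg::UIntArray* counters) : _counters(counters) {}

    virtual void operator()(osg::StateAttribute*, osg::NodeVisitor*)
    {
        if (_counters.valid())
        {
            (*_counters)[0] = 0;  // reset the atomic counter value
            _counters->dirty();   // mark for re-upload to the GPU before the next frame
        }
    }

    osg::ref_ptr<osg::UIntArray> _counters;
};

Without a per-frame reset the counter keeps growing and the build pass immediately falls into its overflow branch.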

 

The vertex shader for the build_lists pass:

#version 420 core
 
in vec4 osg_MultiTexCoord0;
in vec3 osg_Normal;
in vec4 osg_Vertex;
 
out vec2 TexCoords;
out vec3 WorldPos;
out vec3 WorldNormal;

uniform mat4 osg_ViewMatrixInverse;
uniform mat4 osg_ModelViewMatrix;
uniform mat4 osg_ViewMatrix;
uniform mat4 osg_ModelViewProjectionMatrix;

void main()
{
    TexCoords = osg_MultiTexCoord0.xy;  

    mat4 worldMatrix = osg_ViewMatrixInverse * osg_ModelViewMatrix;
    WorldPos = (worldMatrix * osg_Vertex).xyz;
          
    mat3 normalMatrix = mat3(worldMatrix);
    WorldNormal = normalize(normalMatrix * osg_Normal); 
          
    gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex;
}
The fragment shader for the build_lists pass:

#version 420 core

layout (early_fragment_tests) in;
layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
layout (binding = 1, rgba32ui) uniform writeonly uimageBuffer list_buffer;
layout (binding = 0, offset = 0) uniform atomic_uint list_counter;

#include "chunk_math.glsl"
#include "forward_pbr_shading_parameters.frag"
#include "chunk_light.frag"
#include "tone_mapping.frag"

vec4 shading()
{		
	vec4  _albedo    = getAlbedo();
	float _roughness = getRoughness();
	float _metallic  = getMetallic();
	float _ao        = getAo();
	vec3 camPos      = getCameraPosition();
	vec4 worldPos    = getWorldPosition();
	
	vec3 F0 = 0.16 * reflectance * reflectance * (1.0 - metallic) + _albedo.rgb * metallic;
	vec3 diffuse = _albedo.rgb * (1.0 - metallic);
	
	vec3 worldNormal;
	
	worldNormal = WorldNormal;

	vec3 Lo = vec3(0.0);
    vec3 ambient = vec3(0.0);
	
#ifdef NUMBER_LIGHTS
	for (int i = 0; i < NUMBER_LIGHTS; ++i)
	{
		Light light = Lights[i];
		Lo += CalcPointOrDirectionalLight( light, camPos, worldPos.xyz, worldNormal, F0, diffuse, metallic, roughness, ambient );
	}
#endif

	//Ambient lighting 
	#ifdef IRRADIANCEMAP
		vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);
		vec3 kD = 1.0 - kS;
		kD *= 1.0 - _metallic;	  
		vec3 irradiance = texture(irradianceMap, N).rgb;
		diffuse = irradiance * _albedo.rgb;   // reuse 'diffuse'; _albedo is vec4, so take .rgb
		ambient = (kD * diffuse) * _ao;
	#else
		ambient = ambient * _albedo.rgb * _ao;
	#endif
	
	vec3 color = ambient + Lo;
	
	color = tonemap(color);
	
	//Gamma correction
	color = pow(color, vec3(1.0/2.2)); 

	#ifdef IRRADIANCEMAP
          //vec3 I = normalize(WorldPos - camPos);
          //vec3 R = reflect(I, normalize(WorldNormal));
          //color = texture(irradianceMap, R).rgb;
		  //color = texture(irradianceMap,TexCoords2).rgb;
	#endif
	
	return vec4(color, _albedo.a);
}

uniform int itemCount;

void main(void)
{
    uint index;
    uint old_head;
    uvec4 item;
	
    index = atomicCounterIncrement(list_counter) + 2;
	
	if(index > itemCount-1)
	{
	  index = 1;
	  old_head = 0;
	  imageAtomicExchange(head_pointer_image, ivec2(gl_FragCoord.xy), uint(index));
	  
	  item.x = old_head;
	  item.y = packUnorm4x8(vec4(1.0,0.0,0.0,1.0));
	  item.z = floatBitsToUint(gl_FragCoord.z);
	  imageStore(list_buffer, int(0), item);
	}
	else
	{
	    old_head = imageAtomicExchange(head_pointer_image, ivec2(gl_FragCoord.xy), uint(index));
		
		vec4 surface_color = shading();
		item.x = old_head;
		item.y = packUnorm4x8(surface_color);
		item.z = floatBitsToUint(gl_FragCoord.z);
		imageStore(list_buffer, int(index-1), item);
	}

}
The vertex shader for the resolve_lists pass:

#version 420 core

in vec4 osg_Vertex;
in vec4 osg_MultiTexCoord0;
uniform mat4 osg_ModelViewProjectionMatrix;

out vec2 TexCoord;

void main(void)
{
    TexCoord = osg_MultiTexCoord0.xy;
    gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex;
}

The fragment shader for the resolve_lists pass:

#version 420 core

#pragma import_defines ( USE_FXAA )

#include "chunk_math.glsl"

// The per-pixel image containing the head pointers
layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
// Buffer containing linked lists of fragments
layout (binding = 1, rgba32ui) uniform uimageBuffer list_buffer;

// This is the output color
layout (location = 0) out vec4 finalColor;

// This is the maximum number of overlapping fragments allowed
#define MAX_FRAGMENTS 40

// Temporary array used for sorting fragments
uvec4 fragment_list[MAX_FRAGMENTS];

in vec2 TexCoord;
uniform sampler2D Final;

void main(void)
{
    int current_index;
    uint fragment_count = 0;

    current_index = int(imageLoad(head_pointer_image, ivec2(gl_FragCoord).xy).x) - 1;
	
    while (current_index >= 0 && fragment_count < MAX_FRAGMENTS)
    {
        uvec4 fragment = imageLoad(list_buffer, int(current_index));
        fragment_list[fragment_count] = fragment;
        current_index = int(fragment.x) -1;
        fragment_count++;
    }
	
    uint i, j;
    if (fragment_count > 1)
    {

        for (i = 0; i < fragment_count - 1; i++)
        {
            for (j = i + 1; j < fragment_count; j++)
            {
                uvec4 fragment1 = fragment_list[i];
                uvec4 fragment2 = fragment_list[j];

                float depth1 = uintBitsToFloat(fragment1.z);
                float depth2 = uintBitsToFloat(fragment2.z);

                if (depth1 < depth2)
                {
                    fragment_list[i] = fragment2;
                    fragment_list[j] = fragment1;
                }
            }
        }

    }
	
    vec3 backgroundColor = texture(Final, TexCoord.xy).rgb;
    for (i = 0; i < fragment_count; i++)
    {
        vec4 modulator  = unpackUnorm4x8(fragment_list[i].y);
        backgroundColor = mix(backgroundColor.rgb, modulator.rgb, modulator.a);
    }

	finalColor = vec4(backgroundColor, luminance(backgroundColor));
}

Linked-list OIT is without doubt the most correct way to draw translucent geometry, but it has drawbacks that are hard to work around:

1) Resource usage is unpredictable and the list buffer must be pre-allocated. With complex translucent geometry and many depth layers it is easy to exhaust the pre-allocated list memory, which breaks the rendering. Each list item here is one rgba32ui, i.e. 16 bytes, so at a full 2K resolution a single transparency layer already takes 2048 × 2048 × 16 bytes = 64 MB; see the arithmetic after this list.

2) Objects flicker when they become small on screen, and the flicker disappears when they are enlarged again; I am not sure whether this is an atomic-operation issue or simply the lack of mipmaps.
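For reference, the arithmetic for the pre-allocated list buffer (each item is one rgba32ui, 16 bytes):

\text{list memory} = W \times H \times K_\text{layers} \times 16\ \text{bytes}, \qquad 2048 \times 2048 \times 3 \times 16\ \text{B} = 192\ \text{MB}, \qquad 2048 \times 2048 \times 8 \times 16\ \text{B} = 512\ \text{MB}

Budgeting for deeper transparency quickly becomes expensive, and any pixel that exceeds the budget falls into the overflow branch of the build shader.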

If you do not need strict correctness, weighted blended OIT is good enough, and its performance is also quite good.

Demo model viewer download link: https://pan.baidu.com/s/1H4lS-iKoTqroq6V-xiwpkw  extraction code: f3ig
