three.js / implementing normal mapping with a custom ShaderMaterial

Result:
(screenshots)

There were two main difficulties this time:
1. How to perform the tangent-space transformation so that normals stay correct even when the mesh is transformed. (In the screenshots the plane has been rotated 90° about the x axis.)
2. Computing the lighting correctly.

Exploration

Initially there seemed to be three possible approaches:
1. Pass tangents in as an attribute; this is the approach used in the OpenGL tutorials.
2. Compute everything in the vertex shader; there is a blog post at http://www.zwqxin.com/archives/shaderglsl/review-normal-map-bump-map.html but I couldn't follow it.
3. Compute everything in the fragment shader:
https://github.com/mrdoob/three.js/blob/6e89128f1ae239f29f2124a43133bb3d767b19bf/src/renderers/shaders/ShaderChunk/normalmap_pars_fragment.glsl

Blog post explaining the underlying idea: http://hacksoflife.blogspot.com/2009/11/per-pixel-tangent-space-normal-mapping.html

https://github.com/mrdoob/three.js/issues/7094 is the thread discussing tangent-space support in three.js; it shows that support existed until around r70 but has since been removed.
At first the attribute approach seemed too cumbersome: to do it (in the style of the learn-opengl tutorial) you would have to read the geometry's face data, compute per-face tangents, and then build a new BufferGeometry carrying a custom attribute.
After a couple of days of digging, however, it turns out three.js uses the third approach: it computes the TBN matrix in the fragment shader using the built-in GLSL derivative functions dFdx and dFdy.
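For reference, the attribute approach mentioned above boils down to solving a small linear system per triangle: each edge equals its UV delta times the (tangent, bitangent) basis. A minimal CPU sketch in plain JS (the function name and array layout are mine, not any three.js API):

```javascript
// Compute the (unnormalized) tangent of one triangle from its three
// positions p0..p2 and UVs uv0..uv2, learn-opengl style:
// solve  edge1 = dU1*T + dV1*B  and  edge2 = dU2*T + dV2*B  for T.
function faceTangent(p0, p1, p2, uv0, uv1, uv2) {
  const e1 = [p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]];
  const e2 = [p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2]];
  const dU1 = uv1[0] - uv0[0], dV1 = uv1[1] - uv0[1];
  const dU2 = uv2[0] - uv0[0], dV2 = uv2[1] - uv0[1];
  const f = 1.0 / (dU1 * dV2 - dU2 * dV1); // inverse determinant of the UV delta matrix
  return [
    f * (dV2 * e1[0] - dV1 * e2[0]),
    f * (dV2 * e1[1] - dV1 * e2[1]),
    f * (dV2 * e1[2] - dV1 * e2[2]),
  ];
}
```

Per-vertex tangents would then be averaged over the faces sharing each vertex and uploaded as a custom BufferAttribute.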

The code I started with was this snippet:

https://github.com/mrdoob/three.js/blob/6e89128f1ae239f29f2124a43133bb3d767b19bf/src/renderers/shaders/ShaderChunk/normalmap_pars_fragment.glsl

// Per-Pixel Tangent Space Normal Mapping
// http://hacksoflife.blogspot.ch/2009/11/per-pixel-tangent-space-normal-mapping.html

vec3 perturbNormal2Arb( vec3 eye_pos, vec3 surf_norm ) {

    // Workaround for Adreno 3XX dFd*( vec3 ) bug. See #9988

    vec3 q0 = vec3( dFdx( eye_pos.x ), dFdx( eye_pos.y ), dFdx( eye_pos.z ) );
    vec3 q1 = vec3( dFdy( eye_pos.x ), dFdy( eye_pos.y ), dFdy( eye_pos.z ) );
    vec2 st0 = dFdx( vUv.st );
    vec2 st1 = dFdy( vUv.st );

    float scale = sign( st1.t * st0.s - st0.t * st1.s ); // we do not care about the magnitude

    vec3 S = normalize( ( q0 * st1.t - q1 * st0.t ) * scale );
    vec3 T = normalize( ( - q0 * st1.s + q1 * st0.s ) * scale );
    vec3 N = normalize( surf_norm );
    mat3 tsn = mat3( S, T, N );

    vec3 mapN = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;

    mapN.xy *= normalScale;
    mapN.xy *= ( float( gl_FrontFacing ) * 2.0 - 1.0 );

    return normalize( tsn * mapN );

}
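To see what the S/T construction actually produces, here is the same algebra re-run on the CPU with hand-picked derivative values (hypothetical numbers for a flat patch whose u axis is world x and v axis is world y; this is an illustration, not part of three.js):

```javascript
// q0/q1 play the role of dFdx/dFdy of the view-space position, and
// st0/st1 of dFdx/dFdy of the UVs, matching perturbNormal2Arb above.
function computeST(q0, q1, st0, st1) {
  const scale = Math.sign(st1[1] * st0[0] - st0[1] * st1[0]);
  const S = [
    (q0[0] * st1[1] - q1[0] * st0[1]) * scale,
    (q0[1] * st1[1] - q1[1] * st0[1]) * scale,
    (q0[2] * st1[1] - q1[2] * st0[1]) * scale,
  ];
  const T = [
    (-q0[0] * st1[0] + q1[0] * st0[0]) * scale,
    (-q0[1] * st1[0] + q1[1] * st0[0]) * scale,
    (-q0[2] * st1[0] + q1[2] * st0[0]) * scale,
  ];
  return { S, T }; // the GLSL version additionally normalizes both
}
```

With a patch whose position changes by +x per unit u and +y per unit v (q0 = (1,0,0), q1 = (0,1,0), st0 = (1,0), st1 = (0,1)), S comes out along x (the u direction) and T along y (the v direction), which is exactly what a tangent/bitangent pair should be.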

Searching the GitHub repository, the call site of this function is at
https://github.com/mrdoob/three.js/blob/6e89128f1ae239f29f2124a43133bb3d767b19bf/src/renderers/shaders/ShaderChunk/normal_fragment_maps.glsl#L23

normal = perturbNormal2Arb( -vViewPosition, normal );

vViewPosition = -modelViewPosition

// vertex shader
modelViewPosition = modelViewMatrix * vec4(position, 1.0);

At this point I hadn't yet discovered the lighting bug, and since I couldn't quite follow this method, I (somewhat by accident) found another piece of code:
https://github.com/mrdoob/three.js/blob/f0936b0c3e4d050dc412b5b922e38400d54f4010/examples/js/ShaderSkin.js#L426

This code implements a skin shader and is well worth reading in its own right: it shows how to compute lighting by hand and how to implement normal mapping manually. It is what I based my shader on, and its tangent-space code is also easier to understand. For now I only have a rough picture; why -vViewPosition is passed in (it is the fragment's view-space position, since vViewPosition = -mvPosition.xyz) and why the tangent is computed this way still need further study:

// normal mapping

"vec4 posAndU = vec4( -vViewPosition, vUv.x );",
"vec4 posAndU_dx = dFdx( posAndU ),  posAndU_dy = dFdy( posAndU );",
"vec3 tangent = posAndU_dx.w * posAndU_dx.xyz + posAndU_dy.w * posAndU_dy.xyz;",
"vec3 normal = normalize( vNormal );",
"vec3 binormal = normalize( cross( tangent, normal ) );",
"tangent = cross( normal, binormal );", // no normalization required
"mat3 tsb = mat3( tangent, binormal, normal );",

"vec3 normalTex = texture2D( tNormal, vUv ).xyz * 2.0 - 1.0;",
"normalTex.xy *= uNormalScale;",
"normalTex = normalize( normalTex );",

"vec3 finalNormal = tsb * normalTex;",
"normal = normalize( finalNormal );",

Note the vNormal here: it is computed in the vertex shader as follows and lives in view space; all subsequent computation happens in view space:

//normalMatrix = inverse transpose of modelViewMatrix
vNormal = normalize( normalMatrix *  normal );

Because the lighting data three.js supplies is in view space, and I hadn't noticed that, I kept using code written for world-space lighting, which is why the result was always wrong. Once I finally spotted it, changing the viewer position (originally set to the camera position) to vec3(0.0, 0.0, 0.0) fixed it.
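Why vec3(0.0) is the correct viewer position: the view matrix is the inverse of the camera's world transform, so it maps the camera's own position to the origin. A small sanity check in plain JS, assuming a rigid (rotation + translation) camera transform and row-major 3x3 arrays (my own helper, not a three.js function):

```javascript
// Camera world transform: worldPos = R * localPos + t.
// Its inverse (the view matrix) is view(p) = Rᵀ * (p - t),
// so applying it to the camera position t itself must give the origin.
function viewTransform(R, t, p) {
  const d = [p[0] - t[0], p[1] - t[1], p[2] - t[2]];
  // multiply by Rᵀ (R is a row-major 3x3 rotation)
  return [
    R[0][0] * d[0] + R[1][0] * d[1] + R[2][0] * d[2],
    R[0][1] * d[0] + R[1][1] * d[1] + R[2][1] * d[2],
    R[0][2] * d[0] + R[1][2] * d[1] + R[2][2] * d[2],
  ];
}
```

This is why, once all lighting math lives in view space, the "eye position" is simply the origin.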

Although I still don't fully understand the math, all this tinkering gave me a first real grasp of ShaderChunk and the other pieces that used to puzzle me.

Code

Only a simple point light is implemented here, with no distance attenuation.
vertex shader:

varying vec2 vUv;
varying vec3 viewPos;
varying vec3 worldPos;
varying vec3 vNormal;
varying vec3 vViewPosition;
uniform mat4 transform; // unused in this program; it is an identity matrix
//            uniform vec3 cameraPosition;
//            uniform mat3 normalMatrix; // = inverse transpose of modelViewMatrix
//            uniform mat4 viewMatrix;
//            uniform mat4 projectionMatrix;
//            uniform mat4 modelViewMatrix;
//            uniform mat4 modelMatrix;
void main() {
    vUv = uv;
    vNormal = normal;
    worldPos = (viewMatrix*transform*modelMatrix*vec4( position, 1.0 )).xyz; // NB: despite the name, this is the view-space position
    //   viewPos = cameraPosition; // world-space lighting
    viewPos = vec3(0.0,0.0,0.0); // view-space lighting


    vNormal = normalize( normalMatrix *  normal );
    vec4 mvPosition =  viewMatrix*transform*modelMatrix*vec4( position, 1.0 );
    vViewPosition = -mvPosition.xyz;

    gl_Position = projectionMatrix*mvPosition;
}

fragment shader:
The lighting is done by hand, so the shader is fairly long; alternatively you could try stitching together the relevant ShaderChunk modules:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/lights_fragment_begin.glsl


varying vec2 vUv;
varying vec3 vNormal;
uniform float time; 
varying vec3 viewPos;
varying vec3 worldPos;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;

varying vec3 vViewPosition;




#if NUM_POINT_LIGHTS > 0
struct PointLight {
  vec3 color;
  vec3 position; // light position, in camera coordinates
  float distance; // used for attenuation purposes. Since
                  // we're writing our own shader, it can
                  // really be anything we want (as long as
                  // we assign it to our light in its
                  // "distance" field)
};
uniform PointLight pointLights[NUM_POINT_LIGHTS];
//            uniform vec3 pointLightColor[NUM_POINT_LIGHTS];
//            uniform vec3 pointLightPosition[NUM_POINT_LIGHTS];
//            uniform float pointLightDistance[NUM_POINT_LIGHTS];
#endif
//
#if NUM_DIR_LIGHTS > 0 
uniform vec3 directionalLightColor[NUM_DIR_LIGHTS];
uniform vec3 directionalLightDirection[NUM_DIR_LIGHTS];
struct DirectionalLight {
 vec3 direction;
 vec3 color;
 int shadow;
 float shadowBias;
 float shadowRadius;
 vec2 shadowMapSize;
 };
 uniform DirectionalLight directionalLights[ NUM_DIR_LIGHTS ];
#endif

uniform vec3 ambientLightColor;
// -----------------------------------


vec3 perturbNormal2Arb( vec3 eye_pos, vec3 surf_norm ) {

    vec2 normalScale = vec2(1.0,1.0);

    // Workaround for Adreno 3XX dFd*( vec3 ) bug. See #9988

    vec3 q0 = vec3( dFdx( eye_pos.x ), dFdx( eye_pos.y ), dFdx( eye_pos.z ) );
    vec3 q1 = vec3( dFdy( eye_pos.x ), dFdy( eye_pos.y ), dFdy( eye_pos.z ) );
    vec2 st0 = dFdx( vUv.st );
    vec2 st1 = dFdy( vUv.st );

    float scale = sign( st1.t * st0.s - st0.t * st1.s ); // we do not care about the magnitude

    vec3 S = normalize( ( q0 * st1.t - q1 * st0.t ) * scale );
    vec3 T = normalize( ( - q0 * st1.s + q1 * st0.s ) * scale );
    vec3 N = normalize( surf_norm );
    mat3 tsn = mat3( S, T, N );

    vec3 mapN = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;

//                mapN.xy *= normalScale;
//                mapN.xy *= ( float( gl_FrontFacing ) * 2.0 - 1.0 );

    return normalize( tsn * mapN );
//                return ( tsn * mapN );

}

mat3 tangentTransform(vec3 vViewPosition) {

// normal mapping

vec4 posAndU = vec4( -vViewPosition, vUv.x );
    // tangent is alongside the u-axis(x-axis, horizontal one.)
vec4 posAndU_dx = dFdx( posAndU ),  posAndU_dy = dFdy( posAndU );
vec3 tangent = posAndU_dx.w * posAndU_dx.xyz + posAndU_dy.w * posAndU_dy.xyz;
vec3 normal = normalize( vNormal );
vec3 binormal = normalize( cross( tangent, normal ) );
tangent = cross( normal, binormal );    // no normalization required
mat3 tsb = mat3( tangent, binormal, normal );
    return tsb;
}

// -----------------------------------
void main() {
    vec4 diffuse = texture2D(diffuseMap, vUv);
    vec3 samNorm = texture2D(normalMap, vUv).xyz;
    samNorm = samNorm * 2.0 - 1.0;

    vec3 normal = 1.0 * samNorm;

    // option1
//                normal = perturbNormal2Arb( -vViewPosition, normal ); // this also works

    // option2 
    mat3 tsb = tangentTransform( vViewPosition );
    //  normal.xy *= vNormalScale;
    normal = normalize(tsb * normal);

    vec4 addedLights = vec4(0.0,0.0,0.0, 1.0);
      for(int l = 0; l < NUM_POINT_LIGHTS; l++) {
          vec3 lightPos  = pointLights[l].position;
          vec3 lightColor  = pointLights[l].color;
//                      lightPos = vec3(-10.0,2.0,10.0); // debugging
//                      lightColor = vec3(0.0,5.0,1.0); // debugging

        vec3 lightDir = normalize(lightPos - worldPos);

        // diffuse lighting
        addedLights.rgb += clamp(dot(lightDir, normal), 0.0, 1.0) * lightColor; 

        // specular lighting 
        float specularStrength = 0.8;
        vec3 viewDir = normalize(viewPos - vec3(worldPos));
        vec3 inlight = -lightDir;
        vec3 reflectDir = reflect(inlight, normal);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0);
        vec3 specular = specularStrength * spec * lightColor;   
        addedLights.rgb += specular;
      }
    gl_FragColor = mix(vec4(diffuse.x, diffuse.y, diffuse.z, 1.0), addedLights, 0.5);
//                gl_FragColor = vec4(0.5,0.5,0.5,1.0);
//                gl_FragColor = vec4(pointLights[0].position / length(pointLights[0].position), 1.0);
//              gl_FragColor = diffuse;
//              gl_FragColor =  vec4(normal,1.0);
//              gl_FragColor = addedLights;
//              gl_FragColor = vec4( directionalLights[0].color, 1.0);
}
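The light loop above applies no distance falloff. If you want attenuation, a common model (my addition, not part of the original shader or of three.js's uniforms) divides each light's contribution by a quadratic in the distance:

```javascript
// Classic constant/linear/quadratic falloff, as in many OpenGL tutorials.
// kc, kl, kq are tuning constants I picked for illustration.
function attenuation(dist, kc = 1.0, kl = 0.09, kq = 0.032) {
  return 1.0 / (kc + kl * dist + kq * dist * dist);
}
```

In the shader the equivalent would be multiplying the diffuse and specular terms by `attenuation(length(lightPos - worldPos))`.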

As the comments indicate, both options work, but when the light comes in from the side Option 1 looks as follows; you can see there is still a visible difference from the plane next to it:
(screenshot)

Main program; only the parts beyond the basic scene setup are shown:


function addObjs() {

    var plgeo = new THREE.PlaneBufferGeometry(5,10);

    var brickmap = new THREE.TextureLoader().load( "images/brickwall.jpg",
     (texture)=>{

            texture.wrapS = THREE.RepeatWrapping;
            texture.wrapT = THREE.RepeatWrapping;
            myUniforms.diffuseMap.value = texture;
    });
    var normalmap = new THREE.TextureLoader().load( "images/brickwall_normal.jpg",
     (texture)=>{

            texture.wrapS = THREE.RepeatWrapping;
            texture.wrapT = THREE.RepeatWrapping;
            myUniforms.normalMap.value = texture;
    });

    myUniforms = THREE.UniformsUtils.merge([
        THREE.UniformsLib['lights'], // must merge this before set .lights = true
//        THREE.UniformsLib['normalmap'],  

        {

            time: { value: 1.0 },
//            diffuse: {type: 'c', value: new THREE.Color(0xffffff)},
            diffuseMap: {
                          type: 't', 
                          value: brickmap
                        },
            normalMap: {
                          type: 't', 
                          value: normalmap
            },
            transform: {
                type: "m4", value: new THREE.Matrix4()
            },
            updatedNormalMatrix: {
                type: "m3", value: new THREE.Matrix3()
            }

        }]);


    var plmat = new THREE.ShaderMaterial( {
        uniforms: myUniforms,
        vertexShader: document.getElementById( 'vertexShader' ).textContent,
        fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
        lights: true,
        derivatives: true // must be enabled manually when using the dFdx / dFdy functions

    } );



    var pln = new THREE.Mesh(plgeo, plmat);
    scene.add(pln);


    var tsf = new THREE.Matrix4();

      pln.position.x = -2.5;

      pln.rotation.x = -Math.PI/2;




   // the plane on the right
    var sph2 = new THREE.Mesh(plgeo.clone(), new THREE.MeshPhongMaterial({
        color:0xdddddd,
        map: brickmap,
        normalMap: normalmap
    }));
    scene.add(sph2);
    sph2.position.x = 2.5;
    sph2.rotation.x = -Math.PI/2;

}
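The vertex shader relies on the built-in normalMatrix, the inverse transpose of the modelViewMatrix. If you ever need to supply it yourself (for instance through the unused updatedNormalMatrix uniform above), the inverse transpose of a 3x3 matrix is its cofactor matrix divided by its determinant. A plain-JS sketch (row-major nested arrays, my own helper):

```javascript
// Inverse transpose of a 3x3 matrix m (row-major nested arrays).
// Since inverse = adjugateᵀ / det and adjugate = cofactorᵀ,
// the inverse transpose is simply cofactor / det.
function normalMatrix3(m) {
  const [[a, b, c], [d, e, f], [g, h, i]] = m;
  const det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
  const cof = [
    [  e * i - f * h, -(d * i - f * g),  d * h - e * g ],
    [-(b * i - c * h),  a * i - c * g, -(a * h - b * g)],
    [  b * f - c * e, -(a * f - c * d),  a * e - b * d ],
  ];
  return cof.map(row => row.map(x => x / det));
}
```

Two familiar sanity checks: a pure rotation is left unchanged (its inverse transpose is itself), while a uniform scale by 2 becomes a scale by 0.5, which is exactly why normals need this matrix instead of the plain model-view matrix.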