Modelling of glass surfaces (1)

A step-by-step description of the process of modeling and rendering smooth, polished glass objects with OpenGL.

Contents

Introduction
1. Glass. What are we going to model?
2. The transparency of glass
3. Lighting model
4. Modelling caustics
5. Filtering the caustics texture
6. Drawing glass with two refractions
7. Further development of the technique
8. Working program and source code
Useful links

Introduction

Modern graphics cards make it possible to render increasingly complex dynamic scenes. High-quality (photorealistic) rendering that used to be done offline can now be performed in real time. In particular, one of the pressing tasks of computer graphics is the computation and rendering of global illumination and of complex materials. In this article I would like to talk about modeling glass objects. Glass is a rather complex material to model if the goal is the most realistic image possible. We will walk together, step by step, through the process of modeling and rendering glass objects. Hopefully, after reading this article, the reader will understand the glass simulation technique and be able to produce something like this:


1. Glass. What are we going to model?

So, what do we need to do in order to get realistic glass?

The main thing that makes glass glass is its transparency. Glass refracts and transmits light rays (though, of course, let's not forget that glass also reflects light), allowing you to see, albeit distorted by refraction, the objects that lie behind the glass object relative to the observer.

The choice of lighting model for glass also deserves care: a great deal depends on choosing it correctly. For example, with the Cook-Torrance model we get smooth, polished glass, while with the Phong model the glass no longer looks polished. In this article I will settle on the Cook-Torrance lighting model.

Another important part of glass simulation is caustics: the bright spots formed when light rays are concentrated onto a small area because of their refraction (deflection from the original path) at a material boundary. Strictly speaking, caustics belong to global-illumination simulation.

Thus, there are four components that make glass look like glass in CG:
1. Transparency
2. Reflections
3. Lighting
4. Caustics

Well, let's begin modeling our glass. Oh, I forgot to say that I have chosen OpenGL 3.2 for the simulation. Along the way I will of course explain how to achieve each effect with the graphics API, and where necessary the C++ source code will be given directly, along with all the required shaders. I also assume that the reader is familiar with the basics of OpenGL, can do some preparation independently (set up a simple scene with a single point light source), and is familiar with the Framebuffer Object technique.

So, let's begin!

2. The transparency of glass

Transparency itself can be modeled relatively easily: it is enough to render a cube map from the center of the simulated object and then sample it in the right directions. Until relatively recently, the way to obtain a dynamic cube map in OpenGL was to render the scene six times, once into each face of the cube texture. With the arrival of geometry shaders in core OpenGL 3.2, things became much simpler: it is enough to draw the scene once, and the geometry shader does the rest. But let's proceed in order. So, imagine that we have a scene; or rather, let's simply agree that we have one. In my example it is a room with boxes scattered around it at random. Drawing it is all very simple, and we will gradually add new elements to the scene rendering.

Now it is time to create the cube map for our glass. We will consider two approaches: the first draws into the six faces of the cube map one at a time, the second uses a geometry shader. From here on I will use a Framebuffer Object for rendering into any texture. I hope the reader is already familiar with this technique, so I only need to add a few points concerning rendering into a cube map.

So, to obtain the cube map using six render passes we need to: create the cube texture itself, create a Framebuffer Object that will be responsible for rendering into the created cube map, and render the scene into it. The cube texture is created as follows:

  void Ce2Render::buildCubeTexture(Ce2TextureObject* texture)
  {
    texture->target = GL_TEXTURE_CUBE_MAP;
    glGenTextures(1, &(texture->glID));            // create a new texture
    bindTexture(0, texture, GL_TEXTURE_CUBE_MAP);  // bind it as a cube map

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    // allocate storage for all six faces
    for (int face = 0; face < 6; ++face)
    {
      TexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, texture->internalFormat,
        texture->width, texture->height, texture->format, texture->type, 0, texture->name);
    }
  }

The Framebuffer Object itself is created in the standard way (remember, I am counting on you already being familiar with it; in any case, describing FBO creation is not a goal of this article).
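Still, for completeness, here is a minimal sketch of what such a setup might look like (the names faceSize, face and cubeTextureID are illustrative, not taken from the accompanying sources; a single depth renderbuffer is shared by all six faces, since they are the same size):

  GLuint fbo = 0, depthRb = 0;
  glGenFramebuffers(1, &fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);

  // a depth renderbuffer shared by all six passes
  glGenRenderbuffers(1, &depthRb);
  glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, faceSize, faceSize);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

  // before each of the six passes, one face of the cube map
  // is attached as the color target:
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
    GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeTextureID, 0);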

And another important point: we need to set up the transformation matrices so that the camera covers the entire surroundings (360°). For this the projection matrix must have a FOV of 90°. Here is the fragment of source code that creates the projection matrix:

  mat4 Ce2Camera::perspectiveProjection(float fov, float aspect, float zNear, float zFar)
  {
    mat4 result = IDENTITY_MATRIX;

    float fHalfFOV = 0.5f * fov;
    float cotan = cos(fHalfFOV) / sin(fHalfFOV);
    float dz = zFar - zNear;

    result[0][0] = cotan / aspect;
    result[1][1] = cotan;
    result[2][2] = -(zFar + zNear) / dz;
    result[3][3] = 0.0f;
    result[2][3] = -1.0f;
    result[3][2] = -2.0f * zNear * zFar / dz;

    return result;
  }
  ...
  mat4 _cubemapProjectionMatrix = Ce2Camera::perspectiveProjection(HALF_PI, 1.0, 1.0, 2048.0);

Now we have a projection matrix, but we also need six view matrices with which our camera would "look" in all directions (±X, ±Y, ±Z). These matrices can be obtained by multiplying the projection matrix by rotation matrices for the corresponding angles, or you can simply swap the rows of the projection matrix (not at random, of course), which gives exactly the same result. I worked this out once, and now I will share it with you :)

The following code builds the six matrices for rendering into the cube map from a given position:

  CubemapMatrixArray Ce2BasicHelper::cubemapMatrix(const mat4& projectionMatrix, const vec3& pointOfView)
  {
    CubemapMatrixArray result;  // just an array of 6 matrices

    mat4 translation = translationMatrix(-pointOfView);

    // rows of the projection matrix
    const vec4& rX = projectionMatrix[0];
    const vec4& rY = projectionMatrix[1];
    const vec4& rZ = projectionMatrix[2];
    const vec4& rW = projectionMatrix[3];

    // swap the rows in a sly way and multiply by the translation to the given point
    result[0] = translation * mat4(-rZ, -rY, -rX, rW);
    result[1] = translation * mat4( rZ, -rY,  rX, rW);
    result[2] = translation * mat4( rX, -rZ,  rY, rW);
    result[3] = translation * mat4( rX,  rZ, -rY, rW);
    result[4] = translation * mat4( rX, -rY, -rZ, rW);
    result[5] = translation * mat4(-rX, -rY,  rZ, rW);

    return result;
  }

Well, now we have everything needed to render the scene into our cube map. It is done like this:

  // _reflectionRefractionBuffer - the Framebuffer Object created earlier
  // _reflectionRefractionTexture - the cube map created earlier
  // _cubemapMatrices - an array of six precalculated matrices

  render()->bindFramebuffer(_reflectionRefractionBuffer);
  for (int i = 0; i < 6; ++i)
  {
    _reflectionRefractionBuffer->setCurrentRenderTarget(_reflectionRefractionTexture, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i);
    glClear(GL_DEPTH_BUFFER_BIT);
    // a method that simply draws the scene (parameters: camera position and matrix)
    renderEnvironment(_modelCenter, _cubemapMatrices[i]);
  }

The first method seems clear enough. What about rendering into a cube map using the geometry shader? Let's sort it out. With this approach we need a Framebuffer Object attached not to an ordinary 2D texture but to a cube map. So, where we used the glFramebufferTexture2D method to attach a 2D texture, we now need the glFramebufferTexture method, which lets us attach any texture to a Framebuffer Object, be it a 1D, 2D or 3D texture or a cube map.
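As a sketch (framebufferID and cubeTextureID are the objects created earlier; note that with layered rendering the depth attachment, if depth testing is needed, must also be a layered texture, e.g. a depth cube map attached the same way):

  glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
  // attach the whole cube map as a layered color attachment;
  // the geometry shader will choose the face through gl_Layer
  glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, cubeTextureID, 0);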

Now the most interesting part: the geometry shader. Its essence is that it will create new triangles, projecting them onto each of the six faces of the cube map. For this we use the built-in geometry-shader variable gl_Layer, which selects the layer (face) of a layered texture (the cube map) into which the primitives will be output. For the triangles to be projected correctly onto the cube-map faces, we will need to pass the geometry shader the six matrices we successfully calculated earlier.

Now let's go through the shaders used to render the environment, in order:

The vertex shader. I used the preprocessor so as not to duplicate shaders, assembling everything into one with well-defined functionality. The «WITH_GS» directive determines whether the geometry shader is used. If it is not used (and the vertex shader is therefore followed immediately by the fragment shader), we project our vertex as usual and pass the required values (normal, texture coordinates, etc.) directly to the fragment shader. If the geometry shader is used, we pass it the original vertex and the required values, and it in turn does everything necessary and hands control over to the fragment shader. So, here is our vertex shader:

  #ifndef WITH_GS
  // if the geometry shader is used, we do not need this matrix
  uniform mat4 mModelViewProjection;
  #endif

  uniform vec3 vCamera;
  uniform vec3 vPrimaryLight;
  uniform mat4 mLightProjectionMatrix;
  uniform mat4 mTransform;  // model transformation matrix

  in vec4 Vertex;
  in vec3 Normal;
  in vec2 TexCoord0;

  #ifdef WITH_GS
   // data passed to the geometry shader
   out vec3 gs_vLightWS;
   out vec3 gs_vViewWS;
   out vec3 gs_vNormalWS;
   out vec2 gs_TexCoord;
   out vec4 gs_LightProjectedVertex;
  #else
   // data passed to the fragment shader
   out vec3 vLightWS;
   out vec3 vViewWS;
   out vec3 vNormalWS;
   out vec2 TexCoord;
   out vec4 LightProjectedVertex;
  #endif

  void main()
  {
    vec4 vTransformedVertex = mTransform * Vertex;

    // only the key parts of the shader are shown here;
    // see the accompanying source code for the complete version

    #ifdef WITH_GS
    ...
    gl_Position = vTransformedVertex;  // pass the world-space vertex on to the geometry shader
    #else
    ...
    gl_Position = mModelViewProjection * vTransformedVertex;  // simply project the vertex, as usual
    #endif
  }

Now let's look at the geometry shader.

  layout (triangles) in;
  layout (triangle_strip, max_vertices = 18) out;
  // we output 18 vertices: one triangle for each of the six faces

  uniform mat4 mModelViewProjection[6];  // projection matrices for each face of the cube map

  // values received from the vertex shader
  in vec3 gs_vLightWS[];
  in vec3 gs_vViewWS[];
  in vec3 gs_vNormalWS[];
  in vec2 gs_TexCoord[];
  in vec4 gs_LightProjectedVertex[];

  // values passed to the fragment shader
  out vec3 vLightWS;
  out vec3 vViewWS;
  out vec3 vNormalWS;
  out vec2 TexCoord;
  out vec4 LightProjectedVertex;

  void main()
  {
    // loop over all six faces of the cube map
    for (int layer = 0; layer < 6; layer++)
    {
      gl_Layer = layer;  // select the face the output should be directed to

      // then project each of the triangle's vertices;
      // this could be wrapped into a loop (for (int i = 0; i < 3; ++i)),
      // but why add one more loop to the shader?
      // again, only the key parts are shown; see the source code for the full shader

      ...
      gl_Position = mModelViewProjection[layer] * gl_in[0].gl_Position;
      EmitVertex();
      ...
      gl_Position = mModelViewProjection[layer] * gl_in[1].gl_Position;
      EmitVertex();
      ...
      gl_Position = mModelViewProjection[layer] * gl_in[2].gl_Position;
      EmitVertex();

      EndPrimitive();
    }
  }

The fragment shader is left unchanged. Rendering in this case looks like this:

   render()->bindFramebuffer(_reflectionRefractionCubemapBuffer);
   glClear(GL_DEPTH_BUFFER_BIT);
   renderEnvironmentToCubeMap(_modelCenter);  // see the source code for the complete function
  

As you can see, we need only one render pass instead of six.

Of course, the question arises: "Which is better? Drawing six times, once into each face, or drawing once but with the increased number of triangles in the geometry shader?" In fact, the speed is almost the same, but I like the geometry-shader version more for its smaller code size and its "beauty".

So, our cube map is ready (by either the first or the second method). Now we can apply it to the object and see what happens. I propose starting with a simple shader and gradually building up its functionality, improving image quality. To begin with, it must be said that for rendering glass we need to forget about Lambert lighting (that is, dot(light, normal)). Since this section deals only with the correct application of a cube map, I will also omit specular lighting and get straight to the point. So, we have a cube map rendered from the center of the object. In a shader, a sample from this texture looks like this:

vec4 color = texture(cubemap, direction);

where direction is a three-component vector that identifies both the face of the cube map and the texture coordinates on that face.
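Below we will start by sampling along the surface normal as a sanity check. A minimal fragment shader for that might look like this (a sketch; environment_map and vNormalWS follow the naming of the shaders in this article):

  uniform samplerCube environment_map;

  in vec3 vNormalWS;
  out vec4 FragColor;

  void main()
  {
    // sample the cube map along the surface normal - a quick way to
    // verify that all six faces were rendered and oriented correctly
    FragColor = texture(environment_map, normalize(vNormalWS));
  }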

I propose applying the cube map to a sphere, since almost everyone has held a glass ball, and you will be able to use it as a reference. Let's start by using the normal at each point as the sample direction, as in the sketch above; this way you can check that we rendered our cube map correctly. We then see objects mapped onto the sphere as we would see them from the center of the ball in the direction of the normal:


I hope you quickly managed to draw the cube map correctly, so we can now move on to a more complex algorithm. Let us recall once again: what do we actually want from the cube map? Reflections and refractions. Let's try to apply the cube map so that the sphere reflects rays of light. Fortunately, GLSL has a built-in reflect function that takes two parameters: the direction of the ray and the normal of the surface the ray reflects from. The GLSL developers have saved us from having to compute the reflection vector ourselves by the formula:

R = I - 2·dot(N, I)·N

where I (incidence) is the incident ray and N is the normal.
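If you wanted to, you could write the equivalent yourself (a sketch; the built-in reflect expects I pointing towards the surface and N normalized):

  vec3 myReflect(vec3 I, vec3 N)
  {
    // identical to the built-in reflect(I, N)
    return I - 2.0 * dot(N, I) * N;
  }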

So, we compute the reflection vector and sample the cube map:

   vec3 vReflected = reflect(-vViewNormal, vNormalWS);
   FragColor = texture(environment_map, vReflected);
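
Here vViewNormal is assumed to be the normalized view direction pointing towards the camera. In the vertex shader it might be produced like this (a sketch using the vCamera uniform declared earlier):

  // view direction from the surface point towards the camera
  vViewWS = vCamera - vTransformedVertex.xyz;

with vViewNormal = normalize(vViewWS) in the fragment shader; hence the "-" sign when it is passed to reflect.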
  

In other words, the incident vector passed to reflect must be normalized and must be computed as vertex - camera, not the other way round; if you compute camera - vertex, as above, put a "-" sign in front of the view vector. I think a small picture will make this clear:


So, we sample in the direction of the reflection vector and see a picture like this:


Now, I think, it is time to look into the refraction of light rays. As we know from a physics course (if I am not mistaken, it is even in the school curriculum), a ray of light is refracted at the interface between two media according to Snell's law:

n₁·sinθ₁ = n₂·sinθ₂

where n₁, n₂ are the absolute refractive indices of the media (the ratio of the speed of light in vacuum to the speed of light in the medium);
θ₁, θ₂ are the angles between the normal and, respectively, the incident and the refracted ray.

Consider the case when a ray of light passes from air (n₁ ≈ 1.0) into glass (n₂ ≈ 1.4). The relative refractive index η = n₁/n₂ ≈ 0.7. So we have arrived at the relative refractive index: it is exactly what the built-in GLSL function refract uses. It takes three parameters: the incident ray, the normal, and the refractive index.
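For reference, the built-in refract corresponds to the following formula from the GLSL specification (a sketch equivalent to refract(I, N, eta)):

  vec3 myRefract(vec3 I, vec3 N, float eta)
  {
    float NdotI = dot(N, I);
    float k = 1.0 - eta * eta * (1.0 - NdotI * NdotI);
    if (k < 0.0)
      return vec3(0.0);  // total internal reflection: no refracted ray
    return eta * I - (eta * NdotI + sqrt(k)) * N;
  }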

   vec3 vRefracted = refract(-vViewNormal, vNormalWS, indexOfRefraction);
   FragColor = texture(environment_map, vRefracted);
  

The result is a picture like this:


So we have reflection and refraction, but separately. How do we combine them? Very simply. Back in 1823 Augustin Fresnel derived formulas with which the share of reflected energy can be calculated from the refractive index and the angle of incidence of the ray:


There are now many approximations to this formula, but I suggest using the original one; we will just transform it a little right away (you didn't think I was going to make you compute sines and cosines, did you?).

So, what is cos(θ)? It is nothing other than the dot product of the normalized view vector and the normal at the given point. The only caveat: the view vector in this case must be computed as camera - vertex. The cosine is sorted out, and the squared sine is simply one minus the squared cosine. If we then multiply and divide the whole (squared) expression by its numerator, we get the square of the numerator on top and 1 - η² at the bottom. Now the formula is much simpler (the curious can carry out these transformations by hand or in any math package).
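In symbols: assuming the s-polarized form of Fresnel's formula (which is what the function below reproduces), and writing cosθₜ = √(1 - η² + η²·cos²θ) for the cosine of the refraction angle (which follows from Snell's law), we have

F = ((η·cosθ - cosθₜ) / (η·cosθ + cosθₜ))²

Since (η·cosθ + cosθₜ)·(η·cosθ - cosθₜ) = η²·cos²θ - cos²θₜ = η² - 1, multiplying and dividing by the numerator gives

F = (η·cosθ - cosθₜ)⁴ / (1 - η²)²

There is nothing computationally difficult left, so we can move the calculation of the Fresnel coefficient into a separate function of the form: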

  float fresnel(float VdotN, float eta)
  {
    float sqr_eta = eta * eta;            // square of the refractive index
    float etaCos = eta * VdotN;           // η·cos(θ)
    float sqr_etaCos = etaCos * etaCos;   // its square
    float one_minSqrEta = 1.0 - sqr_eta;  // 1 - η²
    float value = etaCos - sqrt(one_minSqrEta + sqr_etaCos);
    value *= value / one_minSqrEta;       // square and divide by 1 - η²
    return min(1.0, value * value);       // final squaring
  }

If we output the Fresnel coefficient as a color:

  float fFresnel = fresnel(dot(vViewNormal, vNormalWS), indexOfRefraction);
  FragColor = vec4(fFresnel);
  

we see the following:



The white areas correspond to predominantly reflected light, and the black ones to refracted rays.

In general, we now have everything needed for the most primitive glass. Let's just mix the previously obtained colors (for the reflected and refracted light) using the Fresnel coefficient:

FragColor = mix(cRefraction, cReflection, fFresnel);
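
To put the pieces of this section together, here is what the whole "primitive glass" fragment shader might look like (a sketch: the variable names follow the shaders above, and indexOfRefraction is the relative index η ≈ 0.7):

  uniform samplerCube environment_map;
  uniform float indexOfRefraction;  // relative refractive index, air to glass ≈ 0.7

  in vec3 vViewWS;    // camera - vertex, world space
  in vec3 vNormalWS;  // surface normal, world space

  out vec4 FragColor;

  float fresnel(float VdotN, float eta)
  {
    float sqr_eta = eta * eta;
    float etaCos = eta * VdotN;
    float sqr_etaCos = etaCos * etaCos;
    float one_minSqrEta = 1.0 - sqr_eta;
    float value = etaCos - sqrt(one_minSqrEta + sqr_etaCos);
    value *= value / one_minSqrEta;
    return min(1.0, value * value);
  }

  void main()
  {
    vec3 vViewNormal = normalize(vViewWS);
    vec3 vNormal = normalize(vNormalWS);

    // sample the environment along the reflected and refracted rays
    vec3 vReflected = reflect(-vViewNormal, vNormal);
    vec3 vRefracted = refract(-vViewNormal, vNormal, indexOfRefraction);
    vec4 cReflection = texture(environment_map, vReflected);
    vec4 cRefraction = texture(environment_map, vRefracted);

    // blend them by the Fresnel coefficient
    float fFresnel = fresnel(dot(vViewNormal, vNormal), indexOfRefraction);
    FragColor = mix(cRefraction, cReflection, fFresnel);
  }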

Of course, the final result is still far off. For comparison: at the top is the result of everything described above, at the bottom the final result we want to achieve. The difference may not seem great, but if you pick up a real glass ball, you will see in it a picture that looks like the final result. But more about that later.




Source code download: http://download.csdn.net/detail/u011417605/9814229
Article URL: http://blog.csdn.net/u011417605/article/details/70173433
