1. Texture for Diffuse
step 1 : prepare resources
Load the texture in your shader:
// Properties
_MainTex ("Main Tex", 2D) = "white" {}
// Pass in the first SubShader
sampler2D _MainTex;
float4 _MainTex_ST; // xy: tiling (scale), zw: offset (translation)
step 2 : prepare containers
Notice how the uv coordinates are stored.
struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 texcoord : TEXCOORD0;
// The TEXCOORD0 in a2v tells you where each vertex lies on the given texture.
// The UV coordinates will be saved in v2f, in a TEXCOORDn container.
};
struct v2f {
float4 pos : SV_POSITION;
float3 worldNormal : TEXCOORD0;
float3 worldPos : TEXCOORD1;
float2 uv : TEXCOORD2;
};
v2f vert(a2v v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.worldNormal = UnityObjectToWorldNormal(v.normal);
// with a non-uniform scale, the normal must be transformed by the inverse transpose:
// o.worldNormal = mul(v.normal, (float3x3)unity_WorldToObject);
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
o.uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
// o.uv = TRANSFORM_TEX(v.texcoord, _MainTex)
// up to now it's just a set of uv coordinates, no rgba information.
return o;
}
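The tiling/offset math that TRANSFORM_TEX hides is a one-liner: uv' = uv * ST.xy + ST.zw. A minimal Python sketch (the function name and tuple layout are my own, mirroring _MainTex_ST):

```python
def transform_tex(uv, st):
    """uv: (u, v); st: (tile_x, tile_y, offset_x, offset_y), like _MainTex_ST."""
    return (uv[0] * st[0] + st[2], uv[1] * st[1] + st[3])

# Tile twice along U, shift V by 0.5:
print(transform_tex((0.25, 0.5), (2.0, 1.0, 0.0, 0.5)))  # (0.5, 1.0)
```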
step 3 : sample the picture and calculate the color
tex2D(texture from sampler2D, uv coordinate) returns an rgba (fixed4) value per coordinate.
fixed4 frag(v2f i) : SV_Target {
fixed3 worldNormal = normalize(i.worldNormal);
fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
fixed3 albedo = tex2D(_MainTex, i.uv).rgb * _Color.rgb; // TexColor and MaterialColor
fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
fixed3 diffuse = _LightColor0.rgb * albedo * max(0, dot(worldNormal, worldLightDir));
// Blinn-Phong
fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
fixed3 halfDir = normalize(worldLightDir + viewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(worldNormal, halfDir)), _Gloss);
return fixed4(ambient + diffuse + specular, 1.0);
}
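The Blinn-Phong terms in frag() are plain vector math, independent of Unity. A hedged Python sketch of the half-vector specular term (hand-rolled normalize/dot, hypothetical inputs):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong_spec(normal, light_dir, view_dir, gloss):
    # halfDir = normalize(lightDir + viewDir); spec = max(0, N.H) ^ gloss
    half_dir = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(0.0, dot(normal, half_dir)) ** gloss

n = (0.0, 0.0, 1.0)
l = normalize((0.0, 1.0, 1.0))
v = (0.0, 0.0, 1.0)
print(blinn_phong_spec(n, l, v, 20.0))  # bright but not full highlight
```

When view and light both align with the normal, the half vector equals the normal and the term reaches 1.0; higher gloss tightens the highlight.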
2. Bump Mapping (the MOST IMPORTANT !!!)
0.1 . Why bump mapping?
We calculate light and shade with dot(lightDir, normalDir). Imagine that the normals across a single flat plane are all different: you get a plane with varying gray levels, which reads as bumps and occlusion. Normally such a bumpy surface costs a lot of geometry, but you can achieve it with shader tricks!
0.2 . coordinate range difference
tex2D(tex, uv) returns rgba in [0, 1], but normal components range over [-1, 1]. So you need to remap: normal = pixelColor * 2 - 1.
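This remap (and its inverse, used when the normal map is baked) can be sketched in a couple of lines; the function names here are my own:

```python
def pixel_to_normal(c):
    """Map a color channel in [0, 1] to a normal component in [-1, 1]."""
    return c * 2.0 - 1.0

def normal_to_pixel(n):
    """Inverse mapping, used when baking a normal into a texture."""
    return (n + 1.0) / 2.0

print(pixel_to_normal(0.5))   # 0.0 -- a "flat" channel value
print(normal_to_pixel(-1.0))  # 0.0
```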
0.3 saturate()
Well, at least for now I think it's just there for safety, like the use of the inverse transpose matrix in the normal conversion above. I still need a better understanding of this in the future.
Extract from Cg Standard Library:
saturate
Parameters
x
Vector or scalar to saturate.
Returns x saturated to the range [0,1] as follows:
1) Returns 0 if x is less than 0; else
2) Returns 1 if x is greater than 1; else
3) Returns x otherwise.
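The three cases above are just a clamp to [0, 1]; a one-line Python equivalent:

```python
def saturate(x):
    """Clamp x to [0, 1], like Cg's saturate()."""
    return min(1.0, max(0.0, x))

print(saturate(-0.2), saturate(0.4), saturate(1.7))  # 0.0 0.4 1.0
```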
1. tangent-space normal map
We want to store every normal of the model in a texture map. Naturally we might think of storing them in object space, but a tangent-space normal map is used more often.
In short, tangent space lets you freely control the direction of the new normal through relative coordinates. And you can compress away the Z component in tangent space (when the basis is orthonormal), since it can be reconstructed from X and Y.
2. where to calculate Lighting Model
You've got all the changed normals in tangent space from a tangent-space normal map. Next you need to calculate the light and shade. You can use either of the following two methods:
· Calculate in tangent space. You need to transform lightDir and viewDir into tangent space. This way all the containers are ready before the fragment shader starts, and you never have to change them.
· Calculate in world space. You need to transform all the changed normals into world space, which takes an extra matrix in the fragment shader. Sometimes we use this method because some calculations can't be done in tangent space (e.g. anything involving a cubemap).
A. Calculate in tangent space
reminder
TANGENT (float4 xyzw)
Each vertex tangent of the model in object space, corresponding to its normal. w determines the direction of the B axis.
(let's see something about linear algebra...)
· x, the T axis: tangent
· y, the B axis: binormal
· z, the N axis: normal
They are orthonormal.
matrix (tangent space --> object space), inverted, gives matrix (object space --> tangent space). Since the basis is orthonormal, the inverse is simply the transpose.
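Because T, B and N are orthonormal, the object-to-tangent matrix is just the transpose of the tangent-to-object matrix. A pure-Python sketch with a hand-picked orthonormal basis (the axis values are hypothetical):

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Columns of tangent->object are the T, B, N axes expressed in object space.
T, B, N = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, -1.0, 0.0)
tangent_to_object = transpose([list(T), list(B), list(N)])  # axes as columns
object_to_tangent = transpose(tangent_to_object)            # inverse = transpose

product = mat_mul(object_to_tangent, tangent_to_object)
print(product)  # identity matrix: the transpose really is the inverse here
```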
step 1 : prepare resources
// Properties
_BumpMap ("Normal Map", 2D) = "bump" {}
_BumpScale ("Bump Scale", Float) = 1.0
// Pass in the first SubShader
sampler2D _BumpMap;
float4 _BumpMap_ST;
float _BumpScale;
step 2 : prepare containers
TEXCOORDn in v2f now stores two sets of coordinates. Most of the time they point to the same coordinates, because MainTex and BumpMap usually come from the same source picture.
The way you calculate the lighting model has also changed. You no longer need worldPos and worldNormal, because you've already been given the right light and view directions.
struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 tangent : TANGENT;
float4 texcoord : TEXCOORD0;
};
struct v2f {
float4 pos : SV_POSITION;
float4 uv : TEXCOORD0;
float3 lightDir : TEXCOORD1;
float3 viewDir : TEXCOORD2;
};
v2f vert(a2v v){
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.uv.xy = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
o.uv.zw = v.texcoord.xy * _BumpMap_ST.xy + _BumpMap_ST.zw;
// two sets of coordinates: diffuse texture and bump texture
// most of the time uv.xy and uv.zw point to the same coordinates,
// because MainTex and BumpMap usually share the same source picture
float3 binormal = cross(normalize(v.normal), normalize(v.tangent.xyz)) * v.tangent.w;
// from object space to tangent space
float3x3 rotation = float3x3(v.tangent.xyz, binormal, v.normal);
//or TANGENT_SPACE_ROTATION
o.lightDir = mul(rotation, ObjSpaceLightDir(v.vertex)).xyz;
o.viewDir = mul(rotation, ObjSpaceViewDir(v.vertex)).xyz;
return o;
}
step 3 : sample the picture and calculate the color
When you mark a texture as "Normal map", Unity will compress it. With DXT5nm compression, in the rgba returned by tex2D(), a holds the T-axis (x) component and g holds the B-axis (y) component; r and b are discarded.
UnpackNormal(packedNormal) returns the xyz of the changed normal. After that, _BumpScale scales it again.
fixed4 frag(v2f i) : SV_Target {
fixed3 tangentLightDir = normalize(i.lightDir);
fixed3 tangentViewDir = normalize(i.viewDir);
fixed4 packedNormal = tex2D(_BumpMap, i.uv.zw);
// you sampled the tangent-space normal map at this pixel.
// we call the result "packed",
// because it still needs remapping from the pixel range [0, 1] to the normal range [-1, 1]
fixed3 tangentNormal;
// if the texture is not marked as "Normal map"
// tangentNormal.xy = (packedNormal.xy * 2 - 1) * _BumpScale;
// tangentNormal.z = sqrt(1.0 - saturate(dot(tangentNormal.xy, tangentNormal.xy)));
// if the texture is marked as "Normal map", there's a function
tangentNormal = UnpackNormal(packedNormal);
tangentNormal.xy *= _BumpScale;
tangentNormal.z = sqrt(1.0 - saturate(dot(tangentNormal.xy, tangentNormal.xy)));
fixed3 albedo = tex2D(_MainTex, i.uv).rgb * _Color.rgb;
fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
fixed3 diffuse = _LightColor0.rgb * albedo * max(0, dot(tangentNormal, tangentLightDir));
fixed3 halfDir = normalize(tangentLightDir + tangentViewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(tangentNormal, halfDir)), _Gloss);
return fixed4(ambient + diffuse + specular, 1.0);
}
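The unpack-and-rescale step above is just the [0, 1] to [-1, 1] remap plus recovering z from the unit-length constraint (x^2 + y^2 + z^2 = 1). A hedged Python sketch (function name and channel layout are my own):

```python
import math

def unpack_scale_normal(packed_xy, bump_scale):
    """packed_xy: the two stored channels in [0, 1]; returns (x, y, z)."""
    x = (packed_xy[0] * 2.0 - 1.0) * bump_scale
    y = (packed_xy[1] * 2.0 - 1.0) * bump_scale
    # the saturate() guards the sqrt against negatives when |xy| exceeds 1
    z = math.sqrt(1.0 - min(1.0, max(0.0, x * x + y * y)))
    return (x, y, z)

print(unpack_scale_normal((0.5, 0.5), 1.0))  # (0.0, 0.0, 1.0) -- a flat normal
```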
B. Calculate in world space
reminder
To transform the changed normals from tangent space to world space, you need to know how to express every Tangent, Binormal and Normal in world space.
Differences from calculations in tangent space
1. v2f
Before the fragment shader, you don't need to calculate anything about lightDir and viewDir. Instead, you need to provide the matrix that transforms directions from tangent space to world space.
struct v2f {
float4 pos : SV_POSITION;
float4 uv : TEXCOORD0;
float4 TtoW0 : TEXCOORD1;
float4 TtoW1 : TEXCOORD2;
float4 TtoW2 : TEXCOORD3;
};
2.vertex shader
The world-space T, B and N vectors are just the tangent-space basis expressed in world coordinates.
v2f vert(a2v v){
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.uv.xy = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
o.uv.zw = v.texcoord.xy * _BumpMap_ST.xy + _BumpMap_ST.zw;
// meet worldPos and worldNormal again, but the parameters aren't directly used in fragment shader.
float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
float3 worldNormal = UnityObjectToWorldNormal(v.normal);
float3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);
float3 worldBinormal = cross(worldNormal, worldTangent) * v.tangent.w;
// worldPos is stored in w component.
o.TtoW0 = float4(worldTangent.x, worldBinormal.x, worldNormal.x, worldPos.x);
o.TtoW1 = float4(worldTangent.y, worldBinormal.y, worldNormal.y, worldPos.y);
o.TtoW2 = float4(worldTangent.z, worldBinormal.z, worldNormal.z, worldPos.z);
return o;
}
3.fragment shader
Calculate lightDir and viewDir in the fragment shader, and transform the bump normals from tangent space to world space there. Note that the angle calculations (dot()) always happen in the fragment shader.
fixed4 frag(v2f i) : SV_Target {
float3 worldPos = float3(i.TtoW0.w, i.TtoW1.w, i.TtoW2.w);
fixed3 lightDir = normalize(UnityWorldSpaceLightDir(worldPos));
fixed3 viewDir = normalize(UnityWorldSpaceViewDir(worldPos));
fixed3 bump = UnpackNormal(tex2D(_BumpMap, i.uv.zw));
bump.xy *= _BumpScale;
bump.z = sqrt(1.0 - saturate(dot(bump.xy, bump.xy)));
bump = normalize(half3(dot(i.TtoW0.xyz, bump), dot(i.TtoW1.xyz, bump), dot(i.TtoW2.xyz, bump)));
fixed3 albedo = tex2D(_MainTex, i.uv).rgb * _Color.rgb;
fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
fixed3 diffuse = _LightColor0.rgb * albedo * max(0, dot(bump, lightDir));
fixed3 halfDir = normalize(lightDir + viewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(bump, halfDir)), _Gloss);
return fixed4(ambient + diffuse + specular, 1.0);
}
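The three dot() calls in frag() are a row-by-row matrix multiply: TtoW0/1/2.xyz are the rows of the tangent-to-world matrix, whose columns are worldTangent, worldBinormal and worldNormal. A pure-Python sketch with hypothetical axis values:

```python
def tangent_to_world(bump, world_t, world_b, world_n):
    """Rows are (T.x, B.x, N.x), (T.y, B.y, N.y), (T.z, B.z, N.z) -- like TtoW0/1/2.xyz."""
    rows = [
        (world_t[0], world_b[0], world_n[0]),  # TtoW0.xyz
        (world_t[1], world_b[1], world_n[1]),  # TtoW1.xyz
        (world_t[2], world_b[2], world_n[2]),  # TtoW2.xyz
    ]
    return tuple(sum(r[k] * bump[k] for k in range(3)) for r in rows)

# A flat tangent-space normal (0, 0, 1) should land exactly on worldNormal.
T, B, N = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, -1.0, 0.0)
print(tangent_to_world((0.0, 0.0, 1.0), T, B, N))  # (0.0, -1.0, 0.0) == N
```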
From this point on, the original book no longer gives the complete code (I modified part of the code in frag() to make it run), so I will not provide the final code either. Please try it yourself!
This article already contains a lot of knowledge... Well, let's see the rest of "Texture" in my next article!
tips: I'm not sure if it's appropriate to call semantics "containers", but... well, at least it's not completely wrong :)