A Better Self-Shadowing Bump Map?
Posted: Sun Jan 25, 2009 7:27 am
I finally got around to reading Valve's SIGGRAPH 2007 publication on self-shadowing bump maps. I have to admit, I had kind of misled myself into thinking it was something that it isn't - probably encouraged by people's over-enthusiasm for it.
Based on images I had seen, I thought they had figured out a way to accurately portray self-shadowing from any arbitrary direction of incident light. I would be extremely impressed if Valve had managed to accomplish that, because I couldn't imagine how to do it with only three axes. LOL. Anyway, it turns out this is NOT the case.
If you take the time to really understand what the article is saying about the calculations and tangent space, you realize something: the map stores shadows from three different directions - and those are all you get. If the light comes from an angle directly between two of those directions, you get half of each (not literally "half and half", but you get the idea). For cave walls it works pretty well, especially in one direction. One of those three directions lines up with the direction the player is facing, so the shadow they "should" see is pretty close to the one they "actually" see. Light coming from the other direction isn't so good, though - that's the "half and half" case.
This also means that if you keep the light in the same direction along the surface but change the angle (the slope) at which you shine a flashlight onto it, the shadow doesn't grow longer or shorter. It only gets darker and lighter.
TIP: When you use ssbump, align one of the "basis directions" of your bump texture with the most likely direction the user will be viewing it from, for best results. The basis direction you'll probably want to use - and have your player looking from - is the positive U direction (the RED channel).
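The three-direction blend described above can be sketched roughly like this. The basis vectors are the standard ones from Valve's radiosity normal mapping; the shading function and texel layout here are my own simplification, not Valve's exact shader code:

```python
import math

# The three tangent-space basis vectors used by Valve's radiosity normal
# mapping: each ~54.7 degrees up from the surface, 120 degrees apart.
SQ2, SQ3, SQ6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
BASIS = [
    (-1 / SQ6, -1 / SQ2, 1 / SQ3),
    (-1 / SQ6,  1 / SQ2, 1 / SQ3),
    ( 2 / SQ6,  0.0,     1 / SQ3),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ssbump_shade(texel, light_dir):
    """texel: the three stored per-basis-direction shadow/visibility
    values (0..1). light_dir: unit light direction in tangent space.
    Each basis direction contributes in proportion to how well it lines
    up with the light - which is exactly why a light between two basis
    directions gives you a blend of their two stored shadows."""
    weights = [max(0.0, dot(b, light_dir)) for b in BASIS]
    return sum(w * t for w, t in zip(weights, texel))
```

With the light aimed exactly along one basis direction, only that channel's stored shadow shows up; aim it between two, and you get the weighted "half and half" mix.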
Now.... the "better" part:
The publication talks about removing the signedness from the map (no negatives), resulting in more resolution and fewer computations - the new method doesn't require a scale and bias. I say they should have kept it. If they had, generating the map wouldn't be so easy, but they could have had SIX basis directions. I'd like to see anybody nitpick shadows from six different directions, nicely blended together. On top of that, I'd be willing to bet that 128 levels of shadow darkness is still more than necessary. I would bit-mask each channel into two fields, which would allow two stored angles of incidence (four in total, really: parallel, in which case the face is not visible; 30 degrees; 60 degrees; and perpendicular, for no shadow) for six directions at 64 levels of shadow darkness...
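One way to read the "bit-mask each channel" idea - and this split is my interpretation, not anything from the paper - is 2 bits for the incidence-angle index (4 states) plus 6 bits for darkness (64 levels), packed into each 8-bit texture channel:

```python
def pack_channel(angle_idx, darkness):
    """Pack a 2-bit incidence-angle index (0 = parallel, 1 = 30 deg,
    2 = 60 deg, 3 = perpendicular/no shadow) and a 6-bit darkness
    value (0..63) into a single 8-bit texture channel."""
    assert 0 <= angle_idx <= 3 and 0 <= darkness <= 63
    return (angle_idx << 6) | darkness

def unpack_channel(byte):
    """Recover the (angle_idx, darkness) pair from one channel byte."""
    return byte >> 6, byte & 0x3F
```

The unpack step is two cheap bitwise ops, so it would cost almost nothing per texel in a shader.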
Now THAT would be some believable shadowing...
I would imagine it could be taken further by using a logarithmic scale to optimize the steps between darkness levels - however, the computation would be prohibitive on all but the best hardware. Also, I think 30 and 60 degrees are probably not the optimal angles. The tangent of the angle determines shadow length, and that's what we're really concerned with.
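To see why evenly spaced angles aren't evenly spaced shadows, here's the basic geometry (my worked example, not from the paper): a bump of height h lit from elevation angle theta casts a shadow of length h / tan(theta).

```python
import math

def shadow_length(height, elevation_deg):
    """Length of the shadow cast by a bump of the given height when the
    light sits at the given elevation angle above the surface plane.
    Simple right-triangle geometry: length = height / tan(elevation)."""
    return height / math.tan(math.radians(elevation_deg))

# A unit-height bump casts a ~1.73-unit shadow at 30 degrees but only a
# ~0.58-unit shadow at 60 degrees - equal angle steps, very unequal
# shadow-length steps, which is the argument against picking 30/60.
```

Spacing the stored angles evenly in tangent (shadow length) rather than in degrees would spend the bits where the visual difference actually is.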
Can you imagine if they had removed the goal of preserving texture size and alpha channel?
I think the image displaying ambient occlusion has been doctored to demonstrate the effect. I was just thinking about how I would implement it, and it seems like that's how they were saying to do it, though they weren't specific: a hemisphere above the surface - the more of the hemisphere is visible from a point, the more lit that point is. The image they show obviously doesn't use this method. I don't think I would cast "rays" at all, though. I would do it in a strictly geometric manner: project the surrounding surface geometry onto the hemisphere's mesh and use a boolean operation to remove the blocked portion of the hemisphere's surface. However much surface area remains is the amount of visibility. It wouldn't even have to be done per texel - any coplanar area (imagining a height map) that is surrounded by un-occluded texels will also be un-occluded. That optimization could be applied to either generation process. You could also reverse the projection - project the surrounding geometry onto the surface itself - determining, in chunks, the occlusion of each chunk with only one "chopping" of the hemisphere.
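As a much cruder stand-in for the mesh-chopping idea - not the boolean hemisphere operation described above, just a horizon-scan over a height map that measures the same quantity (how much of the hemisphere above a texel is blocked) - a sketch might look like this:

```python
import math

def heightmap_ao(heights, x, y, directions=8, cell=1.0):
    """Crude ambient-occlusion estimate for one texel of a height map.
    In each of a few compass directions, walk outward and record the
    horizon angle (the steepest blocker seen), then average the
    unblocked fraction of each vertical slice of the hemisphere.
    No rays are cast - only the height-field geometry is inspected."""
    h, w = len(heights), len(heights[0])
    total = 0.0
    for d in range(directions):
        ang = 2 * math.pi * d / directions
        dx, dy = math.cos(ang), math.sin(ang)
        horizon = 0.0  # elevation angle of the highest blocker so far
        t = 1
        while True:
            sx, sy = int(round(x + dx * t)), int(round(y + dy * t))
            if not (0 <= sx < w and 0 <= sy < h):
                break  # walked off the map in this direction
            rise = heights[sy][sx] - heights[y][x]
            if rise > 0:
                horizon = max(horizon, math.atan2(rise, t * cell))
            t += 1
        total += 1.0 - horizon / (math.pi / 2)  # unblocked slice fraction
    return total / directions
```

A flat map gives 1.0 (fully lit); a texel at the bottom of a pit comes out darker. The coplanar-region optimization from the paragraph above would apply here too: any flat patch whose border texels are unoccluded needs no scan at all.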
It seems to me that Valve needs to hire some engineers. Not that I'm one. I'm just a guy who joined the Army instead of going to college.
If you made it this far - you are obviously some kind of geek :-p . Congrats for hanging in there with me! hehe.
l8r,
coder0xff
P.S. Sry, there are no pretty pictures - but if you even bother to read this, you prolly are smart enough to make them up in your head.