It explains the basics of 3D rendering and covers various terms you've probably heard of but never really bothered to look up.
If people really find this stuff interesting I could write a little more in depth on brushes, textures, displacements and models in Source, collisions, or whatever is popular, but that requires a little more stuff to look up and I can't at the moment. I'd love to hear feedback in case you can't follow it or it's too easy. It's not aimed at programmers (though it can be a nice introduction); it's aimed at mappers who want to get to know the scary world behind Hammer. So, here goes.
Perhaps one day I'll be bored enough to turn it into an introductory bible of sorts.
First, I will explain some basic definitions.
Vector
A Vector is usually regarded as a point in 3D space, but that is just one of its uses. A vector is just a collection of numbers, usually distances along the x, y and z axes. When vectors are used as a position, they refer to a point relative to another vector, usually the zero vector (0,0,0). Vectors come in different sizes (dimensions): a one-dimensional vector is usually a measure of distance; a two-dimensional vector can denote a position on a plane or texture (an x, y offset from another point); a three-dimensional vector is usually a position, but can also denote a force or speed with a certain direction. A four-dimensional vector can, for instance, combine a position with a force or speed.
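To make that a bit more concrete, here is a minimal C++ sketch of what vectors look like in code (my own names, nothing engine-specific):

Code:
// A 3D vector: just three numbers. Whether it means a position,
// a direction or a speed depends on how you use it.
struct Vector3
{
    float x, y, z;
};

// A 2D vector, e.g. a texture coordinate or an offset on a plane.
struct Vector2
{
    float x, y;
};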
Plane
A plane is a flat surface. A plane can be defined using three positional vectors and is, by definition, infinite in size. The 'front' of the plane is usually determined by the order of the three vectors that define it: the front is the side from which those vectors appear in counterclockwise order. If the three vectors used to define the plane lie on one line, the plane is called 'degenerate' (invalid).
Normal
A normal (as you may know from maths) is a 2D or 3D vector, usually one unit in length, perpendicular to a line or plane. It is often used in projection calculations, for instance for physics, shadows or decals.
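Tying the Plane and Normal sections together, here is a rough sketch of how you could compute a plane's normal from the three positional vectors that define it, using a cross product (helper names are my own, not an engine API). If the cross product has (nearly) zero length, the three points lie on one line and the plane is degenerate:

Code:
#include <cmath>

struct Vector3 { float x, y, z; };   // same idea as the sketch above

Vector3 Subtract(Vector3 a, Vector3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Cross product of two edge vectors gives a vector perpendicular to the plane.
Vector3 Cross(Vector3 a, Vector3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Returns false when the three points lie on one line (degenerate plane).
// Otherwise 'normal' receives the unit-length plane normal; the order of
// p1, p2, p3 (the winding) decides which side counts as the front.
bool PlaneNormal(Vector3 p1, Vector3 p2, Vector3 p3, Vector3& normal)
{
    Vector3 n = Cross(Subtract(p2, p1), Subtract(p3, p1));
    float length = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (length < 0.0001f)
        return false;   // degenerate: all three points on one line
    normal = { n.x / length, n.y / length, n.z / length };
    return true;
}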
Vertex
A vertex is a data packet filled with all the info required to render a single point in a 3D scene. Exactly which data that is has to be defined by a programmer. It usually comprises a positional 3D vector, a normal 3D vector (in case the graphics card needs to calculate shadows or lighting), one or more 2D texture coordinates (multiple in the case of blended textures, for instance, or a texture plus a lightmap) and/or color data (e.g. for a colored line or block).
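As a sketch, here is a vertex format as a programmer might define it. The exact layout is up to the programmer and has to match what the shader expects; these particular fields are just an example:

Code:
// One possible vertex layout: position, normal, a color and one texture
// coordinate. In Direct3D 9 terms this roughly matches the FVF flags
// D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE | D3DFVF_TEX1.
struct Vertex
{
    float x, y, z;        // positional 3D vector
    float nx, ny, nz;     // normal 3D vector (for lighting)
    unsigned long color;  // packed ARGB color
    float u, v;           // 2D texture coordinate
};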
What is rendering exactly?
The CPU sends vertexes to the GPU, makes sure it has all the other data required (e.g. the location of dynamic lights, texture data, etc.), then asks the GPU to render all of it! Unfortunately, the GPU doesn't like to play connect-the-dots and therefore doesn't. The CPU has to tell it WHAT to render using all those vertexes, e.g. 'render a triangle between vertexes 1, 2 and 3', 'render a line between vertexes 4 and 5'. In DirectX you can only draw lines and triangles. Drawing a simple cube takes 12 triangles (or 12 lines if you only want the edges).
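To give you an idea of what 'a cube takes 12 triangles' looks like, here is a sketch of the 8 corner positions of a unit cube and the 36 indexes (12 triangles of 3 indexes each) that connect the dots. I haven't been careful with winding order here; in a real renderer you make sure every face is listed counterclockwise as seen from the outside:

Code:
// A unit cube: 8 corner positions and 12 triangles (36 indexes).
float cubeCorners[8][3] =
{
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // back face  (z = 0)
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // front face (z = 1)
};

unsigned short cubeTriangles[36] =
{
    0,1,5,  0,5,4,   // bottom
    3,7,6,  3,6,2,   // top
    0,3,2,  0,2,1,   // back
    4,5,6,  4,6,7,   // front
    0,4,7,  0,7,3,   // left
    1,2,6,  1,6,5    // right
};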
Texture coordinate
A texture coordinate (a 2D vector) tells the GPU what piece of a texture should be rendered where. Texture coordinates are usually expressed as two numbers between 0 and 1 (the bottom left of the texture is 0,0 and the top right is 1,1). Because texture coordinates can be specified per vertex, you can add all kinds of funky texture twists, stretching the texture to wherever you like. Texture rotation is also performed on the texture coordinates.

Here we see two triangles rendered using four vertexes. Each vertex has its own texture coordinates.
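In code, the picture above (two triangles from four vertexes) could look something like this rough sketch, using a vertex struct of my own making:

Code:
// Four vertexes forming a quad; the texture is stretched exactly once
// across it because the corners get texture coordinates 0,0 up to 1,1.
struct QuadVertex
{
    float x, y, z;   // position
    float u, v;      // texture coordinate
};

QuadVertex quad[4] =
{
    { 0, 0, 0,   0, 0 },   // bottom left  -> bottom left of the texture
    { 1, 0, 0,   1, 0 },   // bottom right -> bottom right of the texture
    { 1, 1, 0,   1, 1 },   // top right    -> top right of the texture
    { 0, 1, 0,   0, 1 }    // top left     -> top left of the texture
};

// Two triangles reusing the shared corners (0 and 2).
unsigned short quadIndexes[6] = { 0, 1, 2,   0, 2, 3 };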
So, how does a graphics card render your 3D scene?
First, the CPU decides which shader to use. A shader is a little program running on your GPU that translates vertexes into something that can be rendered (i.e. pixels on your screen). There are two kinds. The vertex shader is run on a per-vertex basis and can be used, for instance, to change rendering options, set blend indexes, etc. Its most important function, however, is to calculate the position of the vertex relative to the camera.
The pixel shader is run per pixel. It interpolates data between the two or three nearest vertexes (i.e. texture coordinates, color data) and outputs a certain color. Its main use is to combine all kinds of data (e.g. lighting, texture coordinates, color data, bumpmaps) into one output color. This color is eventually projected onto your screen using mathematical magic. A single scene can be built up of hundreds of different shaders, each with their own memory and output. You can even pass a single vertex through multiple shaders!
Next, the shader has to be loaded with data: any rotations the level should start with, the location of the camera, the aspect ratio, the view cone, etc. These data only need to be sent once, and updated only when they change. Other data, like textures, need to be sent more often. The graphics card has no direct access to textures; they have to be sent by the CPU to the graphics card, and only once a texture is stored in the memory of the GPU can it be used. For most scenes the graphics card is unable to hold that many textures in memory (because of all the other data it needs to store), so the CPU has to resend lots of textures to the GPU every frame it renders (luckily the bandwidth is on the order of tens of gigabytes per second). Considering that lightmaps and bumpmaps, for instance, are actually textures as well, you can imagine it is vital to keep this traffic to a minimum!
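To make the 'loading with data' part a bit more concrete, here is roughly how you would hand the camera position, the view cone and a texture to the GPU with Direct3D 9's fixed pipeline (a shader would receive the same matrices as constants instead). This is a sketch: the device and the texture are assumed to have been created elsewhere, and the numbers are made up.

Code:
#include <d3d9.h>
#include <d3dx9.h>

void SetupCameraAndTexture(IDirect3DDevice9* device, IDirect3DTexture9* texture)
{
    // Where the camera is, what it looks at, and which way is up.
    D3DXVECTOR3 eye(0.0f, 64.0f, -128.0f);
    D3DXVECTOR3 at(0.0f, 0.0f, 0.0f);
    D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);

    D3DXMATRIX view, projection;
    D3DXMatrixLookAtLH(&view, &eye, &at, &up);

    // The view cone: field of view, aspect ratio, near and far plane.
    D3DXMatrixPerspectiveFovLH(&projection, D3DX_PI / 4.0f,
                               4.0f / 3.0f, 1.0f, 4096.0f);

    // Only needs to be done again when the camera or the window changes.
    device->SetTransform(D3DTS_VIEW, &view);
    device->SetTransform(D3DTS_PROJECTION, &projection);

    // Make the texture (already uploaded to the GPU) current for stage 0.
    device->SetTexture(0, texture);
}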
The vertexes also need to be loaded. Usually the CPU decides where they should be drawn (i.e. due to model animations or brush placement), then sends the GPU a shitload of vertexes to render. Again, this has to be done lots of times each frame, so again we want to keep it to a minimum. Luckily we can.
Vertex buffers are chunks of memory we can use to store vertexes. A vertex buffer can be sent in one go (instead of sending thousands of vertexes individually) and can even be stored on the GPU for long periods of time (so we don't have to send all those vertexes every frame). The problem with vertex buffers, however, is that they are relatively static: once sent, you can't change them that easily. So in theory you could put all the brushes, models and sprites in Hammer in a single vertex buffer and send that buffer in one go to the GPU to render, but if you add a single vertex or change the texture coordinates or the position of even one of them, you may need to resend the entire buffer! Because of this, you will want to be careful about which vertexes you send to the GPU one by one, and which you put in a buffer. There is an intermediate type of buffer, the dynamic vertex buffer, which is more suitable for vertex data that changes often. You may have spotted this buffer in a Hammer crash ('too many verts for a dynamic buffer, tell a programmer!' (yeah, right)). Hammer uses dynamic buffers to render certain selection lines; HL2 uses them to render smoke particles, for instance. Mind that normally dynamic buffers are not limited in size (their theoretical maximum is 2^31 vertexes); Valve just decided to cap them a little lower than what is sometimes needed.
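Here is a rough Direct3D 9 sketch of creating a dynamic vertex buffer and refilling it each frame. The device is assumed to exist, the vertex struct is the simple one from the earlier sketches, and error handling is left out:

Code:
#include <d3d9.h>
#include <cstring>

struct QuadVertex { float x, y, z; float u, v; };
const DWORD QUAD_FVF = D3DFVF_XYZ | D3DFVF_TEX1;

IDirect3DVertexBuffer9* CreateDynamicBuffer(IDirect3DDevice9* device, UINT vertexCount)
{
    IDirect3DVertexBuffer9* buffer = NULL;
    // D3DUSAGE_DYNAMIC marks it as a buffer we intend to rewrite often.
    device->CreateVertexBuffer(vertexCount * sizeof(QuadVertex),
                               D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                               QUAD_FVF, D3DPOOL_DEFAULT, &buffer, NULL);
    return buffer;
}

void RefillBuffer(IDirect3DVertexBuffer9* buffer,
                  const QuadVertex* vertexes, UINT vertexCount)
{
    void* data = NULL;
    // DISCARD tells the driver we don't care about the old contents,
    // so it doesn't have to stall waiting for the GPU to finish with them.
    buffer->Lock(0, vertexCount * sizeof(QuadVertex), &data, D3DLOCK_DISCARD);
    std::memcpy(data, vertexes, vertexCount * sizeof(QuadVertex));
    buffer->Unlock();
}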
If all your vertexes are loaded in one go, rendering is very fast and easy. As stated before, it's nothing more difficult than 'render a triangle between vertexes 1, 2 and 3'. However, because issuing them one triangle at a time is very inefficient, you may want to change your rendering assignment to something like 'render 5000 triangles starting at vertex 1', making the GPU render a triangle between vertexes 1, 2 and 3, another between 4, 5 and 6, and so on. This is called a TRIANGLE LIST.
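In Direct3D 9 terms that whole assignment is a single call. A small sketch, assuming the vertex buffer has already been bound (that is what SetStreamSource and SetFVF do):

Code:
#include <d3d9.h>

// Assumes the vertex buffer was already bound with SetStreamSource/SetFVF.
void DrawTriangleList(IDirect3DDevice9* device)
{
    // One call: the GPU makes a triangle out of every 3 consecutive vertexes,
    // here 5000 triangles starting at vertex 0.
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 5000);
}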
If you can't get your vertexes arranged this nicely (i.e. you need a triangle between 1, 2 and 3 but also between 2, 3 and 4) without creating duplicates all over, it may be better to use a so-called indexbuffer. This is, as the name suggests, a list of indexes, which can be sent to the GPU just like a vertexbuffer. It holds indexes like 1,2,3, 2,3,4 and, without creating duplicates in your vertexbuffer, you can tell the GPU to render '5000 triangles as listed in the indexbuffer'. An index uses only 2-4 bytes, while a vertex can use a lot more (often 16-32 bytes). So this saves a lot of space.
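A sketch of the indexbuffer route in Direct3D 9 (again assuming a device and an already-bound, filled vertex buffer; in a real renderer you would create the index buffer once and keep it around instead of per call):

Code:
#include <d3d9.h>
#include <cstring>

// Create a 16-bit index buffer, fill it and draw triangles through it.
void DrawWithIndexes(IDirect3DDevice9* device,
                     const unsigned short* indexes, UINT indexCount, UINT vertexCount)
{
    IDirect3DIndexBuffer9* indexBuffer = NULL;
    device->CreateIndexBuffer(indexCount * sizeof(unsigned short),
                              D3DUSAGE_WRITEONLY, D3DFMT_INDEX16,
                              D3DPOOL_MANAGED, &indexBuffer, NULL);

    void* data = NULL;
    indexBuffer->Lock(0, indexCount * sizeof(unsigned short), &data, 0);
    std::memcpy(data, indexes, indexCount * sizeof(unsigned short));
    indexBuffer->Unlock();

    device->SetIndices(indexBuffer);

    // 'Render indexCount/3 triangles as listed in the indexbuffer':
    // every 3 indexes pick 3 vertexes out of the vertex buffer.
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                 0,               // base vertex index
                                 0,               // lowest index used
                                 vertexCount,     // number of vertexes spanned
                                 0,               // start at index 0
                                 indexCount / 3); // number of triangles

    indexBuffer->Release();
}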
If all the triangles you need to render share at least one edge (like the triangles in a displacement) you can use a TRIANGLE STRIP instead. If you make the GPU render a triangle strip using the indexes 1, 2, 3, 4, 5, it will render three triangles: 1,2,3; 2,3,4; 3,4,5 (by the way, you don't NEED to use indexes, you can also render a vertexbuffer this way).

A triangle strip
This strip saves tons of bytes; however, you can only use one set of data (texture coordinates, normals, etc.) per vertex, while two triangles may share a position but need different texture coordinates or normals. You can probably render a cube very efficiently using a triangle strip, but you can't use different textures on each side because each vertex only contains one set of data. Plus, lighting would be horrible with just 8 normals.
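A strip is just another primitive type in the draw call. A tiny sketch (vertex buffer assumed to be bound already):

Code:
#include <d3d9.h>

// Assumes a vertex buffer with at least 5 vertexes is already bound.
void DrawStrip(IDirect3DDevice9* device)
{
    // 3 triangles out of 5 vertexes: (0,1,2), (1,2,3), (2,3,4).
    // A triangle list would need 9 vertexes for the same 3 triangles.
    device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 3);
}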
If you want to draw lines, you can use a LINE LIST or a LINE STRIP. A line list draws a line between vertexes 1 and 2, 3 and 4, 5 and 6, and so on. A line strip renders a line from 1 to 2 to 3 to 4. It's a lot faster than drawing single lines, but the lines must interconnect.
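The line variants work the same way, just with different primitive types (sketch, vertex buffer assumed to be bound):

Code:
#include <d3d9.h>

void DrawLines(IDirect3DDevice9* device)
{
    // Line list: 3 separate lines out of 6 vertexes: 0-1, 2-3, 4-5.
    device->DrawPrimitive(D3DPT_LINELIST, 0, 3);

    // Line strip: 3 connected lines out of 4 vertexes: 0-1, 1-2, 2-3.
    device->DrawPrimitive(D3DPT_LINESTRIP, 0, 3);
}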
Because of all these optimizations, one usually loads a certain texture A, renders all triangles with that texture, then loads texture B, and so on. Different colors (e.g. purple boxes or colored lines) can be rendered in one go, as colors can be defined in the vertexes. The easiest way to deal with this is to use an index buffer. Then, in our code, we do something like:
Code:
Load texture A
Render a TRIANGLE LIST using 100 triangles in indexbuffer starting at index 0
Load texture B
Render a TRIANGLE LIST using 10 triangles in indexbuffer starting at index 300
Etc.
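In Direct3D 9 that pseudo code maps to roughly the following. This is a sketch: the texture pointers, triangle counts and index offsets are made up, and the vertex and index buffers are assumed to be bound already (SetStreamSource, SetFVF, SetIndices).

Code:
#include <d3d9.h>

void RenderBatches(IDirect3DDevice9* device, UINT vertexCount,
                   IDirect3DTexture9* textureA, IDirect3DTexture9* textureB)
{
    // "Load texture A, render 100 triangles starting at index 0"
    device->SetTexture(0, textureA);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, 100);

    // "Load texture B, render 10 triangles starting at index 300"
    device->SetTexture(0, textureB);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 300, 10);
}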
Of course we could also arrange our vertex buffer in the right order, but moving indexes around is a lot easier than moving vertexes around. For all that pesky data that keeps changing every frame (e.g. moving models) we can send the data through a dynamic vertex buffer. We can also do it while rendering the scene:
Code:
Load texture C
Render a TRIANGLE LIST using 1 triangle BETWEEN (data of vertex1) (data of vertex2) (data of vertex 3)
But I am sure you can imagine how much slower this type of rendering is…
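Direct3D 9 does let you hand the vertex data straight to the draw call for exactly this kind of one-off geometry. A sketch (vertex struct from the earlier sketches; the matching FVF is assumed to be set):

Code:
#include <d3d9.h>

struct QuadVertex { float x, y, z; float u, v; };

// 'Render 1 triangle BETWEEN (data of vertex1) (data of vertex2) (data of vertex3)':
// the vertex data travels along with the draw call instead of living in a buffer.
void DrawOneTriangle(IDirect3DDevice9* device, IDirect3DTexture9* textureC)
{
    QuadVertex triangle[3] =
    {
        { 0, 0, 0,   0, 0 },
        { 1, 0, 0,   1, 0 },
        { 1, 1, 0,   1, 1 }
    };

    device->SetTexture(0, textureC);
    device->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 1, triangle, sizeof(QuadVertex));
}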
Rendering other stuff
At this point you might have wondered how to render other stuff, for instance points. Well, DirectX barely offers any means to render points (strictly speaking there is a point list primitive, but it is of little use here; OpenGL is a bit friendlier). In practice you'll have to render a very small sprite or a very small (set of) line(s).
Sprites are rendered the same way as brushes, but use a dynamic vertex buffer instead (in Hammer at least). Every time the camera moves, the corner points (vertexes) of the sprites are recalculated so that they always face the camera.
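A rough sketch of how the CPU can recompute a sprite's four corners every time the camera moves, using the camera's right and up directions so the quad always faces the camera (helper types and names are my own; in Direct3D those two directions can be pulled out of the view matrix):

Code:
struct Vector3 { float x, y, z; };

Vector3 Add(Vector3 a, Vector3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vector3 Scale(Vector3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Rebuild the four corners of a sprite around its center so the quad
// lies in the plane spanned by the camera's right and up vectors.
void BuildSpriteCorners(Vector3 center, float halfSize,
                        Vector3 cameraRight, Vector3 cameraUp,
                        Vector3 corners[4])
{
    Vector3 right = Scale(cameraRight, halfSize);
    Vector3 up    = Scale(cameraUp, halfSize);

    corners[0] = Add(Add(center, Scale(right, -1.0f)), Scale(up, -1.0f)); // bottom left
    corners[1] = Add(Add(center, right),               Scale(up, -1.0f)); // bottom right
    corners[2] = Add(Add(center, right),               up);               // top right
    corners[3] = Add(Add(center, Scale(right, -1.0f)), up);               // top left
}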
Decals are even harder to render. A decal is projected onto a set of brush faces by the CPU: the CPU duplicates all these brush faces, clips them according to the projection of the decal, and adds all those triangles to the rendering list. The GPU is eventually asked to render these triangles in such a way that they overwrite the colors of the brushes the decals are projected upon. Now you see why a large decal (or an infodecal) covering many triangles (e.g. a displacement) is pretty expensive to render (each decal times each triangle). And if two or more decals overlap, we are rendering the same set of triangles three or more times!
DirectX offers a thing called 'instancing', which basically means rendering the same thing at different places. Static props can be uploaded once, then instanced to wherever they are needed. This saves tons of data! Usually though, instancing isn't really an option, because slight changes in animations or textures require you to render all those instances separately.
If an object is outside the screen, it is still sent to the GPU and rendered. The CPU has to decide whether objects need to be rendered or not; the GPU doesn't care. Usually programmers keep a list of their objects with a 'should render' flag, which gets recalculated every time the camera moves. In the case of HL2, objects are grouped into visleafs and their visibility is only updated when the camera moves from one visleaf to another. This saves a lot of calculations.
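In code that 'should render' bookkeeping can be as simple as this sketch (my own object struct, not Source's actual one):

Code:
#include <vector>

struct RenderObject
{
    bool shouldRender;   // recalculated when the camera moves (e.g. per visleaf)
    // ... vertex buffer, texture, etc.
};

void RenderVisibleObjects(std::vector<RenderObject>& objects)
{
    for (size_t i = 0; i < objects.size(); ++i)
    {
        if (!objects[i].shouldRender)
            continue;    // the GPU never even hears about this object
        // ... issue the draw calls for objects[i] here
    }
}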
Whether or not the vertexes of these objects are removed from the rendering buffers is a different matter. Sometimes it’s better to leave them in, because it requires more effort to remove them and clean up afterwards!
The only culling the GPU offers by itself is that it only renders triangles facing the camera (backface culling). It's something, but not much…
If you feel interested enough to dive into this matter and code it yourself, get Visual C# Express and XNA Game Studio (both free and downloadable via the Microsoft site), then learn C#. There is no simpler way to do this, I'm afraid. You can also dive into OpenGL (the open counterpart of DirectX), but it differs on many key points and will lead you away from HL2.









