gambini wrote: Take a look at the stove in ep2, props/forest/stove01. It doesn't have a short jump as I mentioned, but it has 7 submodels (ref + 6 LODs), although the polycount in the last one is ten times lower than in the ref. But according to your theory, that model is suboptimal.
No, absolutely not. (Well, I'd argue it was suboptimal on the PC, but for a different reason than the one you're asking about.)
Maybe I haven't been very clear, but the essence of all these posts boils down to this: if the GPU is the bottleneck, your "optimizations" need to save work for the GPU by doing extra work on the CPU. This can be done on a per-object basis (LODs) and/or on a per-scene basis (occluders, areaportals, hint brushes).
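To make the per-object half concrete, here's a minimal C++ sketch of distance-based LOD selection. The Lod struct, the numbers, and SelectLod are all hypothetical, made up for illustration; in Source the switch distances actually come from $lod blocks in the model's .qc, but the engine is doing the moral equivalent of this every frame:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical type for illustration only.
struct Lod { int triangleCount; float switchDist; };

// Per-object CPU work: spend a few compares each frame picking the
// cheapest acceptable mesh, so the GPU shades fewer triangles.
const Lod& SelectLod(const std::vector<Lod>& lods, float distToCamera) {
    for (const Lod& lod : lods)
        if (distToCamera <= lod.switchDist)
            return lod;
    return lods.back(); // beyond the last threshold: cheapest mesh
}

int main() {
    // Ref + LODs, roughly matching the stove01 ratio (last LOD ~10x cheaper).
    std::vector<Lod> stove = {
        {9000, 15.f}, {6000, 30.f}, {3000, 60.f}, {900, 1e9f}};
    std::printf("at 45 units: draw %d tris\n",
                SelectLod(stove, 45.f).triangleCount);
}
```

That handful of compares is the "extra work on the CPU"; the thousands of triangles never sent down the pipe are the work saved for the GPU.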
Somewhat related: some modern engines and libraries can actually save work on the CPU by doing extra work on the GPU (the opposite of what I was talking about in the last paragraph). But implementing systems like this is complicated even when working with consistent hardware, or hardware with strict feature compliance (like all DirectX 10 and 11 cards), and it's show-stoppingly hard to implement on older hardware. One example of this kind of work offloading is BF3, where DICE uses compute shaders on the GPU to generate textures for terrain. Other examples are Ageia's PhysX and the Bullet physics library.
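Roughly what that offload looks like from the CPU side, as a sketch: DispatchTerrainBlendCS is a made-up stand-in for a real compute dispatch call (ID3D11DeviceContext::Dispatch and friends), stubbed out here so the sketch compiles. This is not DICE's actual code, just the shape of the idea:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a real compute-shader dispatch; stubbed so
// this compiles. A real engine would submit GPU work here.
static void DispatchTerrainBlendCS(uint32_t groupsX, uint32_t groupsY) {
    (void)groupsX; (void)groupsY;
}

// CPU route: the CPU composites the terrain layers into a texture itself,
// one texel at a time, and can do nothing else meanwhile.
static void BlendTerrainOnCpu(std::vector<uint32_t>& texels, int w, int h) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            texels[y * w + x] = 0xFF000000u; // placeholder blend result
}

// GPU route: the CPU only records one dispatch (one thread group per
// 8x8 texel tile) and moves on; the blend itself runs on the GPU.
static void BlendTerrainOnGpu(int w, int h) {
    DispatchTerrainBlendCS((w + 7) / 8, (h + 7) / 8);
}

int main() {
    std::vector<uint32_t> tex(1024 * 1024);
    BlendTerrainOnCpu(tex, 1024, 1024); // CPU pays for every texel
    BlendTerrainOnGpu(1024, 1024);      // CPU pays for one command
}
```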
If your CPU is the bottleneck, as is the case with all SMALL AND CHEAP props, then you lose performance by trying to save work for the GPU by doing extra work on the CPU.
Back to stove01: that is not a small and cheap prop, so the GPU spends more time actually rendering it than the CPU spends telling the GPU to render it (because that takes time too). So if you can save work for the GPU by doing extra work on the CPU, you come out ahead.
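Some toy numbers, completely made up just to show the shape of the trade-off: per-draw CPU overhead is roughly fixed, while GPU cost scales with the triangles you actually draw:

```cpp
#include <cstdio>

int main() {
    // Illustrative costs only, not measurements from any real hardware.
    const float cpuPerDrawUs = 15.0f;  // issuing the draw call, state setup
    const float gpuPerTriUs  = 0.01f;  // per-triangle shading cost

    // Small, cheap prop: 200 tris. The CPU side already dominates, so an
    // LOD that halves the triangles saves ~1 us of GPU time that nobody
    // was waiting on anyway.
    std::printf("small prop: cpu %.0f us, gpu %.0f us\n",
                cpuPerDrawUs, 200 * gpuPerTriUs);

    // stove01-like prop: 9000 tris in the ref. Here the GPU is the one
    // doing the waiting, so dropping to a ~10x cheaper LOD is a real win.
    std::printf("stove ref:  cpu %.0f us, gpu %.0f us\n",
                cpuPerDrawUs, 9000 * gpuPerTriUs);
    std::printf("stove lod:  cpu %.0f us, gpu %.0f us\n",
                cpuPerDrawUs, 900 * gpuPerTriUs);
}
```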
The CPU and GPU run in parallel. In a perfect world the CPU would update the world state and issue commands to the GPU in exactly the amount of time the GPU takes to render the objects passed to it (albeit a frame behind). Unfortunately that is rarely the case, but the optimization techniques we're talking about exist exclusively to bring it closer to being true. In the usual case where one finishes before the other, the faster one literally does nothing until the slower one catches up.
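In other words, frame time is effectively max(CPU time, GPU time). A tiny sketch with made-up millisecond numbers:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Toy per-frame timings; numbers are invented for illustration.
    float cpuMs = 6.0f;  // updating world state + issuing draw commands
    float gpuMs = 9.0f;  // actually rendering what the CPU submitted

    // The two run in parallel (the GPU a frame behind), so the frame
    // costs whichever side is slower; the faster side just idles.
    std::printf("frame: %.1f ms\n", std::max(cpuMs, gpuMs));

    // An "optimization" that moves 2 ms of work from the GPU to the CPU:
    std::printf("after trade: %.1f ms\n",
                std::max(cpuMs + 2.0f, gpuMs - 2.0f));
    // 8.0 ms: a win here, because the scene was GPU-bound. The exact same
    // trade on a CPU-bound scene would make the frame slower.
}
```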
It's not straightforward. Mappers and modelers should have at least a basic understanding of the graphics pipeline and the actual overhead of the techniques they're using to properly optimize a scene. Use the wrong techniques and they'll slow your rendering down; use the right ones and they'll speed it up. But most of the time I see people doing a mix of the two: they save time with some techniques, then gobble up some or all of those savings by using the wrong one somewhere else.