Re: Supported hardware question
Posted: Tue Sep 15, 2009 15:07
by Sussudio
Gez wrote: That's why when you update your drivers, your chipset may become able to do things it couldn't do
Thanks, and that's exactly why I asked this question. Since I'm using older drivers, which generally work better for my video card, I might be missing some of the newer features GZDoom requires.
Re: Supported hardware question
Posted: Tue Sep 15, 2009 23:19
by Graf Zahl
None of the current features requires more than OpenGL 2.0. But you should update your drivers unless you use really old hardware.
One of NVidia's more recent drivers resulted in a massive speed boost in GZDoom for me (10-20%, depending on the map). I have no idea what was changed but it must have been significant.
Interestingly, I get almost no speed boost out of using VBOs, despite everyone saying they're so much faster than immediate mode. I implemented VBO support in the flat renderer over the last few days. The result: the render loop itself now spends only half the time it did before, but that gain is lost again because the card now stalls a bit longer at the end of draw calls.
It has become clear that vertex throughput never was an issue for GZDoom on my GF 8600 so I can't expect much more here. The real bottleneck is elsewhere, sadly in a place which I can't optimize.
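To illustrate what the change does in general terms, here's a stripped-down sketch of the two approaches (made-up struct and function names, not the actual GZDoom flat code; it assumes a GL context and extension loading are already set up). Immediate mode issues a driver call per vertex every frame, while a VBO uploads the vertices once and draws each fan with a single call:
Code: Select all
#include <GL/glew.h>   // assumes an active GL context and loaded extensions
#include <cstddef>

struct FlatVertex { float x, y, z, u, v; };   // hypothetical vertex layout

// Immediate mode: one driver call per vertex, every frame.
void DrawFlatImmediate(const FlatVertex *v, int count)
{
    glBegin(GL_TRIANGLE_FAN);
    for (int i = 0; i < count; i++)
    {
        glTexCoord2f(v[i].u, v[i].v);
        glVertex3f(v[i].x, v[i].y, v[i].z);
    }
    glEnd();
}

// VBO: upload the vertices once at level load...
GLuint flatvbo;
void UploadFlatVertices(const FlatVertex *v, int count)
{
    glGenBuffers(1, &flatvbo);
    glBindBuffer(GL_ARRAY_BUFFER, flatvbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(FlatVertex), v, GL_STATIC_DRAW);
}

// ...then each subsector becomes a single draw call from the buffer.
void DrawFlatVBO(int firstvertex, int count)
{
    glBindBuffer(GL_ARRAY_BUFFER, flatvbo);
    glVertexPointer(3, GL_FLOAT, sizeof(FlatVertex), (void*)offsetof(FlatVertex, x));
    glTexCoordPointer(2, GL_FLOAT, sizeof(FlatVertex), (void*)offsetof(FlatVertex, u));
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glDrawArrays(GL_TRIANGLE_FAN, firstvertex, count);
}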
Re: Supported hardware question
Posted: Wed Sep 16, 2009 19:57
by GuntherDW
Graf Zahl wrote: None of the current features requires more than OpenGL 2.0. But you should update your drivers unless you use really old hardware.
One of NVidia's more recent drivers resulted in a massive speed boost in GZDoom for me (10-20%, depending on the map). I have no idea what was changed but it must have been significant.
Interestingly, I get almost no speed boost out of using VBOs, despite everyone saying they're so much faster than immediate mode. I implemented VBO support in the flat renderer over the last few days. The result: the render loop itself now spends only half the time it did before, but that gain is lost again because the card now stalls a bit longer at the end of draw calls.
It has become clear that vertex throughput never was an issue for GZDoom on my GF 8600 so I can't expect much more here. The real bottleneck is elsewhere, sadly in a place which I can't optimize.
I thought the new GC made ZDoom a lot faster? At least on Windows that's true. But dynamic lighting in GZDoom is still quite slow, and combined with the slowness of GZDoom on Linux (I know that this isn't entirely your fault), it sadly makes it not that great a port for playing big maps at HD resolutions.
The texture resize function also seems to take up a lot of CPU power, though. Is there anything that can be done about that one?
Re: Supported hardware question
Posted: Wed Sep 16, 2009 22:51
by Graf Zahl
GuntherDW wrote:
I thought the new GC made ZDoom a lot faster? At least on Windows that's true.
Only when there's a lot of action going on. But all it does is keep tasks that should run in the background at a lower priority.
GuntherDW wrote:
But dynamic lighting in GZDoom is still quite slow, and combined with the slowness of GZDoom on Linux (I know that this isn't entirely your fault), it sadly makes it not that great a port for playing big maps at HD resolutions.
I know. And this is one of the things I'd like to optimize in the new renderer. But it will require shaders, so it's new hardware only.
GuntherDW wrote:
The texture resize function also seems to take up a lot of CPU power, though. Is there anything that can be done about that one?
Can't be helped. Texture resizing is a resource-consuming task.
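To give an idea of the amount of work involved: even a trivial upscale has to touch every pixel of every texture on the CPU, and the actual resize filters do far more per pixel than this made-up 2x nearest-neighbor example:
Code: Select all
#include <vector>

struct Pixel { unsigned char r, g, b, a; };

// Hypothetical 2x nearest-neighbor upscale: every output pixel is computed
// on the CPU, once per texture load. Smarter filters read a whole
// neighborhood of source pixels for each of these writes.
std::vector<Pixel> Resize2x(const std::vector<Pixel> &src, int w, int h)
{
    std::vector<Pixel> dst(4 * w * h);
    for (int y = 0; y < 2 * h; y++)
    {
        for (int x = 0; x < 2 * w; x++)
        {
            dst[y * (2 * w) + x] = src[(y / 2) * w + (x / 2)];
        }
    }
    return dst;
}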
Re: Supported hardware question
Posted: Wed Sep 16, 2009 22:59
by Nash
Well, this is sad news... eh, it's okay, it's not like we're in a rush to see the new renderer or anything. You can take as much time as you want on it until you find a way to make it as fast and as optimized as you possibly can...
Re: Supported hardware question
Posted: Thu Sep 24, 2009 0:50
by Graf Zahl
Update:
Today I redid the texture code for the old renderer and modelled it after what I had planned for the new one. It's mostly working. Cleaning this up was the main reason for starting something completely new; that's no longer necessary, so I'll integrate all further changes into the old renderer.
I already deleted all the duplicated code for the rewrite as it's no longer needed.
Re: Supported hardware question
Posted: Thu Sep 24, 2009 2:09
by Nash
So basically, we're not going to see a fancy new renderer, but an improved old renderer... correct?
If it helps you clean up the code so it's easier to maintain, that's all that matters I think.

Re: Supported hardware question
Posted: Thu Sep 24, 2009 8:50
by Graf Zahl
No. I will change it so that the actual rendering code is redone. But I won't redo all the level processing and setup code. If I had gotten some performance gains out of that I would have, but since that's not the case, why even bother?
One other note: I tested the shader code on an old GF6800 yesterday, with disappointing results. It's glitchy and slow (using shaders loses 50% performance), so I don't think I'll try to keep shaders fully operational on such old hardware anymore. I'll need to make an exception for special colormaps on camera textures, but that'll be the only thing.
Not having to deal with 2 freely mixable rendering paths should help clean this up considerably.
Re: Supported hardware question
Posted: Tue Sep 29, 2009 8:31
by Graf Zahl
Here's an update based on an older post:
Graf Zahl wrote:
Just to give you an overview of issues:
- there's no abstraction of textures. This is a significant problem because any special effects like warping, brightmaps or glowing flats are crudely hacked in, making the code a big mess. I'd rather replace it with a 2-tier approach: separate it into textures and materials. The texture is just that: a direct representation of the graphic. The material, on the other hand, defines which shader is used, whether it's a multitexture setup, and so on. This alone should clean up 50% of the mess but is no longer compatible with the way dynamic lights are drawn right now. Which brings us to the next point:
This change is done. And it's amazing how it simplified matters.
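In rough terms, the split looks like this (a bare-bones sketch with made-up names, not the actual classes):
Code: Select all
#include <GL/glew.h>   // assumes a GL context; all names below are made up

void UseShader(int shaderindex);   // hypothetical: activates the matching GLSL program

// The texture is nothing but the graphic that lives on the GPU.
class HWTexture
{
    GLuint glTexID;
public:
    HWTexture() : glTexID(0) {}
    void Bind(int texunit)
    {
        glActiveTexture(GL_TEXTURE0 + texunit);
        glBindTexture(GL_TEXTURE_2D, glTexID);
    }
};

// The material decides how it gets drawn: which shader
// (warp, brightmap, glow, ...) and how many texture layers it uses.
class Material
{
    HWTexture *layers[2];   // e.g. base texture plus an optional brightmap
    int shaderindex;
public:
    void Bind()
    {
        UseShader(shaderindex);                 // pick the effect once
        for (int i = 0; i < 2; i++)
        {
            if (layers[i]) layers[i]->Bind(i);  // bind each layer to its unit
        }
    }
};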
These 2 points became trivialities after the GL texture class was separated and all the ugly workarounds that were in place were finally gone:
- there's 2 ways to render special colormaps (e.g. inverse invulnerability): Use a shader or create a different texture
- there's 2 ways to warp a texture: use a shader or create a different texture.
If I want I can even add textures that are forced to be software warped now - even on modern hardware where shaders are used for warping.
- the way dynamic lighting is done is just hideous. It's impossible to work with this and still have texture effects.
This is next on my list - and it will be the point where I will finally separate the rendering paths. Currently the only distinction is where the shaders get applied but if I manage to add this I can finally retire the problem code on newer cards.
Re: Supported hardware question
Posted: Fri Oct 02, 2009 18:09
by Graf Zahl
Some bad news.
This has been going really well recently but today I hit a dead end. The new lighting code is almost complete but when testing it I only get a crash deep in the graphics driver. And I can't find the cause. All functions I call to make this work return 'no error'.
So unless a driver update makes this go away there's nothing I can do. If it's not a driver bug I can't find out what is wrong here.

Re: Supported hardware question
Posted: Fri Oct 02, 2009 20:02
by Cherepoc
Damn, I thought the new renderer was almost done.
Anyway, great work, Graf! Hope you (or nvidia) will find the solution.
Re: Supported hardware question
Posted: Sat Oct 03, 2009 7:10
by Gez
Can you access a machine with different hardware (ATI or even Intel) to see how it works there?
Re: Supported hardware question
Posted: Sat Oct 03, 2009 9:46
by Graf Zahl
No. I only have 2 computers, one of which is too old to handle this at all.
But I did some further tests with much more simplified data and shaders. It doesn't crash anymore but it exhibits some really odd behavior which looks like a driver bug. And if there's a driver bug the crashes may well be another effect of that. So unless I get further information I can't do much.
I committed the testing code in disabled form to the SVN though.
So if anyone wants to check for himself, here's what to do:
1. define TESTING globally in a header which gets included by both gl_renderer.cpp and gl_flats.cpp.
2. Edit wadsrc\static\shaders\glsl\main.fp and change the first line to #ifdef DYNLIGHT
3. Compile and run.
4. Swap lines 12 and 13 in the shader (vec4 light =... and vec4 texel =...) and run again.
No matter what I do, I either get the texture data or the texture buffer data, but never both at the same time. Obviously with such strange behavior there's no point in moving forward. It just doesn't run on my system so I can't test it.
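For anyone who hasn't looked at the code, the test is roughly the following setup, sketched here with made-up names (not the code in SVN; the core GL names are used here, on 2009 drivers these exist as the ARB/EXT equivalents): the light data goes into a buffer object which the fragment shader reads through a samplerBuffer on texture unit 1, next to the regular texture on unit 0.
Code: Select all
#include <GL/glew.h>   // assumes a GL context with texture buffer support
#include <cstddef>

GLuint lightbuffer, lightbuffertex;

// Upload the per-subsector light data into a buffer and expose it to the
// fragment shader as a buffer texture on unit 1; the regular texture
// stays on unit 0. The odd behavior described above is that the shader
// only ever sees one of the two.
void CreateLightBuffer(const float *lightdata, size_t size)
{
    glGenBuffers(1, &lightbuffer);
    glBindBuffer(GL_TEXTURE_BUFFER, lightbuffer);
    glBufferData(GL_TEXTURE_BUFFER, size, lightdata, GL_DYNAMIC_DRAW);

    glGenTextures(1, &lightbuffertex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_BUFFER, lightbuffertex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, lightbuffer);
    glActiveTexture(GL_TEXTURE0);
}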
Re: Supported hardware question
Posted: Sun Oct 04, 2009 1:58
by GuntherDW
Okay, forgot I ever posted something here.
First, I had to add gl_rendererstate.cpp to the CMakeLists. Also, when I enabled TESTING, it produced these compile errors:
Code: Select all
[ 38%] Building CXX object src/CMakeFiles/zdoom.dir/gl/textures/gl_cameratexture.o
[ 38%] Building CXX object src/CMakeFiles/zdoom.dir/gl/scene/gl_decal.o
[ 38%] Building CXX object src/CMakeFiles/zdoom.dir/gl/scene/gl_drawinfo.o
[ 38%] Building CXX object src/CMakeFiles/zdoom.dir/gl/scene/gl_flats.o
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp: In member function ‘void GLFlat::DrawSubsectorLights(subsector_t*, int)’:
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp:141: warning: comparison between signed and unsigned integer expressions
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp: In member function ‘void GLFlat::DrawSubsector(subsector_t*)’:
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp:168: warning: comparison between signed and unsigned integer expressions
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp: In member function ‘void GLFlat::DrawSubsectors(bool)’:
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp:190: error: ‘class FRenderState’ has no member named ‘EnableLights’
/GuntherDW/src/svn/gzdoom/src/gl/scene/gl_flats.cpp:246: error: ‘class FRenderState’ has no member named ‘EnableLights’
make[2]: *** [src/CMakeFiles/zdoom.dir/gl/scene/gl_flats.o] Error 1
make[1]: *** [src/CMakeFiles/zdoom.dir/all] Error 2
make: *** [all] Error 2
I'll see if I can test this, though.
edit: Ahah, glrendererstate.cpp doesn't even include EnableLights. Did you forget to update that one in the SVN repo?
Re: Supported hardware question
Posted: Sun Oct 04, 2009 11:22
by Graf Zahl
Not tested yet. EnableLights was only added to gl_renderstate after uncommenting the TBO test code.