Re: Supported hardware question

Posted: Wed Sep 09, 2009 20:17
by Enjay
I'd be surprised if any crash reports from the new renderer were much use at this stage. It is so early in its development, there is still so much to do, and Graf expects it to crash out on unsupported hardware anyway.

Re: Supported hardware question

Posted: Wed Sep 09, 2009 20:19
by Remmirath
I know, I just wanted to be sure whether I need to replace my video card or not. :P

Re: Supported hardware question

Posted: Wed Sep 09, 2009 20:52
by Graf Zahl
Can't say. The code is too much in a state of flux right now. I'm not trying to make it stable. Currently I'm only making sure it works on my system. Please don't bother with crash logs. The most likely cause is that the renderer tries to get a GL API that does not exist in the driver being used.
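
(For anyone wondering what that kind of failure looks like in practice: below is a minimal sketch of fetching a GL entry point from the driver and checking that it actually exists. The typedef and function names are made up for the example; this is not GZDoom's actual code.)

Code:

// Minimal sketch of "getting a GL API from the driver" on Windows.
// Illustrative names only; assumes a GL context is already current.
#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNBINDBUFFER)(GLenum target, GLuint buffer);

bool LoadBufferEntryPoint()
{
    // wglGetProcAddress returns NULL when the installed driver does not
    // expose the requested function (e.g. it only implements an older GL).
    PFNBINDBUFFER pBindBuffer =
        (PFNBINDBUFFER)wglGetProcAddress("glBindBuffer");
    if (pBindBuffer == NULL)
    {
        // Missing entry point: fall back here instead of calling through
        // a null pointer, which is exactly the kind of crash meant above.
        return false;
    }
    return true;
}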

Re: Supported hardware question

Posted: Wed Sep 09, 2009 22:23
by Graf Zahl
Now here's something I really did *NOT* expect:

Everybody seems to say that VBOs (vertex buffers, i.e. storing the vertex data directly in video RAM) are the thing of the future because they're so much faster, yadda, yadda, yadda.

So I set everything up to use VBOs and was finally able to make a first test of the new renderer with only flats being rendered (using MAP09 of Hellcore while looking down onto the village from the church steeple, which is a great scene to test rendering speed).

The results:

- Old renderer: 52 fps
- New renderer: 48 fps

That can't be, I thought, so I did some tests. First I commented out some shader code that wasn't needed for this scene. The result: 49 fps, i.e. not worth the overhead.

Next I commented out some fields in the vertex structure that weren't needed. Now the FPS rate went up to 52. Better, but still not worth it.

Now I got really curious and converted the code to the old (and supposedly obsolete) immediate mode. The surprising result: 60 fps. Now I have to ask myself: why do I have to jump through hoops and write the most convoluted garbage code to get the data to the GFX card in an 'optimized' way, when the simpler and much more convenient older method works so much better?

Makes no sense to me...
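
(For context, here is roughly what the two submission paths being compared look like in old-style OpenGL. The FlatVertex layout and function names are invented for this sketch, it assumes GL 1.5 declarations and already-loaded buffer entry points, and it is not the actual renderer code.)

Code:

#include <cstddef>   // offsetof
#include <GL/gl.h>   // plus GL 1.5 declarations / loaded buffer entry points

// Hypothetical vertex layout for this sketch.
struct FlatVertex
{
    float x, y, z;   // position
    float u, v;      // texture coordinates
};

// Immediate mode: every vertex is pushed through the API each frame.
void DrawFlatImmediate(const FlatVertex *v, int count)
{
    glBegin(GL_TRIANGLE_FAN);
    for (int i = 0; i < count; i++)
    {
        glTexCoord2f(v[i].u, v[i].v);
        glVertex3f(v[i].x, v[i].y, v[i].z);
    }
    glEnd();
}

// VBO path: upload the data into a GL-managed buffer once, then draw from
// it with vertex arrays. The setup code is the overhead being measured.
GLuint CreateFlatVBO(const FlatVertex *v, int count)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(FlatVertex), v, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}

void DrawFlatVBO(GLuint vbo, int count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(FlatVertex), (const void *)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(FlatVertex),
                      (const void *)offsetof(FlatVertex, u));
    glDrawArrays(GL_TRIANGLE_FAN, 0, count);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}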

Re: Supported hardware question

Posted: Wed Sep 09, 2009 22:42
by Nash
Heh, it runs for me with r439... but all I get is a black screen. The game still runs, though (I can move around, shoot and kill monsters, but I only hear the sounds).

I'm not seeing the flats as Enjay describes it...

But hey I know it's still heavy WIP stuff. The fact that it runs on my system is all I needed to know. ;)

ATI 4870 X2

Re: Supported hardware question

Posted: Wed Sep 09, 2009 23:01
by Enjay
Nash wrote: I'm not seeing the flats as Enjay describes it...
Just for the hell of it really (r447)...

[two screenshots]

Re: Supported hardware question

Posted: Thu Sep 10, 2009 2:33
by Janitor
I guess I'm going to hop onto this and say "I'm fine with this as long as we see some new features." (/me looks at Legacy's corona effects.) I have no issues upgrading the good ol' GFX card. I have an Nvidia GeForce 8600 GT. I imagine that should be sufficient?

Re: Supported hardware question

Posted: Thu Sep 10, 2009 6:54
by Graf Zahl
Of course. I got the same.

Re: Supported hardware question

Posted: Thu Sep 10, 2009 11:21
by Gez
Graf Zahl wrote: Now I got really curious and converted the code to the old (and supposedly obsolete) immediate mode. The surprising result: 60 fps. Now I have to ask myself: why do I have to jump through hoops and write the most convoluted garbage code to get the data to the GFX card in an 'optimized' way, when the simpler and much more convenient older method works so much better?

Makes no sense to me...
There is the possibility that on future hardware immediate mode will be comparatively slower than VBOs.

Also, I figured you might not be the only one who stumbled upon this dilemma, so I googled for arguments between IM and VBO, and found this. Also this too.

Re: Supported hardware question

Posted: Thu Sep 10, 2009 12:04
by Graf Zahl
Well, yeah. I suspect the second case. With Doom the problem is that VBOs don't map well to the needed functionality, because I need to carry along every field I *might* need - even if it's only for one polygon in the entire set.

Another issue is that in order to use a streaming VBO I need to make a third pass over the data. The end result is that I lose on the CPU side what I might gain on the GPU side.

The really funny thing is that the last driver I installed seems to have significantly boosted the performance of immediate mode. It was quite amazing how much faster GZDoom is now compared to the last driver I used.

After all this I have stopped believing the rampant pro-VBO propaganda. VBOs are so rigid that the driver is often in a better position to get the data where it is needed than I am by preemptively allocating a large buffer that has to include fields that see only minimal use.
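
(To illustrate what "carrying all fields along" means in practice, here is a made-up vertex layout of the kind a single shared VBO forces on you. The field names are hypothetical, not the real GZDoom structures.)

Code:

// Illustrative only: with one big VBO every vertex must use the same fixed
// layout, so fields that only a few polygons actually use still get stored
// and uploaded for all of them.
struct FatVertex
{
    float x, y, z;              // position             - always needed
    float u, v;                 // texture coordinates  - always needed
    float u2, v2;               // second texture layer - only some polygons
    float glowHeight;           // special effect       - only a few polygons
    unsigned char r, g, b, a;   // per-vertex color     - rarely varies
};
// 36 bytes per vertex, although most vertices only ever use the first 20.
// Immediate mode submits only what the current polygon needs, while a
// streaming VBO adds an extra pass over the level data each frame just to
// pack everything into this layout before drawing.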

Re: Supported hardware question

Posted: Mon Sep 14, 2009 16:29
by Graf Zahl
I think I need to make an announcement:

I have put the renderer rewrite on hold indefinitely. My initial plans did not work out, and instead of the hoped-for performance gains it actually got slower!

Before I invest any more work here, I have to do some experiments to see what kinds of optimizations can actually increase the rendering speed. As it looks, the stuff I hoped would gain the most had no effect whatsoever while causing a lot of work - so it's a waste of time to pursue this course of action any further.

Re: Supported hardware question

Posted: Mon Sep 14, 2009 16:58
by Enjay
Well, that's unfortunate. You have already sunk a good number of hours into it. I suppose it's good that you have realised this now rather than later though.

Do you have any thoughts as to how you might progress as and when you look at it again? Clearly you aren't happy with the renderer as it stands ATM either.

Re: Supported hardware question

Posted: Mon Sep 14, 2009 18:42
by Graf Zahl
The one thing I got out of these tests is that the flat rendering mechanism is almost as good as it gets. I tried keeping things precalculated, but the speed difference was so small it was hardly worth it (less than 0.1 ms on a large level), and most of that was probably due to the still-missing lighting calculations, which I need to keep anyway. That's not worth completely redoing the code, so I might as well scrap what I started there. Fortunately this is the only part that will get scrapped. The rest of what I already did works fine.

The code I am least satisfied with in the old renderer is the wall rendering. It wastes a good amount of time setting things up, especially if there are lots of 3D floors. But after my tests I'll approach this differently than I had planned.

Re: Supported hardware question

Posted: Tue Sep 15, 2009 13:56
by Sussudio
When GZDoom detects the capabilities of a video card, where does it get this info from? The video drivers? By directly testing the hardware?

Re: Supported hardware question

Posted: Tue Sep 15, 2009 14:58
by Gez
Sussudio wrote: When GZDoom detects the capabilities of a video card, where does it get this info from? The video drivers?
That.

To query the hardware directly, you'd have to have querying routines dedicated to each different piece of hardware out there... Possible, maybe; reasonable, certainly not!

That's why when you update your drivers, your chipset may become able to do things it couldn't do before. And if your drivers aren't updated, on the other hand... For example, I have a GeForce 9600M GT. The hardware is capable of supporting OpenGL 3.0, PhysX and CUDA. But it's an MSI computer, and for some reason MSI decided to prevent its laptops from using Nvidia/ATI's own drivers (trying to install Nvidia's latest drivers results in an error because they cannot identify the chipset), so I'm stuck with their custom version of the 176.88 branch, which is restricted to OpenGL 2.1, no PhysX, no CUDA. (Nvidia's drivers are currently in the 186.40 branch, last time I checked...)
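
(For reference, capability detection through the driver typically boils down to something like the sketch below: generic code, not GZDoom's, and the function name is made up.)

Code:

// Once a GL context exists, glGetString reports what the driver supports;
// the application never queries the hardware directly.
#include <cstdio>
#include <cstring>
#include <GL/gl.h>

void ReportGLCapabilities()
{
    const char *renderer   = (const char *)glGetString(GL_RENDERER);
    const char *version    = (const char *)glGetString(GL_VERSION);
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);

    std::printf("Renderer:   %s\n", renderer ? renderer : "unknown");
    std::printf("GL version: %s\n", version ? version : "unknown");

    // Feature checks are string searches against what the driver advertises,
    // which is why a driver update (or a locked-down OEM driver) changes
    // what the very same chip is allowed to do.
    bool hasVBO = extensions != NULL &&
                  std::strstr(extensions, "GL_ARB_vertex_buffer_object") != NULL;
    std::printf("VBO support: %s\n", hasVBO ? "yes" : "no");
}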