I'd like to know which consumes fewer resources when rendered in GZDoom: one large model containing five objects, or an individual model for each object? Right now I have a map with ~1800 model objects visible in a single area, and it takes 96% of the processing power of my Intel Core 2 Duo E6850 3.00 GHz CPU (2 cores, 2 threads), overclocked and running at 3.45 GHz.
Model usage was probably not meant to be abused in GZDoom like that, but I would be interested to know whether this could be improved somehow engine-wise. If so, consider this a feature request. The map is an environmental map and the models are vegetation: grass, trees, etc.
Re: GZDoom performance question
Posted: Tue Aug 23, 2011 15:58
by Gez
For pure rendering performance, I don't think it'd make much difference whether the same number of polygons is divided into several meshes or not. However, if you have fewer actors on the map, there will be less actor management going on, so it will improve performance a bit.
As for performance improvement on the engine side, don't count too much on it. Graf isn't active at the moment and as for myself I have no knowledge of OpenGL.
Re: GZDoom performance question
Posted: Tue Aug 23, 2011 17:18
by DoomerMrT
Well, I will see what I can do to optimize things, but until we get Graf back, my future players should consider obtaining a 2nd generation Core i7 processor. I had a similar issue a while back with a large number of transparent true-color sprites (~1000); however, those ate gigabytes of RAM rather than CPU.
Re: GZDoom performance question
Posted: Tue Oct 25, 2011 13:13
by ibm5155
I think even in modern games you'd get slowdown if you put in 1800 models...
Well, Hunters Moon is playable with all those models.
Re: GZDoom performance question
Posted: Sun Mar 11, 2012 17:39
by DoomerMrT
As this is still bugging me, I made a quick benchmark to see what difference there is between using models or sprites for actors. I expected better performance for sprites, of course, but it's still worth investigating. I took my map with models in it and got 34 fps when looking at the models. Then I took the map again, but this time the same actors used imp sprites (apart from looking funny, this improves performance quite a bit), and I got 60-61 fps. The actors only have the NOINTERACTION flag set in both cases. This test shows that the fps drop has more to do with rendering than with actor control, as the number of things didn't change. Although the CPU usage is quite high even with sprites (~40%), it is nowhere near as crippling as with models (~100%). This suggests that GZDoom doesn't take advantage of the video card as it should, which would lower the load on the CPU when rendering models.
Vertical sync was on during test, but turning it off didn't give better framerates from the screenshot's position.
Hardware was:
Intel Core 2 Duo 3.00GHz CPU at 3.29GHz, 2 cores, 2 threads
3GB DDR2 800MHz RAM
Nvidia 8800GTX 768MB
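Converting those measured frame rates into per-frame time budgets makes the gap concrete (a quick back-of-the-envelope sketch, not part of the original benchmark):

```cpp
// At 34 fps each frame takes ~29.4 ms; at 60 fps, ~16.7 ms. The model
// scene therefore costs roughly 12.7 extra milliseconds of work per
// frame compared to the sprite version of the same map.
double frameTimeMs(double fps) { return 1000.0 / fps; }
```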
Re: GZDoom performance question
by Graf Zahl
Of course rendering models is a lot more expensive than rendering sprites. The transformations are a lot more complex for each actor and there's significantly more data to be pushed to the graphics card. With the number of models you use, it just adds up.
Vertex buffers wouldn't help much because the time is lost elsewhere, mostly with setting up the view matrix for each actor. This task is 100% CPU-side.
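To illustrate what per-actor matrix setup means in practice, here is a minimal sketch (hypothetical code, not GZDoom's actual implementation) of building one transform per model actor; this work happens on the CPU every frame, regardless of how fast the GPU is:

```cpp
#include <array>
#include <cmath>
#include <vector>

// One 4x4 model matrix, column-major as OpenGL expects.
using Mat4 = std::array<float, 16>;

// Build a translation * Z-rotation matrix for a single actor.
// This is the kind of per-actor CPU work described above.
Mat4 modelMatrix(float x, float y, float z, float angleRad) {
    float c = std::cos(angleRad), s = std::sin(angleRad);
    return {
        c,  s, 0, 0,
       -s,  c, 0, 0,
        0,  0, 1, 0,
        x,  y, z, 1,
    };
}

// With ~1800 visible model actors, this loop runs 1800 times per
// frame before a single triangle reaches the graphics card.
std::vector<Mat4> buildFrame(int actorCount) {
    std::vector<Mat4> matrices;
    matrices.reserve(actorCount);
    for (int i = 0; i < actorCount; ++i)
        matrices.push_back(modelMatrix(float(i), 0.0f, 0.0f, 0.0f));
    return matrices;
}
```

Sprites skip most of this: they are billboarded quads with far simpler per-actor state, which is consistent with the benchmark numbers above.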
Re: GZDoom performance question
Posted: Fri Nov 09, 2012 12:29
by DoomerMrT
I thought about this: let's say we have a sequence of 200 high-resolution, true-color sprites animating at the full 35 FPS. Couldn't performance be improved with a method like this: we have key frames, and between key frames each image would be compared with the previous one, and only the differences between the two would be drawn over the previous frame. That way an image wouldn't need to be redrawn entirely.
Re: GZDoom performance question
Posted: Fri Nov 09, 2012 12:50
by Graf Zahl
Uh... What?
No, that wouldn't work at all, because the entire scene is completely redrawn each frame. It has to be, or you wouldn't be able to move the camera. Plus, you'd also need to restore the area behind the sprite.
And even if it could be done technically, what you propose would be infinitely more costly than just pushing out the pixels. All the comparisons take time, too. Rendering a polygon is cheap; it's probably the only irrelevant factor here. But any costly operation (and that includes texture setup for a sprite) has an impact.
Bottom line: If you got performance issues on the system you describe in your first post it's time you rethink your editing approach. Most users have significantly weaker systems so what barely works for you will slow most machines down to a crawl.
Re: GZDoom performance question
Posted: Fri Nov 09, 2012 13:00
by Gez
DoomerMrT wrote:I thought about this: let's say we have a sequence of 200 high-resolution, true-color sprites animating at the full 35 FPS. Couldn't performance be improved with a method like this: we have key frames, and between key frames each image would be compared with the previous one, and only the differences between the two would be drawn over the previous frame. That way an image wouldn't need to be redrawn entirely.
That's something that can only work with non-scrolling, solid-colored 2D games. "Okay, the sprite moved from there to here, so let's blank this old area and draw the sprite on this area."
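The 2D technique Gez describes is the classic dirty-rectangle approach. A minimal sketch (hypothetical, not from any engine) of the detection step, which also shows why Graf's objection applies: finding the changed region means comparing pixels, and that comparison itself costs time:

```cpp
#include <cstdint>
#include <vector>

// A software framebuffer: width * height packed RGBA pixels.
struct Frame {
    int width, height;
    std::vector<uint32_t> pixels;
};

// Compare the new frame against the previous one and collect the
// rows that actually changed, so only those would need re-blitting.
// Note the cost: in the worst case this touches every pixel anyway.
std::vector<int> dirtyRows(const Frame& prev, const Frame& next) {
    std::vector<int> rows;
    for (int y = 0; y < next.height; ++y) {
        const uint32_t* a = &prev.pixels[y * prev.width];
        const uint32_t* b = &next.pixels[y * next.width];
        bool changed = false;
        for (int x = 0; x < next.width; ++x)
            if (a[x] != b[x]) { changed = true; break; }
        if (changed) rows.push_back(y);
    }
    return rows;
}
```

In a 3D view where the camera moves, essentially every row changes every frame, so the scan finds nothing to skip and the whole exercise is pure overhead.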
Re: GZDoom performance question
Posted: Fri Nov 09, 2012 18:58
by DoomerMrT
I see, thanks for the explanation. I'll think about how to mod in a way that targets the majority of the userbase without giving up too much detail.
Re: GZDoom performance question
Posted: Sat Nov 17, 2012 11:20
by DoomerMrT
Hi!
Back with a question again:
How does GZDoom render PNG truecolor graphics? How large is the performance impact of, e.g.:
-power of 2 textures, 256x256
-non power of 2, like 349x329
-having transparency in them
Also: are the resources loaded on demand, e.g. when you see them, and then unloaded after a while once they are no longer visible? I ask because then I wouldn't necessarily place a lot of PNG resources in one room where everything is visible.
Re: GZDoom performance question
Posted: Sat Nov 17, 2012 12:40
by Gez
See texture options for details. But basically, textures, sprites, and every other sort of image are converted into an OpenGL format and fed to the OpenGL driver, which passes them to the hardware.
Some hardware does not support NPO2 textures, so those need to be worked around first.
Resources are precached while the level is loaded; however if you use scripts to change textures, the new texture may not have been precached.
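The NPO2 workaround typically means rounding each texture dimension up to the next power of two and padding (or scaling) the image to fit. A sketch of the rounding step (an assumption about the general technique, not GZDoom's exact code):

```cpp
#include <cstdint>

// Round a texture dimension up to the next power of two, the classic
// workaround for hardware without NPO2 support. A 349x329 image ends
// up occupying a 512x512 texture, so the NPO2 source costs noticeably
// more video memory than a 256x256 power-of-two one.
uint32_t nextPow2(uint32_t v) {
    if (v == 0) return 1;
    --v;                       // handle exact powers of two
    v |= v >> 1;  v |= v >> 2; // smear the highest set bit downward
    v |= v >> 4;  v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}
```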
Re: GZDoom performance question
Posted: Mon Nov 19, 2012 10:51
by DoomerMrT
I know developers hate it when non-experts talk about development but....
Gez wrote:Basically, textures and sprites and every other sort of images is converted into an OpenGL format and fed to the OpenGL drivers, who pass them to the hardware.
Some hardware does not support NPO2 textures, so it needs to be worked around first.
All of this sounds like a pretty costly operation when working with a lot of images... couldn't performance be improved with multithreading somehow?
Re: GZDoom performance question
Posted: Mon Nov 19, 2012 14:17
by Graf Zahl
No. It has to be done synchronously, not to mention that OpenGL's multithreading capabilities are severely limited.
A well written GL driver would use multithreading to keep really costly operations out of the main thread - and that's probably the reason why NVidia performs so much better than both AMD and Intel.
All my speed checks strongly suggest that with AMD the main thread does all the work, but with NVidia it merely triggers a worker thread that's waiting to do stuff.
Re: GZDoom performance question
Posted: Thu Nov 29, 2012 18:53
by Nash
Graf Zahl wrote:Of course rendering models is a lot more expensive than rendering sprites. The transformations are a lot more complex for each actor and there's significantly more data to be pushed to the graphics card. With the number of models you use, it just adds up.
Vertex buffers wouldn't help much because the time is lost elsewhere, mostly with setting up the view matrix for each actor. This task is 100% CPU-side.
So the GPU isn't utilized at all when drawing models? That would explain the slowness.