8800 GTX Blender performance


Post by deltaepsylon » Mon Sep 10, 2007 2:18 am

My video card (an eVGA 8800 GTX) seems to start lagging too quickly with high-poly scenes, so could someone give me a reference poly count at which it starts to lag, using a UV sphere?
-----
P5N32-C Coffee machine overclocked to 4 cups a minute! still not enough...


Post by alex22 » Mon Sep 10, 2007 3:19 am

In Edit Mode I can go to about 120k polys, then it lags very badly (Fraps says around 5 fps). In Object Mode I can go up to 1000k before I get the same frame rate. But it could be the CPU rather than the GPU causing the problem: my Core 2 Duo is at almost full load on both cores when rotating the view.


Post by oogsnoepje » Mon Sep 10, 2007 11:34 am

Blender is a typical example of a program that misuses OpenGL horribly. That's why it lags so quickly. It flushes the pipeline far too often for trivial things, uses immediate mode to render everything (from the user interface to the models), and does no state sorting whatsoever.

Kill the immediate mode, I'd say. Kill it, murder it to death, burn it ritually.. well, you get my point.

There are a few developers working on making Blender use vertex arrays for drawing meshes, IIRC. But who knows when that code will see the light of day.
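
For illustration, the difference at the GL level looks roughly like this (a minimal sketch in plain OpenGL 1.1, not Blender's actual code; the function names and mesh arrays are made up for the example):

    #include <GL/gl.h>

    /* Immediate mode, roughly how Blender draws today: one or two GL
       calls per vertex, every single frame, so the driver can never
       batch anything. (vert_count is assumed to be a multiple of 3.) */
    void draw_immediate(const float (*verts)[3], const float (*normals)[3],
                        int vert_count)
    {
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < vert_count; i++) {
            glNormal3fv(normals[i]);
            glVertex3fv(verts[i]);
        }
        glEnd();
    }

    /* Vertex arrays: the whole mesh goes to the driver in a single
       draw call, which it can upload and render as one batch. */
    void draw_vertex_arrays(const float *verts, const float *normals,
                            const unsigned int *indices, int index_count)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glNormalPointer(GL_FLOAT, 0, normals);
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

One draw call instead of hundreds of thousands of per-vertex calls is exactly why batched drawing scales to high poly counts.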

I like Blender though :P but that's the reason for the slowness :)


Post by zsouthboy » Mon Sep 10, 2007 4:16 pm

Yeah, Blender doesn't use OpenGL in the most efficient way; batching would work so much better :(


Post by Heavily Tessellated » Tue Sep 11, 2007 5:16 am

I highly suggest you try Linux and Xorg with the closed-source NVIDIA drivers. On an X2 4200+ with a stock 7900 GS Extreme (600 MHz core / 1.6 GHz DDR3), which in its day was a nifty gaming card but is now doing render work, I can have 250k vertices active and it's still effortlessly smooth, at least 30 fps. Even 1M is tolerable, I'd say 10 fps. At 2.7M verts for the Stanford Dragon it's pretty bad, 1 fps or so. I've loaded and manipulated Lucy (58M verts) and that was no picnic; better have 4 GB of RAM minimum and a non-Intel system. Still, it was only about 5 seconds of lag, which, all things considered, is not too shabby!

The point being that your 8800 GTX runs circles around a 3-year-old 7900 GS Extreme. Try Linux! It must be more efficient at memory transfers in the kernel, and/or its OpenGL subsystem is just special.


Post by oogsnoepje » Tue Sep 11, 2007 5:33 am

Faster memory transfers won't help much in this situation and there's nothing more special about OpenGL on Linux when you're using the NVIDIA driver (same code base as for Windows).


Post by zsouthboy » Tue Sep 11, 2007 8:43 am

I'm using Blender in Linux and win32 on a regular basis, on the same machines even, and I notice very little difference in viewport performance too.


Post by Kram1032 » Tue Sep 11, 2007 8:51 am

Better or worse?

In general, Blender on Linux is supposed to be able to handle more faces than Blender on Windows... so I guess it should be better on Linux, right?


Post by oogsnoepje » Tue Sep 11, 2007 11:08 am

For the people who don't understand the issue:

- Imagine a highway full of cars
- The highway is the path to your video card, and the cars are data (vertices, triangles)

Now everything flows along nicely if the cars all drive at the same speed, in orderly lanes. Each lane of cars has its own color. Make them all drive faster and every car's speed goes up, and vice versa. This is essentially comparable to batched data.

What Blender does is send out one car at a time, and after ten cars have gone by, it wipes the whole road clean for the next ten cars, just because they're a different color. Those cars drive over it, after which the road is wiped again, and so on.

If you had to send one million people in their cars to work along that road, which approach do you think would be faster?

I hope that explains why faster memory transfers are useless without batching the data first.
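
In GL terms, the analogy maps to something like this (again just a sketch with a hypothetical data layout, not Blender's real drawing code; all names and parameters are invented for the example):

    #include <GL/gl.h>

    /* "Ten cars, then wipe the road": tiny batches, a state change per
       group, and a flush that stalls the pipeline every time.
       verts holds group_count * tris_per_group triangles, 9 floats each,
       laid out contiguously per group; group_colors holds one RGB per group. */
    void draw_like_blender(const float *verts, const float *group_colors,
                           int group_count, int tris_per_group)
    {
        for (int g = 0; g < group_count; g++) {
            glColor3fv(&group_colors[g * 3]);   /* new color = state change */
            glBegin(GL_TRIANGLES);
            for (int t = 0; t < tris_per_group * 3; t++)
                glVertex3fv(&verts[(g * tris_per_group * 3 + t) * 3]);
            glEnd();
            glFlush();                          /* "wipe the road" */
        }
    }

    /* Batched: state-sort first (each color's triangles are contiguous),
       then send the whole convoy down the highway, one call per group. */
    void draw_batched(const float *verts, const float *group_colors,
                      int group_count, int tris_per_group)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        for (int g = 0; g < group_count; g++) {
            glColor3fv(&group_colors[g * 3]);
            glDrawArrays(GL_TRIANGLES, g * tris_per_group * 3,
                         tris_per_group * 3);
        }
        glDisableClientState(GL_VERTEX_ARRAY);
    }

Same vertices, same colors; the batched version just sorts by state once and skips the flush after every little group, so the driver can stream each group in one go.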


Post by Heavily Tessellated » Tue Sep 11, 2007 2:34 pm

Hmm. Then I must be lying. Let me run a test: new scene, default 32/32 UV sphere, select all, subdivide 4 times: 253,954 verts, 262,144 faces. Rotating...

Yes. OK. I lied. 30 fps isn't right, but it's not 5 fps like alex22's either... Still, no, it's not "effortlessly smooth". Sorry for the misinformation. (I'd guess 10-15-ish fps, not bad at all, but not smooth as I previously stated.)
oogsnoepje wrote:There are a few developers working on making Blender use vertex arrays for drawing meshes, iirc. But who knows when that code will see the light.
So that button under System & OpenGL is just for decoration at the moment? I've never turned it on, just like I've never turned MipMaps off. I also tweak the card for best quality before spawning Blender, not performance:

    nvidia-settings -a AllowFlipping=0 -a OpenGLImageSettings=0 -a SyncToVBlank=1

because I never want to see torn edges or flickering vertices, ever.

Is there such a utility for Windows, where you can change the driver settings as part of a shell script? deltaepsylon, you could try OpenGLImageSettings=3 or higher... it could provide some boost, but considering we're after photorealistic rendering...
zsouthboy wrote:I'm using Blender in Linux and win32 on a regular basis, on the same machines even, and I notice very little difference in viewport performance too.
I can't speak for Blender's performance on Windows, as once the card was downgraded from gaming duty it never saw Windows again. (Without turning this into an anti-M$ thread: how many non-corporate users use Windows as nothing more than a glorified game loader? Ah.)


Post by oogsnoepje » Tue Sep 11, 2007 3:48 pm

Heavily Tessellated wrote:I'd guess 10-15-ish fps, not bad at all
Considering that it could be ten times faster, I think it is quite bad actually. And those polygons are just using the standard shading path as well, nothing complex these days.
Heavily Tessellated wrote: So that button under System & OpenGL is just for decoration at the moment? [...]
As the tooltip says, that setting can make Blender unreliable, and once you've seen Blender's source code you'll instantly believe that. It doesn't seem to do much anyway, as far as I can see, but I haven't checked that code path yet, so I could be wrong :) What I do know is that everything I've seen for drawing in Blender is done in immediate mode.

The settings you mention aren't for improving rendering quality, nor for better memory transfer speed, nor for better rendering speed. In particular, the option to synchronize with the vertical blanking interval is just there to make moving parts of your screen appear smoother, by not swapping in a new image until the previous one has been drawn completely on your monitor. If you're using an LCD monitor you can leave that option out, by the way; it won't cause torn edges or flickering vertices.
Heavily Tessellated wrote: how many non-corporate users use Windows as nothing more than a glorified game loader?
The only games I play are Tetris on my mobile phone and those with my girlfriend :)


Post by Heavily Tessellated » Tue Sep 11, 2007 5:27 pm

oogsnoepje wrote: If you're using an LCD monitor you can leave that option out, by the way; it won't cause torn edges or flickering vertices.
Nah, can't. The graphics guy in me refuses to give up calibrated phosphor CRTs until I can afford professional level LCDs. So my Blender station is bending the table with two 21" glass monsters. :(

No, not for improving Blender's rendering quality per se, but the image quality presented to my eyeballs. I mean, 1600x1200 is about average these days; my laptop is 1920x1200 and the screen is gorgeous, but it's still not as smooth and easy to stare at for 8 hours straight as a high-refresh CRT. Maybe one day... heh, maybe one day I'll just jack the computer into the interface port on my forehead.

My point, though, was that you can control the driver from the command line in Linux, I guess like the Performance <--> Quality sliders on the driver panel in Windows, and that you might be able to squeeze another few fps out of it, if it's taking OpenGL shortcuts to gain the speed increase.


Post by Kram1032 » Wed Sep 12, 2007 5:09 am

1600*1200 = average? O.o
Yours is 1920*1200? O.o.O.O.o.O


Post by oogsnoepje » Wed Sep 12, 2007 6:02 am

Mine is bigger.
