GPU rendering made easy :) So they say.

Discuss stuff not about Indigo.
Post Reply
9 posts • Page 1 of 1

User avatar
Zom-B
1st Place 100
Posts: 4701
Joined: Tue Jul 04, 2006 4:18 pm
Location: ´'`\_(ò_Ó)_/´'`
Contact:

Post by Zom-B » Wed Feb 07, 2007 2:24 am

Actually, there is a (free!!!) GPU-based renderer out there called RTSquare,
which should work with every "old" PS2-compatible (Pixel Shader 2.0) graphics card :shock:
Some speed tests can be found here

There is also the Parthenon renderer, which does global illumination on the GPU quite fast...
(more info & download link HERE)

Hardware rigs also exist that are configured to use GPU power to speed things up.
(I don't know much about these in detail... so don't blame me if I'm wrong :wink: )

The free program Gelato from NVIDIA uses, as you surely guessed, the GPU for rendering.

The Vidro renderer uses GPU power to speed things up too (beware, it's Japanese :wink: )

Not a GPU, but a nice co-processor

A good source for general-purpose computation using graphics hardware is GPGPU.org

And finally, there's the CUDA SDK, to let your (NVIDIA) graphics card understand the C language!


Again, I would love to see a call for donations to buy a "kick ass" GF 8800
for Ono, to make such GPU-based development possible,
even if the danger that he starts playing games is acute...
Check this video — that's sex for my eyes :shock:
Last edited by Zom-B on Mon Feb 19, 2007 6:11 am, edited 1 time in total.
polygonmanufaktur.de

User avatar
eman7613
Posts: 597
Joined: Sat Sep 16, 2006 2:52 pm

Post by eman7613 » Fri Feb 09, 2007 3:27 am

I have an 8800 GTX in my new computer; if Ono wants, I can test-run the code for him. I don't usually play with C, so :/ ... but yeah, I can test it.
Yes i know, my spelling sucks

User avatar
Zom-B
1st Place 100
Posts: 4701
Joined: Tue Jul 04, 2006 4:18 pm
Location: ´'`\_(ò_Ó)_/´'`
Contact:

Post by Zom-B » Fri Feb 09, 2007 5:38 am

eman7613 wrote:I have an 8800 GTX in my new computer; if Ono wants, I can test-run the code for him. I don't usually play with C, so :/ ... but yeah, I can test it.
Actually, I don't think it's fun to code stuff that you can't debug yourself... :wink:

I'm buying an 8800 in April, after exams and when I'm back from Spain,
so then I would be a possible tester too 8)
polygonmanufaktur.de

User avatar
eman7613
Posts: 597
Joined: Sat Sep 16, 2006 2:52 pm

Post by eman7613 » Fri Feb 09, 2007 8:42 am

ZomB wrote:
eman7613 wrote:I have an 8800 GTX in my new computer; if Ono wants, I can test-run the code for him. I don't usually play with C, so :/ ... but yeah, I can test it.
Actually, I don't think it's fun to code stuff that you can't debug yourself... :wink:

I'm buying an 8800 in April, after exams and when I'm back from Spain,
so then I would be a possible tester too 8)
Just so you know, the "shining" textures with AF aren't actually fixed; they are still a problem :( Had I known that, I would have waited for the R600.
Yes i know, my spelling sucks

User avatar
zsouthboy
Posts: 1395
Joined: Fri Oct 13, 2006 5:12 am

Post by zsouthboy » Fri Feb 09, 2007 9:53 am

MLT path tracing is a different workload from rasterization.

Radiance: I believe DX10 requires single-precision floats. More than likely it could be worked around, regardless.

User avatar
oodmb
Posts: 271
Joined: Thu Oct 26, 2006 5:39 am
Location: USA
Contact:

Post by oodmb » Fri Feb 09, 2007 4:09 pm

GPU precision currently seems to be a very large problem. If you have ever played a video game, you might have noticed that the effects are very good but sacrifice virtually all quality. From what I have seen, most GPU renderers that render on a GeForce or other gaming graphics card account for this by using the CPU together with the graphics card; the GPU just speeds things up a slight amount. You can, however, get a graphics card specifically made for precision. A while back I heard about a ray-tracing card specifically made for shooting photons; unfortunately it was pretty expensive, and I think that by now it might be out of date.

It might be interesting to try to render using the Ageia physics card: it's specifically made for physics calculations, and light is vectors and stuff. It's also highly parallel and doesn't seem to sacrifice quality.
a shiny monkey is a happy monkey

User avatar
Zom-B
1st Place 100
Posts: 4701
Joined: Tue Jul 04, 2006 4:18 pm
Location: ´'`\_(ò_Ó)_/´'`
Contact:

Post by Zom-B » Sun Feb 18, 2007 11:21 pm

oodmb wrote:GPU precision currently seems to be a very large problem. If you have ever played a video game, you might have noticed that the effects are very good but sacrifice virtually all quality. From what I have seen, most GPU renderers that render on a GeForce or other gaming graphics card account for this by using the CPU together with the graphics card; the GPU just speeds things up a slight amount. You can, however, get a graphics card specifically made for precision.
The "poor" precision GPUs achieve in video games is 99% about speed and game performance; nobody would play a visually perfect game where you wait 3 minutes for each frame :wink:

The point of GPU rendering isn't to use some hard/soft-wired functionality
of your graphics card to render an effect!
It's about using the pure computation power... the ability to perform pure mathematical and/or C++ (or whatever language) operations (on DX10-compatible cards!!!)!

Until now, most GPU-accelerated programs have used operations based on Shader Model 2, often for image-based work in video editing...

To bring this whole thing down to a single point:

It's about using state-of-the-art DX10 GPUs as (limited/optimized) co-processors,
to take over or support parts of the CPU's work with 100% identical results!
polygonmanufaktur.de

User avatar
eman7613
Posts: 597
Joined: Sat Sep 16, 2006 2:52 pm

Post by eman7613 » Mon Feb 19, 2007 6:10 am

To reinforce what ZomB is saying: video cards are basically giant floating-point calculators, and the high-end ones can actually do many more floating-point operations per second than a CPU can. The reason they don't always look so great is that real-time video is a constant game of optimizations and shortcuts. That is why you adjust AA and AF in a game: it increases accuracy and quality, but takes longer. If you turned off all those optimizations and shortcuts, you would have seconds per frame instead of frames per second.

EDIT: A perfect example of this is NVIDIA's Gelato and ATI's version of Folding@home (and for something like that they can't just say "oops, it's not so accurate on the numbers", since it is actually used for scientific stuff that is beyond me).
Yes i know, my spelling sucks

