Maybe some smart programmer type could write up a program to take advantage of the new Nvidia CUDA...
http://www.nvidia.com/object/cuda_learn.html
Nvidia CUDA sounds pretty fast (think 10-50x as fast as a dual core), and some of the applications already developed use Python.
http://fastra.ua.ac.be/en/index.html (video showing supercomputing on GPUs)
http://www.txcorp.com/technologies/GPULib/ (Python acceleration etc.)
I'm no expert, but from looking it over a bit, it sounds like if some smart programmer type spent a bit of time on it, it could prove extremely useful.
A very insightful resource on CUDA programming: http://www.biomedcentral.com/1471-2105/8/474
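For anyone wondering what CUDA code actually looks like, here's a minimal toy sketch of my own (not taken from any of the projects linked above; the kernel name and numbers are made up) showing the kind of data-parallel work those 10-50x claims are about: thousands of lightweight threads, each handling one element of a big array.

[code]
// Toy CUDA sketch: one thread per array element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale_add(const float* a, const float* b, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        out[i] = 2.0f * a[i] + b[i];                // each thread handles one element
}

int main()
{
    const int n = 1 << 20;
    float *a, *b, *out;
    cudaMallocManaged(&a,   n * sizeof(float));     // unified memory keeps the sketch short
    cudaMallocManaged(&b,   n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale_add<<<blocks, threads>>>(a, b, out, n);   // launch ~1M threads on the GPU
    cudaDeviceSynchronize();

    printf("out[0] = %f (expect 4.0)\n", out[0]);
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
[/code]

The big speedups only show up when the problem really does split into that many independent little pieces; whether a renderer's inner loops fit that shape is the real question.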
Accelerate Indigo 10-50x with CUDA ...
lol, that isn't beast at all
I like the "only 4000 euros". Technically all they did was splurge on graphics, right? I mean, the rest of the rig is stock; they just strapped on 4 GPUs. Don't see why it needs its own webpage.
Anyway, dig the screenshot with the available screen connections sliding off into infinity XD
nice find.
greetz
mat


It's dual-GPU cards, so there are 8 GPUs.
And it's only good for highly graphics-related stuff...
And double-precision graphics cards are far too expensive, as they're not standard gaming cards but special ones for firms and such...
And I've got an ATI and no GeForce.

There are definitely some tasks in Indigo, and in almost every other application's code, that would benefit speed-wise if they were thrown onto the GPU (the GPU and CPU can work at the same time). But how significant are those tasks? Are we talking about a speed gain of 1%, 5%, 10% or even 300% for the whole application? Who knows.
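To illustrate the "GPU and CPU can work at the same time" point: a CUDA kernel launch returns immediately, so the CPU is free to keep doing its own work until it explicitly synchronizes. A rough sketch (nothing to do with Indigo's actual code; gpu_task and cpu_task are just placeholders I made up):

[code]
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder for whatever gets offloaded to the GPU.
__global__ void gpu_task(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i] + 1.0f;
}

// Placeholder for work the CPU keeps doing in the meantime.
void cpu_task()
{
    double acc = 0.0;
    for (int i = 1; i <= 1000000; ++i) acc += 1.0 / i;
    printf("CPU finished its own work (acc = %f)\n", acc);
}

int main()
{
    const int n = 1 << 20;
    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    gpu_task<<<(n + 255) / 256, 256>>>(d_data, n);  // returns right away
    cpu_task();                                     // overlaps with the GPU work
    cudaDeviceSynchronize();                        // only now wait for the GPU

    printf("GPU task done as well\n");
    cudaFree(d_data);
    return 0;
}
[/code]

How much that overlap buys you for the whole application is exactly the 1% vs. 300% question.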
But the future of GPU computing is brighter than ever, since CUDA is evolving pretty fast - just search the web and you'll find applications like insanely speedy CFD simulations and even fast brute-forcing of password-protected archives. I guess ATI's Close To Metal (CTM) will catch up with CUDA soon, once enough people start digging into it.
Yes, and damn, this is another headache for programmers. Not only do you have to be able to make your applications multi-threaded, which can be a really serious pain in the ass, but in about a year or two it's going to be a must to also be able to use the GPU's processing power (for CUDA and CTM at the same time).
The future seems really bright... for us consumers... not for programmers.

Cache coherency is TERRIBLE with CUDA and CTM (ATI's version of CUDA).
Walking down a kd-tree with a GPU would be painful.
However, the marketing departments of both companies are great! How many "normal" people have the idea in their head that a GPU is somehow 10-50x faster than a normal CPU? (Yes, they really are 10-50x faster, but only on some very specific tasks: streaming data processing, where each processing unit works on something different in small chunks. Blazing fast!)
No offense, guys. It's just amazing how successful their marketing has been.
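To put some (hypothetical, heavily simplified) code behind that contrast: the first kernel below is the case the marketing numbers come from, where neighbouring threads read neighbouring memory and never branch differently; the second mimics a kd-tree-style traversal, where every thread chases its own pointers, the reads are scattered, and threads in a warp diverge on data-dependent branches. Both kernels are made up for illustration only.

[code]
#include <cuda_runtime.h>

// The good case: pure streaming. Coalesced loads, no divergence.
__global__ void stream_kernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * 0.5f + 1.0f;
}

// The painful case: each thread walks its own chain of nodes.
struct Node { int next; float value; };

__global__ void chase_kernel(const Node* nodes, const int* starts,
                             float* out, int n, int depth)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int cur = starts[i];
    float acc = 0.0f;
    for (int d = 0; d < depth && cur >= 0; ++d) {
        acc += nodes[cur].value;                    // scattered, uncoalesced reads
        cur = (acc > 0.0f) ? nodes[cur].next : -1;  // data-dependent branch: warp divergence
    }
    out[i] = acc;
}

int main()
{
    const int n = 1 << 16, depth = 32;
    float *in, *out;
    Node  *nodes;
    int   *starts;
    cudaMallocManaged(&in,     n * sizeof(float));
    cudaMallocManaged(&out,    n * sizeof(float));
    cudaMallocManaged(&nodes,  n * sizeof(Node));
    cudaMallocManaged(&starts, n * sizeof(int));
    for (int i = 0; i < n; ++i) {
        in[i]          = float(i);
        nodes[i].next  = (i * 9973) % n;                 // scrambled links -> scattered access
        nodes[i].value = (i % 2 == 0) ? 1.0f : -1.0f;    // mixed signs -> divergent branches
        starts[i]      = i;
    }

    stream_kernel<<<(n + 255) / 256, 256>>>(in, out, n);
    chase_kernel <<<(n + 255) / 256, 256>>>(nodes, starts, out, n, depth);
    cudaDeviceSynchronize();

    cudaFree(in); cudaFree(out); cudaFree(nodes); cudaFree(starts);
    return 0;
}
[/code]

On real hardware the second kernel tends to be dramatically slower per element than the first, and a real kd-tree walk adds a traversal stack and bounding-box tests on top of this, which only makes it worse.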
Actually, the marketing of CUDA is pretty accurate, so if people think GPUs will speed up any given task, that's simply their own misunderstanding of the issue.
But the whole CUDA thing definitely is a success for a number of projects, like those guys with the CT models, high-def movie encoding, Elcomsoft's password brute-forcing, and a lot of physics simulations.
Just because it's not good for all kinds of rendering techniques at the moment does not mean the whole thing is a dud... it's actually pretty good, and it will only get better as the software and hardware evolve.

It isn't really a question of whether a GPU can accelerate rendering or not; I've run Gelato on my 8800 GTS 320 and it cranks out the frames (there's a decent Blender exporter for Gelato). I understand it's a different animal to a GI renderer, but from what I've read, CUDA is best at crunching a long stream of calculations, as opposed to short bursts. Isn't that exactly what a GI renderer needs?

There will always be naysayers, but it seems that someone with a little imagination could really make use of this technology, seeing as it's free and there's a huge number of Nvidia chips around. Could be I'm way off base... but I doubt it, heh. Indigo is such a useful tool; the main drawback is that it takes so much time to render a single frame. Any possibility of enhancing performance seems worth a bit of effort to explore, especially since there's a chance of cutting render times in half (or more).
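For what it's worth, the "long stream of calculations" shape does map naturally onto one-thread-per-pixel kernels. Here's a toy sketch (emphatically not Indigo's algorithm; the shade kernel and its little RNG are stand-ins I invented) where each pixel's thread grinds through hundreds of Monte Carlo samples:

[code]
#include <cstdio>
#include <cuda_runtime.h>

// Tiny linear congruential RNG; good enough for a toy estimator.
__device__ float lcg(unsigned int& state)
{
    state = state * 1664525u + 1013904223u;
    return (state >> 8) / 16777216.0f;   // map the top 24 bits to [0, 1)
}

// One thread per pixel, each doing a long run of independent arithmetic.
__global__ void shade(float* image, int width, int height, int samples)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    unsigned int rng = 1u + (unsigned int)(y * width + x);
    float sum = 0.0f;
    for (int s = 0; s < samples; ++s) {
        // Stand-in for tracing a path: lots of arithmetic per pixel.
        float u = lcg(rng), v = lcg(rng);
        sum += (u * u + v * v < 1.0f) ? 1.0f : 0.0f;   // estimates pi/4
    }
    image[y * width + x] = sum / samples;
}

int main()
{
    const int w = 512, h = 512, spp = 256;
    float* image;
    cudaMallocManaged(&image, w * h * sizeof(float));

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    shade<<<grid, block>>>(image, w, h, spp);
    cudaDeviceSynchronize();

    printf("pixel(0,0) ~ pi/4: %f\n", image[0]);
    cudaFree(image);
    return 0;
}
[/code]

The catch, as pointed out earlier in the thread, is that a real GI renderer's per-sample work means traversing acceleration structures, which is exactly the divergent, scattered-memory pattern the GPU handles worst, so the speedup is much harder to get than this toy suggests.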