bunkspeed hypershot ???
Hi
I am curious what you guys think about solutions like Hypershot from Bunkspeed, which uses the GPU to render scenes.
The speed is impressive, but I am curious about the realism. I saw some very good images with GI, HDRI IBL, caustics and others, blazing fast.
But isn't there a catch to it?
claas
-- 
C l a a s E i c k e K u h n e n
Artist : Designer : Educator
Assistant Professor Industrial Design
Kendall College of Art and Design
of Ferris State University
- Neobloodline
- Posts: 136
- Joined: Mon Nov 12, 2007 9:54 am
I use my Nvidia 8800 GTS 320 to render with a Blender exporter to Gelato. It's in the early stages, but it's coming along and it's FAST. When they get it working efficiently I think it'll be something to consider. It's nowhere near touching the usefulness of Indigo in its present state, but imagine whipping out animations at mach speed with reflection, refraction, and caustics, which are all supported in Gelato. Using the GPU for rendering would be great, since there are things a GPU can do that a CPU can't even touch: GI, AO, and photon-mapped caustics.
						looks like it could be an implementation of this paper: http://graphics.stanford.edu/papers/allfreqmat/
I didn't say it is something very new,
but the speed is good and they seem to be the first who made
the GPU demos into an actual program that sells.
I am just curious if this has a catch somewhere in terms of realism.
Cars, well, cars are not a great object to study, but the plane interiors
showed some well-lit / illuminated areas.
claas
- 
				Knaxknarke
- Posts: 21
- Joined: Thu Nov 23, 2006 10:14 am
no GPU used
Hypershot does NOT use the GPU. It's a pure CPU renderer.
See at:
http://www.bunkspeed.com/hypershot/products.html
At the right column:
Shader Model 2 is too weak to be of any use; Shader Model 3 may be of some use for some parts (mostly for shading, less for ray shooting). But if you want cross-platform, only GLSL is an option, and the compiler sits in the OpenGL driver, where there are still compatibility issues between AMD and NV.
And every major driver update brings a new bug in the compiler. The compiler runs when the user starts the app! This is bound to cause a lot of different problems for support and bugfixing...
Older graphics boards also don't have enough memory. 256 MB for big screen(s) 2D GUI, or 3D GUI, 3D-APIs (OGL/DX9) double/triple buffering with multisampling, textures, vertex and pixel buffers, display lists, shader programs, framebuffer objects, etc. etc.
There is simply not enough graphics memory to ray trace complex scenes on a GPU with 256 MB VRAM. Big models, big KD-tree, big textures; for example an HDRI environment map at 4096*2048*fp16*4 (the GPUs always use RGBA internally, there is no RGB mode; it's a result of texture caching being hard-wired to the memory controller and bus) makes 64 MB. One quarter of your VRAM just for a background texture!
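The arithmetic above is easy to check; a minimal sketch (the helper name and the 256 MB board are just for illustration):

```python
# Sketch: the VRAM cost of an HDRI environment map as described above.
# GPUs store textures as RGBA internally (no packed RGB), so an fp16
# environment map costs width * height * 4 channels * 2 bytes.

def texture_vram_bytes(width: int, height: int,
                       channels: int = 4, bytes_per_channel: int = 2) -> int:
    """Uncompressed texture footprint in bytes (no mipmaps)."""
    return width * height * channels * bytes_per_channel

hdri = texture_vram_bytes(4096, 2048)                   # fp16 RGBA env map
print(hdri // 2**20, "MB")                              # -> 64 MB
print(f"{hdri / (256 * 2**20):.0%} of a 256 MB board")  # -> 25%
```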
You surely use a render node with 2-4 GB RAM. A lot of graphics people use 64-bit workstations, because the 2 GB/4 GB limit of Win32 hurts their rendering size. It's no problem to crash Maya's internal renderer or mental ray on 32-bit Windows with a 3 million poly model. And this is a _small_ model, if you handle CAD data.
The biggest graphics boards have 1-2 GB VRAM. With no MMU, no paging, no protection, no virtual memory. Things that have been a given on PC CPUs since the 386 from 1985 (ok, M$ needed >10 years to use it in their main OS, but it was there all the time - on PC Unices or OS/2 ...).
Then there is still the PCIe 16x bottleneck for interaction and data transfer between CPU and GPU. So hybrid CPU/GPU rendering is still difficult, and the latency kills finer granularity in control flow.
CUDA is the first chance to do ray tracing completely on GPUs. The Quadro FX 5600 with 1.5 GB VRAM is big enough for medium models, but it is too expensive: for 3000 US$ you'd better buy a dual quad-core Xeon PC with 8 GB RAM.
The same in the tech_spec.pdf, FEATURES AT-A-GLANCE:
[...]
- Multi-threaded architecture takes full advantage of dual and quad core CPUs
- CPU not GPU based
- near linear performance scale with additional CPUs
It would be much too complicated to support a big fraction of the graphics cards out in the real world. Nvidia CUDA and AMD CTM are capable of running complex algorithms on the GPU, but they are not in common use (newest GPUs + special drivers required).
F.ip2 wrote: but the speed is good and they seem to be the first who made the GPU demos into an actual program that sells.
gelato has already been mentioned; it predates them by several years. i've heard, however, that its performance/cost characteristics are worse than other industry players', since you have to buy an expensive license and then expensive quadro cards too (cf. one license on a monster multicore machine).
F.ip2 wrote: jesus how do they make it so fast ...
the rendering precomputation is one thing, but interactive relighting is something else. effectively what they are doing is compressing a lot of radiance and surface reflection information and reconstructing it in realtime (this is why it's been popular with games; it's even part of the directx api).
the first precomputed radiance transfer (prt) method to gain popularity was kautz and sloan's spherical harmonic based method. it was, however, only a low-frequency approximation, and judging from appearances one of the newer all-frequency methods by ramamoorthi is implemented here.
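The low-frequency PRT idea described above can be sketched in a few lines: project the incident lighting onto the first 9 real spherical harmonics offline, then reconstruct it at runtime with a cheap 9-term dot product. This is only an illustration of the principle (the toy light and sample count are made up), not anyone's shipping code:

```python
# Sketch of low-frequency precomputed radiance transfer (Kautz/Sloan style):
# project incident lighting onto 9 real spherical harmonics, then
# reconstruct it per direction with a 9-term dot product at runtime.
import numpy as np

def sh9(d):
    """First 9 real SH basis functions evaluated at unit direction(s) d."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),                 # l=0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # l=1
        1.092548 * x * y, 1.092548 * y * z,         # l=2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ], axis=-1)

rng = np.random.default_rng(0)
# Uniform directions on the sphere (Monte Carlo quadrature).
dirs = rng.normal(size=(200_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Toy "environment light": bright towards +z, dark elsewhere.
radiance = np.maximum(dirs[:, 2], 0.0)

# Precompute: c_i = integral of L(w) * Y_i(w) over the sphere,
# estimated as (4*pi/N) * sum of L * Y_i.
coeffs = (4.0 * np.pi / len(dirs)) * sh9(dirs).T @ radiance

# Runtime: reconstruct lighting in any direction with 9 mul-adds.
up = np.array([0.0, 0.0, 1.0])
approx = sh9(up) @ coeffs
print(approx)  # close to 1.0, but band-limited (low-frequency only)
```

The runtime cost is independent of the environment's resolution, which is exactly why this class of method is fast enough for interactive relighting.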
- 
				Knaxknarke
- Posts: 21
- Joined: Thu Nov 23, 2006 10:14 am
F.ip2 wrote: ops
jesus how do they make it so fast ...
Are you sure that it's much faster than other fast renderers? I saw a demo of Hypershot and it didn't look too fast compared to the interactive preview in Modo 301 or FPrime...
About the "how":
I think it's heavy optimization of the code. Cache-coherent memory layout, packet traversal (ray bundles) - look at ompf.org for hard-core ray tracing optimization. Adaptive sampling of the HDRI environment and the material BRDFs. They specialize in one object illuminated by an HDRI, so 128-256 adaptive samples on the HDRI should do in most cases.
You can't compare the speed of a modern biased renderer with the dated Mental Ray (what most people use with Maya/Max/XSI) or with unbiased rendering.
BTW: I'm very curious about Mental Ray 4. Rumor has it that they will sacrifice some shader plugin compatibility to do a totally new design. They hired some smart guys with PhDs in real-time ray tracing in the last year, so maybe it will be as fast as Hypershot, Modo, ...
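"Adaptive sampling of the HDRI environment" usually means importance sampling the map by luminance; a toy sketch of the strategy (the 8x16 map and sample counts are made up, this is not Hypershot's implementation):

```python
# Sketch of importance sampling an HDRI environment map: build a discrete
# CDF over pixel luminance and draw samples from it, so bright pixels
# (the sun) are sampled far more often than dark sky.
import numpy as np

rng = np.random.default_rng(1)

# Toy 8x16 "environment map": dim sky plus one very bright sun pixel.
env = np.full((8, 16), 0.1)
env[2, 5] = 1000.0   # the sun

# Flatten, build PDF/CDF over luminance.
lum = env.ravel()
pdf = lum / lum.sum()
cdf = np.cumsum(pdf)

# Inverse-CDF sampling: uniform u -> pixel index.
u = rng.random(10_000)
idx = np.searchsorted(cdf, u)

# Nearly all samples land on the sun pixel.
sun_index = 2 * 16 + 5
frac_on_sun = np.mean(idx == sun_index)
print(f"{frac_on_sun:.1%} of samples hit the sun")

# The estimator stays unbiased: E[lum/pdf] equals the true total.
estimate = np.mean(lum[idx] / pdf[idx])
print(estimate, lum.sum())
```

With a handful of such samples you capture almost all of the illumination, which is why a few hundred adaptive samples on the HDRI can suffice.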
I remember that Modo was quite fast with GI rendering.
The point I am very interested in is how real/accurate the renderings with Hypershot are. For example, VRay also seems to produce quite excellent results.
Don't get me wrong, this is not about questioning Indigo, no no.
I am just curious. Mainly because I am looking out for new software for our department, and for future trends, to support our students the best.
Claas
- 
				Knaxknarke
- Posts: 21
- Joined: Thu Nov 23, 2006 10:14 am
lyc wrote: gelato has already been mentioned, it predates them by several years. i've heard, however, that its performance/cost characteristics are worse than other industry players, since you have to buy an expensive license and then expensive quadro cards too (cf. one license on a monster multicore machine).
The basic version of Gelato is free, but the real-time relighting engine isn't. It only needs a newer Nvidia GPU; they merely recommend Quadro FX because that is what is tested with the Quadro drivers. So a Geforce 8800 xxx should also be fine, and isn't too expensive.
- 
				Knaxknarke
- Posts: 21
- Joined: Thu Nov 23, 2006 10:14 am
F.ip2 wrote: The point I am very interested in is how real/accurate the renderings with Hypershot are. For example, VRay also seems to produce quite excellent results.
The newer Final Render is also fast and doesn't show too much (biased) low-frequency noise. AFAIK these newer biased renderers all use some randomized quasi-Monte Carlo sampling strategies to do fast and accurate sampling of light paths. Some kind of bias compensation for photon mapping, and so on. There are a lot of tricks in the academic GI literature.
The problem is to find the ones that really work, that are reliable and fast, and to combine them in a useful way. It would be hard to prove that they always work or that they are always correct. But most will look real enough - the eye can be fooled. As long as you don't try to do lighting simulation for buildings, car headlights and so on, it should be ok.
F.ip2 wrote: Don't get me wrong, this is not about questioning Indigo, no no.
Pathtracing+MLT is time-proven and well understood. It will work correctly. But it isn't the only way to get good-looking GI renderings. And it isn't the fastest way.
F.ip2 wrote: I am just curious. Mainly because I am looking out for new software for our department, and for future trends, to support our students the best.
I think Hypershot is quite expensive for what it does. But there is surely an academic license or classroom license.
Knax
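The quasi-Monte Carlo strategies mentioned above are easy to illustrate: a Halton sequence covers the sample domain far more evenly than pseudo-random numbers, which is why renderers converge faster with it. A minimal sketch, not any particular renderer's sampler:

```python
# Sketch of a quasi-Monte Carlo sample generator: the Halton sequence.
# Each coordinate is the radical inverse of the sample index in a
# distinct prime base, giving well-spread, deterministic sample points.

def halton(index: int, base: int) -> float:
    """Radical inverse of `index` in the given base, in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# 2D sample points: base 2 for x, base 3 for y (bases must be coprime).
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
for x, y in points:
    print(f"({x:.3f}, {y:.3f})")
# e.g. the first point is (0.500, 0.333)
```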