bunkspeed hypershot ???

General questions about Indigo, the scene format, rendering etc...
F.ip2
Posts: 160
Joined: Thu Aug 10, 2006 11:05 am

bunkspeed hypershot ???

Post by F.ip2 » Thu Nov 29, 2007 4:17 pm

Hi

I am curious what you guys think about solutions like Hyershot from Bunkspeed, which uses the GPU to render scenes.

The speed is impressive, but I am curious about the realism. I saw some very good images with GI, HDRI IBL, caustics and more, all rendered blazing fast.



Ono, isn't there a catch to it?


claas
--
C l a a s E i c k e K u h n e n
Artist : Designer : Educator

Assistant Professor Industrial Design
Kendall College of Art and Design
of Ferris State University

Neobloodline
Posts: 136
Joined: Mon Nov 12, 2007 9:54 am

Post by Neobloodline » Thu Nov 29, 2007 7:10 pm

I use my Nvidia 8800gts320 to render with a Blender exporter to Gelato. It's in the early stages, but it's coming along and it's FAST. When they get it working efficiently I think it'll be something to consider. It's nowhere near touching the usefulness of Indigo in its present state, but imagine whipping out animations at mach speed with reflection, refraction, and caustics, all of which Gelato supports. Using the GPU for rendering would be great, since there are things a GPU can do that a CPU can't even touch: GI, AO, and photon-mapped caustics.

SmartDen
Developer
Posts: 999
Joined: Fri Oct 13, 2006 10:58 pm
Location: Canary Islands
Contact:

Post by SmartDen » Thu Nov 29, 2007 8:46 pm

I couldn't find it on Google. Some links?

Zom-B
1st Place 100
Posts: 4701
Joined: Tue Jul 04, 2006 4:18 pm
Location: ´'`\_(ò_Ó)_/´'`
Contact:

Post by Zom-B » Fri Nov 30, 2007 12:51 am

SmartDen wrote: I couldn't find it on Google. Some links?
Here you go...
F.ip2 missed a "p" ;-)
polygonmanufaktur.de

zsouthboy
Posts: 1395
Joined: Fri Oct 13, 2006 5:12 am

Post by zsouthboy » Fri Nov 30, 2007 2:08 am

I'm honestly not seeing anything special.

Tons of marketing though.

They make it seem as though they've stumbled upon some magic way of rendering - uh, no. They haven't. Realtime image-based lighting has been around for a while.

lycium
Posts: 1216
Joined: Wed Sep 12, 2007 7:46 am
Location: Leipzig, Germany
Contact:

Post by lycium » Fri Nov 30, 2007 3:37 am

looks like it could be an implementation of this paper: http://graphics.stanford.edu/papers/allfreqmat/

F.ip2
Posts: 160
Joined: Thu Aug 10, 2006 11:05 am

Post by F.ip2 » Fri Nov 30, 2007 9:24 am

I didn't say it is something very new,

but the speed is good, and they seem to be the first to turn the GPU demos into an actual commercial program.


I am just curious whether this has a catch somewhere in terms of realism. Cars, well, cars are not a great object to study, but the plane interiors showed some well-lit / illuminated areas.

claas
--
C l a a s E i c k e K u h n e n
Artist : Designer : Educator

Assistant Professor Industrial Design
Kendall College of Art and Design
of Ferris State University

Knaxknarke
Posts: 21
Joined: Thu Nov 23, 2006 10:14 am

no GPU used

Post by Knaxknarke » Fri Nov 30, 2007 10:35 am

Hypershot does NOT use the GPU. It's a pure CPU renderer.

See at:
http://www.bunkspeed.com/hypershot/products.html

At the right column:
FEATURES AT-A-GLANCE:
[...]
- Multi-threaded architecture takes full advantage of dual and quad core CPU's
- CPU not GPU based
The same in the tech_spec.pdf:
- CPU based
- near linear performance scale with additional CPUs
It would be much too complicated to support a big fraction of the graphics cards out in the real world. Nvidia CUDA and AMD CTM are capable of running complex algorithms on the GPU, but they are not in common use (newest GPUs + special drivers only).

Shader Model 2 is too weak to be of any use. Shader Model 3 may be of some use for some parts (mostly for shading, less for ray shooting), but if you want cross-platform, only GLSL is an option, and the compiler lives in the OpenGL driver, where there are still compatibility issues between AMD and NV.
Every major driver update brings a new bug in the compiler, and the compiler only runs when the user starts the app! That is bound to cause a lot of different problems for support and bugfixing...

Older graphics boards also don't have enough memory. 256 MB has to hold the 2D or 3D GUI for big screen(s), the 3D APIs (OGL/DX9) with double/triple buffering and multisampling, textures, vertex and pixel buffers, display lists, shader programs, framebuffer objects, etc.

There is simply not enough graphics memory to ray trace complex scenes on a GPU with 256 MB VRAM. Big models, a big kd-tree, big textures: for example, an HDRI environment map at 4096*2048 * fp16 * 4 channels (GPUs always use RGBA internally, there is no RGB mode; it's a consequence of the texture caching being hard-wired to the memory controller and bus) comes to 64 MB. One quarter of your VRAM just for a background texture!
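
To make that arithmetic explicit, a throwaway sketch with the same numbers as above (not tied to any particular renderer):

    # Rough VRAM footprint of the HDRI environment map described above:
    # 4096 x 2048 pixels, fp16 (2 bytes) per channel, 4 channels because the
    # GPU pads RGB textures to RGBA internally.
    width, height = 4096, 2048
    bytes_per_channel = 2          # fp16
    channels = 4                   # RGBA
    size_bytes = width * height * bytes_per_channel * channels
    print(size_bytes / (1024 ** 2), "MB")   # -> 64.0 MB, a quarter of a 256 MB card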

You surely use a render node with 2-4 GB RAM. A lot of graphics people use 64-bit workstations, because the 2 GB/4 GB limit of Win32 hurts their renderable scene size. It's no problem to crash Maya's internal renderer or mental ray on 32-bit Windows with a 3-million-poly model, and that is a _small_ model if you handle CAD data.

The biggest graphics boards have 1-2 GB VRAM, with no MMU, no paging, no protection, no virtual memory: things that have been taken for granted on PC CPUs since the 386 in 1985 (OK, M$ needed more than 10 years to use it in their main OS, but it was there all the time, on PC Unices or OS/2...).

Then there is still the PCIe 16x bottleneck for fast interaction and data transfer between CPU and GPU. So hybrid CPU/GPU rendering is still difficult, and the latency kills any finer granularity in the control flow.

CUDA is the first real chance to do ray tracing completely on GPUs. The Quadro FX 5600 with 1.5 GB VRAM is big enough for medium-sized models, but it is too expensive: for 3000 US$ you are better off buying a dual quad-core Xeon PC with 8 GB RAM.

F.ip2
Posts: 160
Joined: Thu Aug 10, 2006 11:05 am

Post by F.ip2 » Fri Nov 30, 2007 11:26 am

Oops,

I must have missed that it is not GPU-based.

Jesus, how do they make it so fast...
--
C l a a s E i c k e K u h n e n
Artist : Designer : Educator

Assistant Professor Industrial Design
Kendall College of Art and Design
of Ferris State University

lycium
Posts: 1216
Joined: Wed Sep 12, 2007 7:46 am
Location: Leipzig, Germany
Contact:

Post by lycium » Fri Nov 30, 2007 5:56 pm

F.ip2 wrote: but the speed is good, and they seem to be the first to turn the GPU demos into an actual commercial program.
gelato has already been mentioned, it predates them by several years. i've heard, however, that its performance/cost characteristics are worse than those of other industry players, since you have to buy an expensive license and then expensive quadro cards too (cf. one license on a monster multicore machine).
F.ip2 wrote: Jesus, how do they make it so fast...
the rendering precomputation is one thing, but interactive relighting is something else. effectively what they are doing is compressing a lot of radiance and surface reflection information and reconstructing it in realtime (this is why it's been popular with games; it's even part of the directx api).

the first precomputed radiance transfer (prt) method to gain popularity was kautz and sloan's spherical harmonic based method. it was, however, only a low-frequency approximation, and judging from appearances it looks like one of the newer all-frequency methods by ramamoorthi is implemented here.
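
for anyone who hasn't met prt before, here is a minimal sketch of the diffuse spherical-harmonic variant; the array names are made up for illustration and this is not hypershot's actual code:

    import numpy as np

    # Diffuse precomputed radiance transfer (PRT), order-3 SH = 9 coefficients.
    # Offline, each vertex gets a "transfer" vector with the cosine term,
    # visibility and interreflections baked in; at runtime the lighting
    # environment is projected onto the same 9 basis functions, and relighting
    # is just a dot product per vertex and colour channel, hence the realtime speed.
    N_COEFFS = 9

    def relight(transfer, light_rgb):
        """transfer: (num_verts, 9), light_rgb: (3, 9) -> (num_verts, 3) radiance."""
        return transfer @ light_rgb.T

    # Usage: swapping the HDRI only means recomputing 'light_rgb'; no rays are
    # traced while the user drags the environment around.
    transfer = np.random.rand(1000, N_COEFFS)   # stand-in for baked transfer data
    light_rgb = np.random.rand(3, N_COEFFS)     # stand-in for the projected HDRI
    colours = relight(transfer, light_rgb)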

Knaxknarke
Posts: 21
Joined: Thu Nov 23, 2006 10:14 am

Post by Knaxknarke » Sat Dec 01, 2007 9:44 am

F.ip2 wrote: Oops,
Jesus, how do they make it so fast...
Are you sure that it's much faster than other fast renderers? I've seen a demo of Hypershot and it didn't look too fast compared to the interactive preview in Modo 301 or FPrime...

About the "how":

I think it's heavy optimization of the code: cache-coherent memory layout, packet traversal (ray bundles). Look at ompf.org for hard-core ray tracing optimization. Plus adaptive sampling of the HDRI environment and the material BRDFs. They specialized in one object illuminated by an HDRI, so 128-256 adaptive samples on the HDRI should do in most cases.
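
To sketch what adaptive sampling of the HDRI environment can mean in practice (illustrative only, the function names are made up and this is not Hypershot's actual scheme):

    import numpy as np

    # Importance-sample an HDRI environment map: build a discrete distribution
    # over texels weighted by luminance and the latitude-dependent solid angle,
    # then draw the ~128-256 samples mentioned above from that distribution
    # instead of picking directions uniformly.
    def build_env_cdf(hdri):
        """hdri: (H, W, 3) array of linear radiance; returns (pdf, cdf) over texels."""
        h, w, _ = hdri.shape
        lum = hdri @ np.array([0.2126, 0.7152, 0.0722])   # per-texel luminance
        theta = (np.arange(h) + 0.5) / h * np.pi          # row index -> polar angle
        pdf = lum * np.sin(theta)[:, None]                # don't over-sample the poles
        pdf /= pdf.sum()
        return pdf.ravel(), np.cumsum(pdf.ravel())

    def sample_env(cdf, w, n_samples, rng):
        """Return (row, col) indices of n_samples importance-sampled texels."""
        idx = np.searchsorted(cdf, rng.random(n_samples))
        idx = np.minimum(idx, cdf.size - 1)               # guard against float round-off
        return idx // w, idx % w

    # Usage with a dummy 64x128 map; a real renderer would also keep each
    # sample's pdf value so the Monte Carlo estimate stays correct.
    rng = np.random.default_rng(0)
    pdf, cdf = build_env_cdf(rng.random((64, 128, 3)))
    rows, cols = sample_env(cdf, 128, 256, rng)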

You can't compare the speed of a modern biased renderer with the dated Mental Ray (what most people use with Maya/Max/XSI) or with unbiased rendering.

BTW: I'm very curious about Mental Ray 4. Rumor has it that they will sacrifice some shader plugin compatibility to do a totally new design. They hired some smart guys with PhDs in real-time ray tracing over the last year, so maybe it will be as fast as Hypershot, Modo, ...

F.ip2
Posts: 160
Joined: Thu Aug 10, 2006 11:05 am

Post by F.ip2 » Sun Dec 02, 2007 3:34 am

I remember that Modo was quite fast with GI rendering.


The point I am most interested in is how real/accurate the renderings from Hypershot are. VRay, for example, also seems to produce excellent results.

Don't get me wrong, this is not about questioning Indigo, no no.

I am just curious, mainly because I am on the lookout for new software for our department and for future trends, so we can support our students the best.


Claas
--
C l a a s E i c k e K u h n e n
Artist : Designer : Educator

Assistant Professor Industrial Design
Kendall College of Art and Design
of Ferris State University

Knaxknarke
Posts: 21
Joined: Thu Nov 23, 2006 10:14 am

Post by Knaxknarke » Mon Dec 03, 2007 3:39 am

lyc wrote: gelato has already been mentioned, it predates them by several years. i've heard, however, that its performance/cost characteristics are worse than those of other industry players, since you have to buy an expensive license and then expensive quadro cards too (cf. one license on a monster multicore machine).
The basic version of Gelato is free, but the real-time relighting engine isn't. It only needs a newer Nvidia GPU; they merely recommend a Quadro FX because that is what is tested with the Quadro drivers. So a GeForce 8800 xxx should also be fine, and that isn't too expensive.

Knaxknarke
Posts: 21
Joined: Thu Nov 23, 2006 10:14 am

Post by Knaxknarke » Mon Dec 03, 2007 3:52 am

F.ip2 wrote: The point I am most interested in is how real/accurate the renderings from Hypershot are. VRay, for example, also seems to produce excellent results.
The newer Final Render is also fast and doesn't show too much (biased) low-frequency noise. AFAIK these newer biased renderers all use randomized quasi-Monte Carlo sampling strategies to do fast and accurate sampling of light paths, some kind of bias compensation for photon mapping, and so on. There are a lot of tricks in the academic GI literature.

The problem is to find the ones that really work, that are reliable and fast, and to combine them in a useful way. It would be hard to prove that they always work or are always correct, but most will look real enough; the eye can be fooled. As long as you don't try to do lighting simulation for buildings, car headlights and so on, it should be OK.
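
To give a feel for the randomized quasi-Monte Carlo sampling mentioned above, here is a toy sketch (not any particular renderer's sampler):

    import numpy as np

    # Halton low-discrepancy points with a random Cranley-Patterson shift.
    # QMC points cover the sample domain far more evenly than plain random
    # numbers, which is a large part of why such renderers converge with
    # comparatively few samples per pixel.
    def halton(index, base):
        """Radical inverse of 'index' in the given base, a value in [0, 1)."""
        result, f = 0.0, 1.0 / base
        while index > 0:
            result += f * (index % base)
            index //= base
            f /= base
        return result

    def rqmc_samples(n, shift=(0.0, 0.0)):
        """n 2D points: Halton bases 2 and 3, randomly shifted on the unit torus."""
        pts = np.array([[halton(i, 2), halton(i, 3)] for i in range(1, n + 1)])
        return (pts + np.asarray(shift)) % 1.0

    # Usage: estimate the area of the quarter unit disc (pi/4, about 0.785)
    # with 256 samples; the random shift keeps the estimator unbiased on average.
    rng = np.random.default_rng(1)
    pts = rqmc_samples(256, shift=rng.random(2))
    estimate = np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= 1.0)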
F.ip2 wrote: Don't get me wrong, this is not about questioning Indigo, no no.
Path tracing + MLT is time-proven and well understood; it will work correctly. But it isn't the only way to get good-looking GI renderings, and it isn't the fastest way.

F.ip2 wrote: I am just curious, mainly because I am on the lookout for new software for our department and for future trends, so we can support our students the best.
I think Hypershot is quite expensive for what it does, but there is surely an academic or classroom license.

Knax

Caronte
Posts: 61
Joined: Tue May 01, 2007 7:17 am
Location: Valencia, Spain

Post by Caronte » Mon Dec 03, 2007 3:52 am

F.ip2 wrote: ...VRay, for example, also seems to produce excellent results.
For me, VRay is the best renderer in terms of time/results.
Sorry about my poor English ;)
