Alpha render?

Feature requests, bug reports and related discussion
psor
1st Place Winner
Posts: 1295
Joined: Sun Jun 25, 2006 1:25 am
Location: Berlin

Post by psor » Fri Jan 18, 2008 5:32 pm

Honestly?! ... I would say no! Sorry to say this, but I think most people
have memory problems already. :? :cry: :roll: :wink:

If you could find another way, it would be appreciated! ;o))

edit: Of course, if this is the easiest way to do it ... *sigh* ... but
then this would be a good start for thinking about layer blending too. :P :D ;)




take care
psor
Last edited by psor on Fri Jan 18, 2008 5:34 pm, edited 1 time in total.
"The sleeper must awaken"

Zom-B
1st Place 100
Posts: 4701
Joined: Tue Jul 04, 2006 4:18 pm
Location: ´'`\_(ò_Ó)_/´'`

Post by Zom-B » Fri Jan 18, 2008 5:34 pm

OnoSendai wrote:Ok, I've thought of one way.
It would involve two master buffers; I wonder if using 2x as much memory for the buffer as usual will be alright for people?
RAM is so cheap at the moment, so go for it Ono :D
polygonmanufaktur.de

OnoSendai
Developer
Posts: 6244
Joined: Sat May 20, 2006 6:16 pm
Location: Wellington, NZ

Post by OnoSendai » Fri Jan 18, 2008 5:35 pm

Damn, you're a tough customer Psor :)

OnoSendai
Developer
Posts: 6244
Joined: Sat May 20, 2006 6:16 pm
Location: Wellington, NZ

Post by OnoSendai » Fri Jan 18, 2008 5:37 pm

Ah yes... the problem is rather similar to the layer blending / multilight problem

psor
1st Place Winner
Posts: 1295
Joined: Sun Jun 25, 2006 1:25 am
Location: Berlin

Post by psor » Fri Jan 18, 2008 5:38 pm

@ZomB

Dude! :lol: It's not the RAM, it's the OS for most people. And as you know,
Nik is releasing 32-bit builds for updates. So how should this work?

In that case I'd prefer an option to render the alpha channel after
I've decided that the image is done: flush the main buffer, render the
alpha channel, and there you go.


@Nik

I'm not a customer yet, hehe! ;) But I want to say that you should also
make it as fast as possible. I remember the alpha channel of Maxwell
was so freakin' noisy in the beginning that it was almost unusable.
So I guess you'll have to use a different approach ... *hint* ... ;o)))

edit: I dunno what fry is doing, but it is fast and works pretty well.
The other comp channels, like the shadow and volume passes, do too.

edit2: Checked it again with the frydemo, and the sampler seems
to be the same. On all channels the "artefacts" have the same pattern,
so I guess they use path tracing for those too. They definitely use more
than one master buffer, because you can switch between the channels,
and the RAM used increases if you render Alpha, AO, ... too.

edit3: So as I said, if this is the way to go, then do it. If there is
a way to save the RAM for other things, that would be cool too. It
depends on what you want to do. If you go the "eat-more-RAM" route,
you'll have to provide more 64-bit releases, which means spending more
time compiling new releases! Anyway, you want to sell your SDK at
some point, so do it the way that is best for you and your paying
customers! :P :D ;)




take care
psor
"The sleeper must awaken"

v_mulligan
Posts: 126
Joined: Wed Nov 28, 2007 9:16 am

Post by v_mulligan » Fri Jan 18, 2008 6:20 pm

I'm not quite clear on why you need twice the memory. As I understand it, the Indigo renderer produces many point samples of the light reaching the camera from the scene, much like a real camera collecting photons. Collect enough samples and you have an image. Instead of storing 3 values (the RGB colour components) for each sample, why not store 4 (the RGB and alpha components)? Wouldn't this just use 33% more memory? I apologize if I'm off in left field about the way the renderer works. Most of my experience is with REYES-style renderers like Aqsis. I know that Indigo strives for much greater physical accuracy, but regardless of the algorithms used to compute the information that's being sampled, the same information needs to be STORED for each sample, does it not?
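For what it's worth, the 33% figure is easy to sanity-check with some back-of-the-envelope arithmetic. A rough Python sketch follows; the buffer layout and per-channel precision are guesses, since Indigo's internals aren't public:

```python
# Rough master-buffer size estimate, assuming one 32-bit float per
# channel per pixel (an assumption -- Indigo's actual layout is unknown).
def buffer_bytes(width, height, channels, bytes_per_channel=4):
    return width * height * channels * bytes_per_channel

w, h = 1920, 1080
rgb = buffer_bytes(w, h, 3)   # colour only
rgba = buffer_bytes(w, h, 4)  # colour plus alpha

print(rgb / 1e6)          # ≈24.9 MB
print(rgba / 1e6)         # ≈33.2 MB
print(rgba / rgb - 1.0)   # ≈0.333, i.e. one third more memory
```

So storing alpha alongside the colour in one buffer would cost a third more memory, rather than the doubling implied by a second master buffer.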

As for graininess of the alpha channel, I'd expect it to be as grainy as the colour channel if you compute an alpha value for each sample. This should be acceptable for any compositing work that it would be used for.

psor
1st Place Winner
Posts: 1295
Joined: Sun Jun 25, 2006 1:25 am
Location: Berlin

Post by psor » Fri Jan 18, 2008 6:30 pm

"As for graininess of the alpha channel, I'd expect it to be as grainy as the
colour channel if you compute an alpha value for each sample. This should be
acceptable for any compositing work that it would be used for."


I'm no pro when it comes to compositing, but I would say a 'grainy' alpha
mask would make you hate your work, because anything that is not full white (255)
will be semi-transparent, and you'll not like that, I guess ... ;o))

But maybe I'm missing something! :D :lol: ;)

nb: And yes, I think it's about 33% too, but Nik was a bit careful! ;o)




take care
psor
"The sleeper must awaken"

v_mulligan
Posts: 126
Joined: Wed Nov 28, 2007 9:16 am

Post by v_mulligan » Fri Jan 18, 2008 6:32 pm

Incidentally, however you decide to compute the alpha channel, the colour output should probably change a bit as well to facilitate compositing:

Consider the example of a red object that's 50% transparent, in front of a black background. Currently, the final colour of its pixels would presumably be RGB (0.5, 0.0, 0.0). If Indigo were to render an alpha channel, it wouldn't be that useful to output RGBA (0.5, 0, 0, 0.5), though. The typical compositing formula, with alpha as the opacity of the foreground, is:

Composited Colour = Alpha*Foreground + (1 - Alpha)*Background

In this case, if I were to composite my red object onto a black background, the final colour would be RGB (0.25, 0, 0) instead of the desired RGB (0.5, 0, 0). If the program is producing an alpha channel, semitransparent pixels or samples should retain the colour of the foreground objects, and not be influenced by the background. Thus, with alpha output off, Indigo should in this case output RGB (0.5, 0, 0), and with alpha on, it should output RGBA (1, 0, 0, 0.5).
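A quick Python sketch of that compositing step, using the conventional "over" blend with alpha weighting the foreground; the colour values are just the ones from the example above:

```python
# Standard "over" compositing with a straight (un-premultiplied) alpha,
# where alpha is the opacity of the foreground.
def over(fg, alpha, bg):
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, bg))

black = (0.0, 0.0, 0.0)

# Compositing the rendered pixel colour (0.5, 0, 0) with alpha 0.5 over
# black attenuates the red a second time -- the problem described above:
print(over((0.5, 0.0, 0.0), 0.5, black))  # (0.25, 0.0, 0.0)

# Storing the foreground object's own colour instead reproduces the
# directly rendered result:
print(over((1.0, 0.0, 0.0), 0.5, black))  # (0.5, 0.0, 0.0)
```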

Sorry to complicate the problem!

v_mulligan
Posts: 126
Joined: Wed Nov 28, 2007 9:16 am

Post by v_mulligan » Fri Jan 18, 2008 6:38 pm

psor wrote: I'm no pro when it comes to compositing, but I would say a 'grainy' alpha
mask would make you hate your work. Because what is not full white (255)
will be semi-transparent and you'll not like this I guess ... ;o))
Normally, I'd agree. If the grain in the alpha channel matches the grain in the colour channel, it shouldn't make the image any worse when composited. I mean, it'll be grainy, but it won't look any worse than it did with the background you already had. This is why I'd suggest calculating an alpha value for each sample as the colour values are calculated. If it were done as a separate step, after the colour image is generated, a different random number sequence would presumably be used to jitter the samples, and the grain would, I expect, not match the grain in the colour image. In this case, you're quite right about compositing headaches.

I'll scratch my head a bit and then maybe post some pseudocode showing how I'd implement alpha rendering in Indigo. It'll involve some guesswork about what's actually going on under the hood, though...

psor
1st Place Winner
Posts: 1295
Joined: Sun Jun 25, 2006 1:25 am
Location: Berlin

Post by psor » Fri Jan 18, 2008 6:39 pm

Hehe ... yup, now we get to the fun of comping. So Nik, watch out, Sir!

:twisted: 8) :lol: :D :wink:



take care
psor
"The sleeper must awaken"

OnoSendai
Developer
Posts: 6244
Joined: Sat May 20, 2006 6:16 pm
Location: Wellington, NZ

Post by OnoSendai » Fri Jan 18, 2008 6:53 pm

goddamit psor that little bug in your sig is annoying me

:twisted: :lol:

psor
1st Place Winner
Posts: 1295
Joined: Sun Jun 25, 2006 1:25 am
Location: Berlin

Post by psor » Fri Jan 18, 2008 6:58 pm

Don't worry mate, I know this feeling! :twisted: :lol: :wink:



take care
psor
"The sleeper must awaken"

OnoSendai
Developer
Posts: 6244
Joined: Sat May 20, 2006 6:16 pm
Location: Wellington, NZ

Post by OnoSendai » Fri Jan 18, 2008 7:09 pm

Anyway, thinking a little more about the problem:

It's clear that full realism will not be achieved if compositing with a background image is used, because a different background image would result in different illumination of the scene. In practice, this means that specular reflections and reflections of the background at grazing angles will be incorrect.

However we can ignore that and go for 'as realistic as possible'.

Compositing with low dynamic range (LDR) images will be less realistic than compositing with HDR images. This is because the nonlinear tone-mapping stage is, in effect, performed on the Indigo output and the background plate separately. This differs from the correct approach, which would be to composite the two HDR images and then tone-map.
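This is easy to demonstrate with a toy example. Below, a simple gamma curve stands in for the real tone-mapping operator, and the radiance values are made up; only the non-linearity matters:

```python
# Toy demonstration that tone-mapping and compositing don't commute.
# A plain gamma curve stands in for the actual tone-mapping operator.
def tonemap(x, gamma=2.2):
    return x ** (1.0 / gamma)

def blend(fg, alpha, bg):  # linear blend of two scalar radiance values
    return alpha * fg + (1.0 - alpha) * bg

fg, bg, alpha = 0.8, 0.1, 0.5

# Correct: composite the HDR (light-linear) values, then tone-map.
correct = tonemap(blend(fg, alpha, bg))
# What LDR compositing does: tone-map each image, then composite.
wrong = blend(tonemap(fg), alpha, tonemap(bg))

print(correct)  # ≈0.696
print(wrong)    # ≈0.627 -- visibly different from the correct result
```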

So ideally we would work with light-linear HDR images. The problem here, I'm guessing, is that most people's background plates will be LDR photos.

Let's assume for the moment, however, that we will work with HDR output from Indigo and an HDR background plate.

Simply using an HDR Indigo render (with no alpha channel), a greyscale alpha map, and an HDR background plate won't give correct results.

This is because you'll see bits of the old Indigo background when doing the blend operation.
For instance, let's say you have a very blurred red object, due to DOF effects, against a blue sky. The corresponding colour at a pixel may then be 25% red, 75% blue. Simply using a greyscale alpha map to blend with the background plate will incorrectly use the blue part of the pixel colour, as well as the correct red part.

A more correct approach would be for Indigo to accumulate only the red object etc. onto the buffer, but not the sky. The alpha channel at this pixel would then be the fraction of rays that hit the red object (e.g. 25%).
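A minimal Python sketch of that accumulation scheme; the sample colours and hit counts are invented for illustration, and how Indigo would actually tag background hits internally is an open question:

```python
# Per-pixel accumulation where background (sky) hits contribute nothing
# to the colour buffer; alpha is the fraction of rays hitting geometry.
RED, BLUE_SKY = (1.0, 0.0, 0.0), (0.2, 0.4, 1.0)

# Invented sample data: 25% of camera rays hit the blurred red object.
samples = [(RED, True)] * 25 + [(BLUE_SKY, False)] * 75

colour_sum = [0.0, 0.0, 0.0]
fg_hits = 0
for colour, hit_foreground in samples:
    if hit_foreground:  # sky samples are NOT accumulated onto the buffer
        for i in range(3):
            colour_sum[i] += colour[i]
        fg_hits += 1

alpha = fg_hits / len(samples)             # fraction of foreground hits
pixel = [c / fg_hits for c in colour_sum]  # pure red -- no blue bleeding in

print(alpha)  # 0.25
print(pixel)  # [1.0, 0.0, 0.0]
```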

OnoSendai
Developer
Posts: 6244
Joined: Sat May 20, 2006 6:16 pm
Location: Wellington, NZ

Post by OnoSendai » Fri Jan 18, 2008 7:17 pm

v_mulligan wrote:I'm not quite clear on why you need twice the memory. As I understand it, the Indigo renderer produces many point samples of the light reaching the camera from the scene, much like a real camera collecting photons. Collect enough samples and you have an image. Instead of storing 3 values (the RGB colour components) for each sample, why not store 4 (the RGB and alpha components)? Wouldn't this just use 33% more memory?
That wouldn't give correct results; you'd just end up with a very high alpha value around bright bits of the image :)

v_mulligan
Posts: 126
Joined: Wed Nov 28, 2007 9:16 am

Post by v_mulligan » Fri Jan 18, 2008 7:29 pm

OnoSendai wrote: That wouldn't give correct results, you'd just end up with a very high alpha value around bright bits of the image :)
I don't mean store the greyscale colour value again in a separate channel. I'm assuming we have some sort of algorithm for COMPUTING the alpha component. Having computed it, we just want to STORE it. All I'm saying is that storing the alpha channel should only increase memory usage by 33% instead of by 100%.

The more difficult question, of course, is how to COMPUTE the alpha component. I think in my head, I was making all of this much more complicated than it needs to be. Here's how I'd propose adding alpha output to Indigo:

Case 1: The user doesn't want alpha output.
--Render everything exactly as you are now :) .

Case 2: The user wants alpha output.
--For each sample, store the information you store now (the colour values) PLUS a single bit of data (a boolean variable) indicating whether the sample comes from the background directly (0) or is scattered/reflected/refracted (1).
--To figure out the COLOUR of each pixel, average only those samples that are scattered/reflected/refracted, i.e., only those samples for which the boolean variable equals 1. (I think this is basically what you said in the last paragraph two posts back, OnoSendai).
--To figure out the ALPHA value for each pixel, average the boolean variable stored with each sample.

For example, let's say we're looking at a pixel at the boundary between a grey wall and the blue sky beyond. If there's no alpha channel, we want the pixel to be blue-grey, whereas if there is an alpha channel, we want it to be grey to allow compositing. Seven samples land on the wall and return grey, and three land on the sky and return blue. With each of the samples landing on the grey wall, store (colour=grey, alpha=1), while for the other three, store (colour=blue, alpha=0). To figure out the colour of the pixel, consider only the seven samples with alpha=1. These yield an average colour of grey (instead of the grey-blue you'd get from averaging all of the samples). The alpha should be 0.7 ((7*1 + 3*0) / 10). This should composite properly.
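Running those numbers through a quick Python sketch confirms the scheme composites properly: blending the foreground-only grey over the sky with alpha 0.7 reproduces the pixel a plain no-alpha render would give. The sample colours are made up for illustration:

```python
# The worked example above: 7 samples hit the grey wall (bit = 1),
# 3 hit the blue sky (bit = 0). Colours are invented for illustration.
GREY, BLUE = (0.5, 0.5, 0.5), (0.2, 0.4, 1.0)
samples = [(GREY, 1)] * 7 + [(BLUE, 0)] * 3

alpha = sum(bit for _, bit in samples) / len(samples)  # 0.7
# Pixel colour: average only the bit = 1 (foreground) samples.
fg = [sum(c[i] for c, bit in samples if bit) / 7 for i in range(3)]

# Plain render: average ALL samples (what a no-alpha render gives).
plain = [sum(c[i] for c, _ in samples) / len(samples) for i in range(3)]

# Compositing fg over the sky with the stored alpha matches the plain render.
recomposited = [alpha * fg[i] + (1 - alpha) * BLUE[i] for i in range(3)]

print(alpha)  # 0.7
print(fg)     # [0.5, 0.5, 0.5] -- pure wall grey
# plain and recomposited both come out at ≈(0.41, 0.47, 0.65)
```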

Note also that this solution requires very little additional cost in memory. When storing the samples, you store only one additional BIT of data per sample, and when storing the final image, you store one additional channel, increasing uncompressed image size by 33%.
Last edited by v_mulligan on Fri Jan 18, 2008 7:33 pm, edited 2 times in total.
