Proposal for faster renderings
The principle: noise is distributed randomly across the surface of the render, so a second render of the same scene will have different noise, but most of the correct pixels will match between the two images. With three images we can get an even better combination, and so on.
The practice: I took a scene from a project I'm working on and rendered it three times for 10 minutes each, then rendered it once for 30 minutes, obtaining four images: one as a control and three for my demonstration. On my laptop (Turion 64 X2 at 1800 MHz), I got about 70 samples per pixel on the 10-minute renders and 210 on the 30-minute render. The most difficult part was finding a way to combine the three images. In Photoshop, the best method was to blend each layer containing an image with the one beneath it using "Darken".
The conclusion: the stacked render is not better in terms of noise, but it has more correct pixels/areas, so it can be post-processed better in noise-reduction software, whereas the long render may have areas with exposure problems, noise, etc. that are not correctable in post-processing.
Maybe this isn't a new idea, but I'm sure there is a more optimal/automated way of combining the images (I'm thinking of some bit operations on the pixels) than blending layers in Photoshop... if anyone has the time/knowledge to build it.
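For anyone wanting to automate the "Darken" combine outside Photoshop, here is a minimal sketch (the function name is hypothetical; it assumes the renders are loaded as floating-point NumPy arrays of identical shape). A per-pixel, per-channel minimum is what the "Darken" blend mode computes; note that taking the minimum is biased, as discussed later in the thread.

```python
import numpy as np

def darken_combine(renders):
    """Per-pixel, per-channel minimum of N same-sized renders
    (equivalent to stacking layers with the "Darken" blend mode)."""
    stack = np.stack(renders, axis=0)   # shape (N, H, W, C)
    return stack.min(axis=0)

# Tiny demo: a 1x1-pixel image where one render has a bright firefly.
a = np.array([[[0.20, 0.20, 0.20]]])
b = np.array([[[0.20, 0.20, 9.00]]])    # firefly in the blue channel
combined = darken_combine([a, b])       # firefly is suppressed
```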
- Attachments
- One of the three 10-minute renders: 2.png (496.98 KiB)
- The combination of the three 10-minute renders using "Darken" blending in Photoshop layers: combined_darken.png (485.4 KiB)
- The 30-minute render: 30mins.png (434.06 KiB)
- Posts: 126
- Joined: Wed Nov 28, 2007 9:16 am
@Ono
My combined image was not optimal because of the way Photoshop combines images, taking the lowest value as you said. But the algorithm should be: if a certain pixel has value a in the first render, a in the second, and b in the third, the final result should be a, completely ignoring the "statistically wrong" pixel, b. Obviously this can be extrapolated to any odd number of renders (3, 5, 7, and so on).
@v_mulligan
It's not a proposal for the default way Indigo renders, but it would be nice to have as an option.
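The three-render rule described above can be sketched like this (hypothetical helper name, assuming NumPy float arrays): for each pixel, average the two samples that agree most closely and discard the outlier.

```python
import numpy as np

def closest_pair(r1, r2, r3):
    """Per pixel: average the two closest of the three samples and
    discard the remaining, "statistically wrong" one."""
    d12 = np.abs(r1 - r2)
    d13 = np.abs(r1 - r3)
    d23 = np.abs(r2 - r3)
    return np.where(d12 <= np.minimum(d13, d23), (r1 + r2) / 2,
           np.where(d13 <= d23, (r1 + r3) / 2, (r2 + r3) / 2))

# Demo: two renders agree (~0.2), the third holds a firefly (5.0).
r1, r2, r3 = np.array([0.20]), np.array([0.21]), np.array([5.0])
result = closest_pair(r1, r2, r3)   # averages 0.20 and 0.21, ignores 5.0
```

Unlike the "Darken" blend, this keeps the brighter of two agreeing values, so it does not systematically darken the image.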
-
- Posts: 1828
- Joined: Mon Sep 04, 2006 3:33 pm
Regius wrote:
My combined image was not optimal because of the way Photoshop combines images, taking the lowest value as you said. But the algorithm should be: if a certain pixel has value a in the first render, a in the second, and b in the third, the final result should be a, completely ignoring the "statistically wrong" pixel, b. Obviously this can be extrapolated to any odd number of renders (3, 5, 7, and so on).
So what if a certain pixel has value a in the first render, b in the second, and c in the third? Average them out? That's what it does anyway.
A better algorithm for removing noise would be:
1. Render ten images of the same scene.
2. Loop through each pixel in the image:
--At pixel (x,y), average nine of the ten samples and calculate the standard deviation. If the tenth sample is within two standard deviations of the mean of the other nine, accept it; otherwise reject it. Do this comparison for each of the ten samples against the remaining nine.
--Average all the accepted samples to generate the final colour for pixel (x,y).
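The two steps above can be sketched as follows (hypothetical function name, assuming the N renders are loaded as equally-shaped NumPy float arrays). Each sample is tested against the mean and standard deviation of the other N-1 samples, and the survivors are averaged:

```python
import numpy as np

def sigma_reject_combine(renders, n_sigma=2.0):
    """Leave-one-out outlier rejection per pixel across N renders.

    Reject a sample if it lies more than n_sigma standard deviations
    from the mean of the other N-1 samples; average the survivors."""
    stack = np.stack(renders, axis=0).astype(float)   # shape (N, ...)
    n = stack.shape[0]
    total = stack.sum(axis=0)
    accepted = np.zeros_like(stack, dtype=bool)
    for i in range(n):
        others_mean = (total - stack[i]) / (n - 1)
        others_std = np.delete(stack, i, axis=0).std(axis=0)
        accepted[i] = np.abs(stack[i] - others_mean) <= n_sigma * others_std
    # Fall back to a plain mean wherever every sample was rejected.
    count = accepted.sum(axis=0)
    safe = np.where(count > 0, count, n)
    summed = np.where(accepted, stack, 0).sum(axis=0)
    return np.where(count > 0, summed / safe, stack.mean(axis=0))

# Demo: five 1-pixel renders; four agree, one contains a firefly.
samples = [np.array([0.20]), np.array([0.21]), np.array([0.19]),
           np.array([0.20]), np.array([5.0])]
result = sigma_reject_combine(samples)   # firefly rejected
```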
-
- Posts: 126
- Joined: Wed Nov 28, 2007 9:16 am
Illustrating the above suggestion: let's say a pixel has value A in the first image, B in the second, C in the third, D in the fourth, and E in the fifth (I'll use just five to keep it simple). Let's say A, C, D, and E are all similar, but B is way off. According to the algorithm above, we'd do the following:
Standard deviation of B, C, D, and E --> huge; A is within 2 sigmas of the mean of B, C, D, and E. Accept A.
Standard deviation of A, C, D, and E --> small; B is NOT within 2 sigmas of the mean of A, C, D, and E. REJECT B.
Standard deviation of A, B, D, and E --> huge; C is within 2 sigmas of the mean of A, B, D, and E. Accept C.
Standard deviation of A, B, C, and E --> huge; D is within 2 sigmas of the mean of A, B, C, and E. Accept D.
Standard deviation of A, B, C, and D --> huge; E is within 2 sigmas of the mean of A, B, C, and D. Accept E.
Average the accepted values (A, C, D, and E) to produce the final value.
As a final note, standard error might be more appropriate than standard deviation; I'm not sure.
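The A-through-E walkthrough can be checked numerically with a throwaway script (the concrete values 0.20, 5.0, etc. are made up for illustration; A, C, D, E agree and B is way off):

```python
import numpy as np

# Made-up sample values for one pixel across five renders.
values = {"A": 0.20, "B": 5.00, "C": 0.21, "D": 0.19, "E": 0.20}

accepted = {}
for name, v in values.items():
    # Leave-one-out: stats over the other four samples only.
    others = np.array([x for k, x in values.items() if k != name])
    mean, sigma = others.mean(), others.std()
    accepted[name] = abs(v - mean) <= 2 * sigma

# Average only the accepted samples (A, C, D, and E).
final = np.mean([v for k, v in values.items() if accepted[k]])
```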
Wow, guys, I had to read that twice to understand (or so I think; what's sigma?). I'm not a programmer/math guy, but I think v_mulligan might be right.
Anyway, there must be a balance. There has to be a point where it's better to wait for a single render than to produce a set of images and recombine them, even if the combining is done automatically. It's all about when you reach that point in time, and with what method you obtain a useful image.
But perhaps with a good algorithm one could render, say, 10 images at 2 minutes each and, after combining them, obtain a better image than a regular 60-minute render.
Sigma is one standard deviation. It's a very common concept in stats -- a measure of the spread in a dataset.
I'm guessing the algorithm I propose above would be computationally less expensive than Indigo doing ten or so samples per pixel -- and would therefore almost certainly be worth it. An even simpler algorithm would be to discard the 5 or 10% of samples at each pixel that lie furthest from the mean and to average the rest -- though this might involve discarding good samples in a lot of cases.
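The simpler trim-the-furthest variant is also easy to sketch (hypothetical helper, assuming NumPy float arrays): sort each pixel's samples by distance from the mean, drop the furthest fraction, and average the rest.

```python
import numpy as np

def trimmed_mean_combine(renders, discard_frac=0.10):
    """Per pixel: discard the discard_frac of samples lying furthest
    from the mean, then average the remainder."""
    stack = np.stack(renders, axis=0).astype(float)      # shape (N, ...)
    n = stack.shape[0]
    n_keep = max(1, n - int(round(n * discard_frac)))
    dist = np.abs(stack - stack.mean(axis=0))
    order = np.argsort(dist, axis=0)                     # closest first
    kept = np.take_along_axis(stack, order[:n_keep], axis=0)
    return kept.mean(axis=0)

# Demo: ten renders, one containing a firefly; 10% trimming drops it.
samples = [np.array([0.20])] * 9 + [np.array([7.0])]
result = trimmed_mean_combine(samples)
```

As noted above, this can discard good samples when a pixel has no outliers, since it always trims the same fraction.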
- PureSpider
- Posts: 1459
- Joined: Tue Apr 08, 2008 9:37 am
- Location: Karlsruhe, BW, Germany
@PureSpider
PureSpider wrote:
OnoSendai wrote: Taking the smallest pixel value of several images will reduce fireflies, but it's biased.
We all saw what OnoSendai said, but the discussion has evolved: we are no longer talking about taking the smallest pixel value, but about a way of excluding the "wrong" pixels (i.e. the noise) and getting a closest-to-correct value for each pixel.
@v_mulligan
v_mulligan wrote: ...would therefore almost certainly be worth it
Can I help somehow?