
Cut Crystal Tests
Taking the spreadsheet to work today to see how it shows up in Office 2003. I'll fix anything that comes out funny, and from then on I'll work on the .xls version in Office 2007 compatibility mode to ensure anyone can open it successfully.
I changed the way I calculate my starting B values, with some success; it produces better values, but I think they'll still need tweaking. Previously I used the gradient of a line between the C and F wavelengths; now I'm using the average of the gradients of all lines connecting each 10nm wavelength pair (780-770, 770-760, ...etc).
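Under the two-term Cauchy model n(λ) = A + B/λ², subtracting the equations for two wavelengths cancels A, so the "gradient" between a pair gives B directly: B = (n₁ − n₂) / (1/λ₁² − 1/λ₂²). The averaging scheme described above might look like this sketch (function names are mine, wavelengths in micrometres, and the two-term model is assumed):

```python
def cauchy_b_from_pair(l1, n1, l2, n2):
    # B implied by a single wavelength pair under n = A + B / l^2.
    # A cancels in the subtraction, so only the two samples are needed.
    return (n1 - n2) / (1.0 / l1**2 - 1.0 / l2**2)

def average_cauchy_b(samples):
    # samples: list of (wavelength_um, n) pairs sorted by wavelength.
    # Average the B implied by each adjacent pair (e.g. 10 nm apart).
    bs = [cauchy_b_from_pair(l1, n1, l2, n2)
          for (l1, n1), (l2, n2) in zip(samples, samples[1:])]
    return sum(bs) / len(bs)
```

On data that follows the Cauchy model exactly, every pair implies the same B; on real glass tables the pairs disagree, and the average smooths that out.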

Actually, it just occurred to me that if this tabular data is going to be provided to Indigo in a similar way to nk data, then there should be no issue whatsoever with me calculating independent A and B coefficients for each glass, right?
Well that's what I'm going to do because I think the A coefficients are what are making it impossible for me to get the curves right.
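One way to get independent A and B coefficients per glass is a straight least-squares fit of the tabulated n(λ) against x = 1/λ². This is a sketch under the two-term Cauchy assumption, not how Indigo does it internally; the function name is mine.

```python
def fit_cauchy(samples):
    # Least-squares fit of n = A + B * x, with x = 1 / lambda^2.
    # samples: list of (wavelength_um, n) pairs. Returns (A, B).
    xs = [1.0 / l**2 for l, _ in samples]
    ns = [n for _, n in samples]
    mean_x = sum(xs) / len(xs)
    mean_n = sum(ns) / len(ns)
    b = (sum((x - mean_x) * (n - mean_n) for x, n in zip(xs, ns))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_n - b * mean_x
    return a, b
```

On data generated exactly from a Cauchy model the fit recovers A and B; real glass tables deviate from the two-term form, which is where the curves start fighting back.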

I've got the absorption coefficient values just fine. I thought I had the Cauchy A and B thing figured out... then my brain started going into a loop. I have no doubt that the solution isn't complicated, but I just can't 'see' it yet. I will figure this out even if it kills me.


WytRaven wrote: Ono, is there any particular reason why you just seem to refuse point blank to answer the question of how you calculate your Cauchy A values internally?
I just use the IOR defined in the xml file as the A coefficient. Not sure if this is the optimal way, but I don't think it matters too much.
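A minimal sketch of what that implies, assuming the standard two-term Cauchy model n(λ) = A + B/λ²: with A taken straight from the scene's IOR value, only B carries the dispersion. Names and the B value here are illustrative, not Indigo's internals.

```python
def cauchy_n(a, b, wavelength_um):
    # Two-term Cauchy model: n(lambda) = A + B / lambda^2 (lambda in micrometres).
    return a + b / wavelength_um**2

# With A = the IOR from the xml file and an illustrative B, blue light
# (Fraunhofer F line) sees a higher index than red (C line), which is
# what produces the dispersion:
n_f = cauchy_n(1.5, 0.004, 0.4861)  # F line, ~486.1 nm
n_c = cauchy_n(1.5, 0.004, 0.6563)  # C line, ~656.3 nm
```

With B = 0 the model collapses to a constant index equal to the IOR, so treating the IOR as A just anchors the curve's level while B sets its slope.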
The scene is being rendered at 1920x1200 with 2x supersampling. It has a max depth of 15000 and is using dispersion. Additionally, every object in the scene is either refracting, scattering, reflecting, or all of the above.
This is being rendered on a dual-core AMD Athlon 64 FX-62 based system with 2GB of RAM. It's running a little less efficiently than it could, for two reasons: 1. I'm using the test 8 32-bit build, as the 64-bit one wasn't available when I started the render; 2. while I'm at work or asleep it runs at full tilt, but when I'm home it runs below normal priority while I manipulate an enormous spreadsheet in the foreground.
Despite the two points above, I seriously doubt that either has significantly impacted the overall render time.
I am rendering this to the point of apparent convergence; in other words, when I can no longer see areas of noise I will stop it. Until that day/month/year/eon, it will soldier on.


