Actually, something just occurred to me regarding IGI merging.
Is (XYZ[]/2)+(XYZ[]/2) a valid operation?
I.e., does it work the same way as (RGB[]/2)+(RGB[]/2)?
XYZ colour space
There are a couple of options:
do the image blending in XYZ space, or in some RGB space.
An RGB colour space is related to the XYZ space by a linear transformation, followed by a clipping operation to bring each component into the range [0, infinity).
If it weren't for the clipping operation, which is nonlinear, it wouldn't matter in which space the blending was done.
It's quite an intriguing idea to be able to do colour correction (white point settings) independently for each source image, before they are blended.
So if we have a toUnclippedRGB_i transformation, which converts from the XYZ colour space to the RGB colour space, but does not clip the components, then we could use
final_colour_rgb = clipComponents(
    toUnclippedRGB_1(source_colour_xyz_1)*b_1 +
    toUnclippedRGB_2(source_colour_xyz_2)*b_2 +
    ... +
    toUnclippedRGB_n(source_colour_xyz_n)*b_n
)
where source_colour_xyz_i is the XYZ pixel colour from the i-th image,
b_i is the blend factor for the i-th image (the blend factors don't necessarily have to sum to 1), and toUnclippedRGB_i is the XYZ->RGB transformation for the i-th image (whose transformation matrix will depend upon the whitepoint for the i-th image).
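Just to make the formula concrete, a per-pixel blend could look something like the sketch below. The Vec3, Mat3 and BlendSource types and the blendPixel function are made up for illustration; they aren't from any existing code, and how the whitepoint-dependent matrices get built is left out.
Code:
#include <algorithm>
#include <vector>

// Minimal 3-component colour and 3x3 matrix types, just for illustration.
struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };

static Vec3 mul(const Mat3& M, const Vec3& v)
{
    return Vec3{
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z
    };
}

// One entry per source image: its XYZ pixel colour at the current pixel,
// its blend factor b_i, and its whitepoint-dependent XYZ->RGB matrix.
struct BlendSource
{
    Vec3 xyz;     // source_colour_xyz_i
    double b;     // blend factor b_i
    Mat3 toRGB;   // toUnclippedRGB_i (matrix depends on the i-th whitepoint)
};

// Blend in unclipped RGB, then clip each component to [0, infinity) once at the end.
Vec3 blendPixel(const std::vector<BlendSource>& sources)
{
    Vec3 sum{0.0, 0.0, 0.0};
    for (const BlendSource& s : sources)
    {
        const Vec3 rgb = mul(s.toRGB, s.xyz);   // unclipped XYZ -> RGB
        sum.x += rgb.x * s.b;
        sum.y += rgb.y * s.b;
        sum.z += rgb.z * s.b;
    }
    // clipComponents(): clamp each component to [0, infinity)
    sum.x = std::max(0.0, sum.x);
    sum.y = std::max(0.0, sum.y);
    sum.z = std::max(0.0, sum.z);
    return sum;
}
The important bit is that the clipping happens once, after the weighted sum, rather than per source image.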
So the corresponding user interface would have, for each source image, a blend factor slider, and a set of whitepoint sliders.
Another thing to note is that the values in the .igi are un-normalised, in the sense that they are not divided by the average number of samples per pixel. You would probably want to do this division before the images are blended together, as a matter of convenience for the end-user, if the source images have different sampling levels.
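That division step is simple enough; a rough sketch, assuming the sample count and pixel count come from the .igi header (the normaliseBySamples helper and the flat float buffer are assumptions for illustration):
Code:
#include <cstddef>
#include <vector>

// Scale every component of an image buffer so the values become
// per-sample averages rather than raw accumulated sums.
// 'num_samples' and 'num_pixels' would come from the .igi header.
void normaliseBySamples(std::vector<float>& components, double num_samples, std::size_t num_pixels)
{
    if (num_pixels == 0 || num_samples <= 0.0)
        return;
    // 1 / (average samples per pixel)
    const float scale = static_cast<float>(static_cast<double>(num_pixels) / num_samples);
    for (float& c : components)
        c *= scale;
}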
o...k....
perhaps I'll stick to fiddling with the GUI
I'm currently doing the merge in XYZ, weighted average by the num_samples in each image.
I'll read your post again sometime when my head is not tired.
Code:
// Total samples across both images.
double total_samples = i_raw_image.header.num_samples + i_merge_image.header.num_samples;
// Weight each image by its share of the total sample count...
raw_image.scale((float)(i_raw_image.header.num_samples/total_samples));
merge_image.scale((float)(i_merge_image.header.num_samples/total_samples));
// ...then sum them and record the combined sample count.
raw_image.addImage(merge_image,0,0);
i_raw_image.header.num_samples = total_samples;
Well, if you want to set up the GUI for blending,
I could code the merging stuff; it will just be a bit of munging around the current code.
The ideal GUI for blending, IMHO, would have some kind of 'add image' button, then after an image is added, it would create the blend factor and whitepoint sliders. You would be able to blend an arbitrary number of images.
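The data model behind that could be as small as the sketch below; the BlendImageSettings and BlendSession names and the D65 default whitepoint are just assumptions to show the shape of it, not existing code.
Code:
#include <string>
#include <vector>

// One entry per added image: the GUI would expose a blend factor slider
// and whitepoint sliders for each of these.
struct BlendImageSettings
{
    std::string path;      // path to the source .igi
    double blend_factor;   // b_i
    double whitepoint_x;   // whitepoint chromaticity x
    double whitepoint_y;   // whitepoint chromaticity y
};

struct BlendSession
{
    std::vector<BlendImageSettings> images;   // arbitrary number of source images

    // Called when the 'add image' button is pressed: adds a new entry with
    // neutral defaults, for which the GUI then creates a set of sliders.
    void addImage(const std::string& path)
    {
        BlendImageSettings s;
        s.path = path;
        s.blend_factor = 1.0;
        s.whitepoint_x = 0.3127;   // D65 as a default guess
        s.whitepoint_y = 0.3290;
        images.push_back(s);
    }
};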