I'm not sure if anyone has seen this before, but there is a great little algorithm for a 3D bump-map effect. It looks just like displacement mapping but does everything with texture manipulation and doesn't use anywhere near the same resources. It even handles silhouettes!
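For anyone curious, here's the gist as I understand it from the papers: a per-pixel search along the view ray through a depth map, a coarse linear march followed by a binary refinement. This is a minimal C++ sketch, not the author's actual shader; sampleHeight() stands in for the depth-map texture lookup.
[code]
// Minimal sketch of the relief-mapping search. Works in tangent space:
// depth runs from 0.0 (polygon surface) down to 1.0 (deepest point).

struct Vec2 { float u, v; };

// uv:      texture coordinates where the view ray enters the surface
// viewDir: tangent-space view direction, scaled so t in [0,1] sweeps
//          the whole relief layer
float reliefIntersect(Vec2 uv, Vec2 viewDir,
                      float (*sampleHeight)(float u, float v))
{
    const int LINEAR_STEPS = 16;
    const int BINARY_STEPS = 6;

    // 1) Linear search: march until the ray first dips below the relief.
    float step = 1.0f / LINEAR_STEPS;
    float t = 0.0f;
    for (int i = 0; i < LINEAR_STEPS; ++i) {
        if (t >= sampleHeight(uv.u + viewDir.u * t, uv.v + viewDir.v * t))
            break;                     // ray is now under the surface
        t += step;
    }

    // 2) Binary search: refine the crossing inside the last interval.
    float lo = t - step, hi = t;
    for (int i = 0; i < BINARY_STEPS; ++i) {
        float mid = 0.5f * (lo + hi);
        if (mid < sampleHeight(uv.u + viewDir.u * mid, uv.v + viewDir.v * mid))
            lo = mid;                  // still above the surface
        else
            hi = mid;                  // already below it
    }
    return 0.5f * (lo + hi);           // intersection depth along the ray
}
[/code]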
So nifty. I can't tell whether the algorithm would work with MLT, but it seems to be all raytracing-based, so it should provide persistent, query-able surfaces.
This link shows some of the possibilities:
http://fabio.policarpo.nom.br/downloads.htm
Relief mapping?
I thought so too. The thing I find amazing is that it handles 3D intersections properly, even between two relief-mapped polygons.
And it can run in real time so I guess it doesn't use much in the way of resources.
Also, the multi-layer extension seems pretty cool. It allows one to create more than just height-mapped displacements; it can represent surfaces like woven baskets or chain-link structures. The author has even written a plugin implementation for 3DS, which makes it easy to explore.
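If I'm reading the multi-layer paper right, each texel packs up to four depth values into the RGBA channels, and the material counts as solid between the first pair and again between the second pair. Something like this (the Layers struct and names are just my illustration):
[code]
// Sketch of the multi-layer "inside" test, based on my reading of the
// quad-layer extension: each texel stores up to four depths, and the
// solid material occupies [d0,d1] and [d2,d3]. An unused layer pair
// can be flagged by setting it to 1.0 (maximum depth).

struct Layers { float d0, d1, d2, d3; };   // assumed texel layout

bool insideSurface(float rayDepth, const Layers& L)
{
    // Inside the first slab (e.g. the top of a woven strand)...
    if (rayDepth >= L.d0 && rayDepth <= L.d1) return true;
    // ...or inside the second slab (e.g. a strand passing underneath).
    if (rayDepth >= L.d2 && rayDepth <= L.d3) return true;
    return false;   // in one of the gaps: the ray keeps going
}

// The linear/binary search from single-layer relief mapping stays the
// same; only the "is the ray under the surface?" test changes to this,
// which is what lets rays pass through the holes in a basket weave.
[/code]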
I really wonder if Indigo could be made to work with such a scheme...
@zsouthboy - purist...
I'm not sure that's exactly what Deus said, but you're right about it just being trickery. The thing is, all of the 3D surface data is present in the scene, so it's hard to say it's not real 3D; it's really just an alternate, pixel-accurate displacement method. After all, isn't a "true 3D perspective transform" just another way to distort a texture map onto a screen? Anything that can map a 2D texture onto a 2D screen in the same way a polygon-based 3D projection would is a totally valid solution.
Basically, all that's happening is we're skipping the whole decimation/subdivision/polygon step and imbuing individual texels with independent z-depth to go with their already-phony normals and faked Phong speculars.
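Here's roughly what I mean by per-texel z, as a hedged sketch (the names and plumbing are mine, not the paper's): once the relief search returns a hit, the shader rewrites the depth it hands to the z-buffer, which is why the intersections and silhouettes come out right.
[code]
// After the relief search returns a hit depth t, the pixel shader
// overwrites the polygon's interpolated depth with the depth of the
// displaced point; the ordinary z-buffer then sorts it per pixel
// against everything else in the scene.

struct Vec3 { float x, y, z; };

// eyePos:     eye-space position on the base polygon for this pixel
// eyeNormal:  unit surface normal in eye space
// t:          relief-search hit depth in [0,1]
// scale:      world-space thickness of the relief layer
// zNear/zFar: the projection's clip planes (OpenGL-style, eye z < 0)
float displacedDepth(Vec3 eyePos, Vec3 eyeNormal, float t, float scale,
                     float zNear, float zFar)
{
    // Push the surface point down along the normal by the relief depth.
    float zEye = eyePos.z - eyeNormal.z * t * scale;

    // Standard perspective mapping from eye-space z to NDC depth...
    float zNDC = (zFar + zNear) / (zFar - zNear)
               + (2.0f * zFar * zNear) / ((zFar - zNear) * zEye);

    // ...and then to the [0,1] value written to the depth buffer.
    return 0.5f * zNDC + 0.5f;
}
[/code]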
The math is intact, and we didn't have to take the 2.5-million-polygon detour to get there.
So I'd say it's really no phonier than bump-mapping, and I love how robust it looks. Did you see the examples? It lets bumpy stone intersect a floor plane, giving a perfect, irregular stone edge.
Shadows, silhouettes, 3D intersections... I dunno. Seems like a win.
http://fabio.policarpo.nom.br/publications.htm#Papers has a few more links. A bit technical but the second paper really seems to convey the idea.

Huh, well... The drawing method is not the same, so it's not the same algorithm. Raytracing is ray-primitive intersection, not scanline processing. That would be like painting an egg green and calling it an apple.
That actually brings me to a question about the displacement mapping in Indigo. It seems like the geometry is subdivided in a preprocessing pass? I envisioned this feature as an "on-demand" subdivision process, to keep the memory footprint reasonable.
I would probably have implemented displacement mapping purely by calculating a bounding box for each poly at scene generation, then manually tracing the ray over the bitmap (using mip-maps at a distance), with no prior subdivision into polys required.
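Something like this, purely as a sketch of what I mean (the types and heightAt()/heightAbovePlane() helpers are made up for illustration):
[code]
// Rough sketch of the no-subdivision idea: at scene generation, grow
// each triangle's AABB by the maximum displacement along the normal; at
// trace time, march the ray inside that box, comparing its height above
// the base triangle against the displacement bitmap.

#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Scene generation: AABB of the triangle swept along its normal by
// [0, maxDisp], so no displaced point can escape the box.
void displacedBounds(const Vec3 p[3], Vec3 n, float maxDisp,
                     Vec3& lo, Vec3& hi)
{
    lo = {  1e30f,  1e30f,  1e30f };
    hi = { -1e30f, -1e30f, -1e30f };
    for (int i = 0; i < 3; ++i) {
        Vec3 base = p[i];
        Vec3 top  = add(p[i], mul(n, maxDisp));   // fully displaced vertex
        lo.x = std::min({ lo.x, base.x, top.x });
        lo.y = std::min({ lo.y, base.y, top.y });
        lo.z = std::min({ lo.z, base.z, top.z });
        hi.x = std::max({ hi.x, base.x, top.x });
        hi.y = std::max({ hi.y, base.y, top.y });
        hi.z = std::max({ hi.z, base.z, top.z });
    }
}

// Trace time (shape of it only): once the ray enters the box, step it
// and compare its height above the base plane with the bitmap:
//
//   for (t = tEnter; t < tExit; t += dt) {
//       P      = rayOrigin + rayDir * t;
//       (u, v) = texture coords of P projected onto the triangle;
//       if (heightAbovePlane(P) <= maxDisp * heightAt(u, v))
//           return hit at t;   // refine with a few bisection steps
//   }
//
// A mip level chosen from the ray footprint keeps distant lookups cheap.
[/code]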