programming a raytracer
right....
as for polygon intersection- i'm too tired right now to go in and start coding randomly, but i have an idea for an algorithm to test if a point is in a polygon and would like to know if it makes sense before i do anything. (i'm also too tired to read articles where the letters don't make much sense.)
so, for a point that lies on the plane of the triangle:
test the distance from that point to each vertex of the triangle; if the point is further from any vertex than the distance between any two vertices of the triangle, the point is outside the triangle.
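in code, the idea would be roughly this (just a sketch to show what i mean - the double[] point representation is only for illustration, not the actual classes):
Code:
    // distance between two points given as {x, y, z}
    static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return Math.sqrt(dx*dx + dy*dy + dz*dz);
    }

    // the proposed test: the point is rejected if it is further from any
    // vertex than the longest distance between two vertices of the triangle
    static boolean passesDistanceTest(double[] p, double[] v0, double[] v1, double[] v2) {
        double longest = Math.max(dist(v0, v1), Math.max(dist(v1, v2), dist(v2, v0)));
        return dist(p, v0) <= longest
            && dist(p, v1) <= longest
            && dist(p, v2) <= longest;
    }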
There are quite a lot of papers on this subject. I used a very slow algorithm for a while, and now I use something based on barycentric co-ordinates. Here's a code fragment...
Code:
public final IntersectionState intersect(Ray ray, double tMax) {
    // Intersect the ray with the triangle's plane: t = (p0.n - o.n) / (v.n)
    //double k = p0.dotProduct(this.n);
    double k = p0.x*this.n.x + p0.y*this.n.y + p0.z*this.n.z;
    double rvn = ray.v.x*this.n.x + ray.v.y*this.n.y + ray.v.z*this.n.z;
    double ron = ray.o.x*this.n.x + ray.o.y*this.n.y + ray.o.z*this.n.z;
    double t = (k - ron) / rvn;
    if (Double.isNaN(t)) {
        return null;
    }
    if (t < Const.EPSILON || t > tMax) {
        return null;
    }
    //Vector p = ray.o.add(ray.v.scale(t));
    double px = ray.o.x + (ray.v.x * t);
    double py = ray.o.y + (ray.v.y * t);
    double pz = ray.o.z + (ray.v.z * t);
    //Vector w = new Vector(p0, p);
    double wx = px - p0.x;
    double wy = py - p0.y;
    double wz = pz - p0.z;
    //Vector u = new Vector(p0, p1);
    double ux = p1.x - p0.x;
    double uy = p1.y - p0.y;
    double uz = p1.z - p0.z;
    //Vector v = new Vector(p0, p2);
    double vx = p2.x - p0.x;
    double vy = p2.y - p0.y;
    double vz = p2.z - p0.z;
    // Barycentric test: express w in terms of the edge vectors u and v
    double uv = ux*vx + uy*vy + uz*vz;
    double uv2 = uv*uv;
    double uu = ux*ux + uy*uy + uz*uz;
    double vv = vx*vx + vy*vy + vz*vz;
    double den = uv2 - uu*vv;
    double wu = wx*ux + wy*uy + wz*uz;
    double wv = wx*vx + wy*vy + wz*vz;
    double s = (uv*wv - vv*wu)/den;
    double r = (uv*wu - uu*wv)/den;
    if (s < 0.0 || r < 0.0 || (s+r) > 1.0) {
        return null;
    }
    return new IntersectionState(this, t);
}
p0, p1 and p2 are the triangle vertices. n is the geometric normal (the smoothed normals are calculated later, and only if this intersection is determined to be the closest one).
The (s < 0.0) test could also be performed before calculating r for a tiny speed-up.
Also, den should be calculated as 1.0 / den and then multiplied later (rather than divided) for extra speed.
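Putting those two tweaks together, the tail of the method might look roughly like this (just a sketch, reusing the variable names from the fragment above):
Code:
    double invDen = 1.0 / (uv2 - uu*vv);        // one divide instead of two
    double s = (uv*wv - vv*wu) * invDen;
    if (s < 0.0) {
        return null;                            // early out before computing r
    }
    double r = (uv*wu - uu*wv) * invDen;
    if (r < 0.0 || (s + r) > 1.0) {
        return null;
    }
    return new IntersectionState(this, t);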
I think this algorithm is similar to that originally used by Ingo Wald for his OpenRT realtime ray tracing architecture.
I noticed a huge speed improvement by inlining all the dot products ... in a complex scene, this method dominates so the overhead of calling a separate method every time a dot product is required becomes significant.
You can also see that 'k' can be pre-calculated and stored in the triangle data, at the expense of memory usage.
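For instance (a sketch only - caching the value in a field like this is an assumption, not the actual code):
Code:
    // in the triangle's constructor: cache k = p0 . n once per triangle
    this.k = p0.x*n.x + p0.y*n.y + p0.z*n.z;

    // intersect() then just reads the cached value
    double t = (this.k - ron) / rvn;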
Ian.
What changes exactly ... does it fail to detect the intersection? The above algorithm isn't sensitive to vertex order, so I'd be surprised (I just did a quick test to confirm it). The only thing that changes when swapping vertices around is the direction of the geometric normal.
Are you doing back-face culling (i.e. are you rejecting the intersection if it hits the back of the surface)? If so, I suggest you switch this off, otherwise you won't be able to render thin surfaces.
Ian.
nah, no back-face culling- i have it set up so that if it hits the back of the surface the normal is negated.
the problem is that when i switch the vertices around, the triangle changes position (except for point 0) and changes shape. i tried putting spheres at the same points as the verts, but the only vert that's ever in the correct place is vert 0. otherwise, i'm very happy with how it's working: it's detecting a triangle (near the place where i want it, anyway) and detecting the correct normal, so in general YAY.
also, it might be noted that the shadow changes the most when changes are made to the triangle intersection algorithm.
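the normal negation mentioned above amounts to something like this (a sketch, using the same x/y/z style as the fragment earlier in the thread):
Code:
    // flip the geometric normal when the ray hits the back face,
    // i.e. when the ray direction and the normal point the same way
    double ndotv = this.n.x*ray.v.x + this.n.y*ray.v.y + this.n.z*ray.v.z;
    double nx = this.n.x, ny = this.n.y, nz = this.n.z;
    if (ndotv > 0.0) {
        nx = -nx; ny = -ny; nz = -nz;   // shade with the flipped normal
    }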
That's excellent progress then.
Could the problem be normalised (or non-normalised) ray direction vectors? None of my intersection code assumes a normalised vector (it makes complex CSG intersection testing easier if ray vectors are allowed to be greater than 1 in length).
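If normalisation is the suspect, the direction can be forced to unit length before the test (a sketch only, assuming the Vector fields are writable):
Code:
    // normalise ray.v so that t is measured in world-space distance units
    double len = Math.sqrt(ray.v.x*ray.v.x + ray.v.y*ray.v.y + ray.v.z*ray.v.z);
    ray.v.x /= len;
    ray.v.y /= len;
    ray.v.z /= len;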
Ian.
really? i'm pretty sure my direction vectors are normalised. either that, or they are less than one by default due to the fact that they aren't based on a distance but a direction. in that case, i'll try scaling it by a couple hundred or so before the check.
ok, that's not working, it's just changing the color of my triangle (no clue how that's happening).
figured it out!!!!
in my u, v and w i was doing the subtraction wrong- i was calling a method to create a vector based on two points, but i was using the wrong points, and i couldn't do it with a point and a vector.
oodmb wrote: now for some reflection equations-- i have reflection down, now how do i mix it with the surface color and stuff...
Use fresnel attenuation for maximum realism.
Assuming that you want to make a shader for a diffuse substrate underneath a specular reflection coating...
Calculate the fresnel term 'f' for the reflected ray and use it to calculate the actual amount of reflected radiance.
For the diffuse substrate, calculate the fresnel term for each light sample direction and then multiply the light's irradiance by WHITE - f. This means a light shining on the substrate at a very shallow angle will hardly contribute at all (because most of it bounces off due to specular reflection and very little passes through).
That way you'll correctly simulate strong reflections at oblique viewing angles, and also the attenuation of the direct light on the substrate.
Best way to work out the reflected colour multiplier is as follows:
f = WHITE - "reflect albedo"
f = f * fresnel attenuation (based on normal, IOR and angle of reflection)
f = f + "reflect albedo"
The "reflect albedo" is simply the normal reflectance colour (i.e. how much light is reflected at 0 degrees to the normal).
Exactly the same approach can be applied to a Phong shader with a diffuse substrate.
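As a rough sketch of that recipe (using the Schlick weight (1 - cos theta)^5 as the fresnel attenuation factor is an assumption on my part - the real term would depend on the IOR - and the double[3] RGB colours are only for illustration, not the actual classes):
Code:
    static final double[] WHITE = {1.0, 1.0, 1.0};

    // f = (WHITE - reflectAlbedo) * fresnelAttenuation + reflectAlbedo
    static double[] reflectMultiplier(double[] reflectAlbedo, double cosTheta) {
        double atten = Math.pow(1.0 - cosTheta, 5.0);   // Schlick weight (assumed)
        double[] f = new double[3];
        for (int i = 0; i < 3; i++) {
            f[i] = (WHITE[i] - reflectAlbedo[i]) * atten + reflectAlbedo[i];
        }
        return f;
    }

    // direct light on the diffuse substrate is scaled by (WHITE - f),
    // with f evaluated for the light sample's direction
    static double[] substrateLight(double[] irradiance, double[] reflectAlbedo, double cosLight) {
        double[] f = reflectMultiplier(reflectAlbedo, cosLight);
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = irradiance[i] * (WHITE[i] - f[i]);
        }
        return out;
    }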
Did that make sense?

Ian.