
Figure 1: backward or eye tracing consists of tracing rays from the eye through the center of each pixel of the image.

If the ray intersects an object from the scene, the pixel the ray passes through is set to the color of the object at that intersection point. Naturally, the process of creating an image starts with constructing these rays, which we call primary or camera rays (primary because they are the first rays we cast into the scene; secondary rays, such as shadow rays, will be discussed later).

What do we know about these rays that would help us construct them? We know that they start from the camera's origin. In almost all 3D applications, the default position of a camera when it is created is the origin of the world, the point with coordinates (0, 0, 0). Remember from the lesson 3D Viewing: the Pinhole Camera Model that the origin of the camera can be seen as the aperture of a pinhole camera (which is also the center of projection). The film of a real-world pinhole camera is located behind the aperture, which by geometrical construction causes the light rays to form an inverted image of the scene. This inversion can be avoided if the film plane lies on the same side as the scene (in front of the aperture rather than behind it). By convention, in ray tracing, this image plane is placed exactly 1 unit away from the camera's origin (this distance will never change, and we will explain why further down). By convention, we will also orient the camera along the negative z-axis (the camera's default orientation is left to the developer's choice; however, cameras are generally oriented along either the positive or the negative z-axis. RenderMan, Maya, PBRT, and OpenGL align the camera along the negative z-axis, and we suggest developers follow the same convention). Finally, to make the beginning of the demonstration simpler, we will assume that our rendered image is square (the width and the height of the image in pixels are the same).
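To make this concrete, here is a minimal sketch of the backward-tracing loop in C++. The names Vec3f, Ray, computeRayDirection and castRay are hypothetical placeholders rather than any library's API; the only point is that every primary ray starts at the camera's origin and passes through the center of one pixel.

```cpp
#include <vector>

// A minimal backward-tracing skeleton; all names are illustrative.
struct Vec3f { float x = 0, y = 0, z = 0; };
struct Ray { Vec3f origin, direction; };

// Placeholder: the actual direction computation is derived further down.
Vec3f computeRayDirection(int x, int y, int width, int height)
{
    return Vec3f{0, 0, -1}; // stub: the camera looks down the negative z-axis
}

// Placeholder shading: a real implementation would intersect the scene and
// return the color of the nearest object hit by the ray.
Vec3f castRay(const Ray& ray)
{
    return Vec3f{0.2f, 0.2f, 0.2f}; // background color
}

void render(int width, int height, std::vector<Vec3f>& framebuffer)
{
    Vec3f cameraOrigin{0, 0, 0}; // the camera sits at the world origin
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Ray primaryRay;
            primaryRay.origin = cameraOrigin; // every primary ray starts here
            primaryRay.direction = computeRayDirection(x, y, width, height);
            framebuffer[y * width + x] = castRay(primaryRay);
        }
    }
}
```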
Figure 5: converting the coordinates of a point in the middle of a pixel to world coordinates requires a few steps. The coordinates of this point are first expressed in raster space (the pixel coordinates plus an offset of 0.5), then converted to NDC space (the coordinates are remapped to the range [0,1]), then converted to screen space (the NDC coordinates are remapped to the range [-1,1]).

We first need to normalize the pixel position using the frame's dimensions. The new normalized coordinates of the pixel are said to be defined in NDC space (which stands for Normalized Device Coordinates). Applying the final camera-to-world 4x4 transformation matrix then converts the coordinates from screen space to world space.
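Under the assumptions above (square image, NDC in [0,1], screen space in [-1,1], image plane 1 unit down the negative z-axis), the conversion chain can be sketched as follows; it fills in the computeRayDirection stub from the earlier skeleton. The Matrix44f type and its multDirMatrix method are illustrative stand-ins for whatever matrix class a renderer uses, and Vec3f is redeclared so the snippet is self-contained.

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

struct Matrix44f
{
    float m[4][4]; // row-major camera-to-world matrix
    // Transform a direction (row) vector: the translation row is ignored.
    Vec3f multDirMatrix(const Vec3f& v) const
    {
        return Vec3f{
            v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0],
            v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1],
            v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2]};
    }
};

Vec3f computeRayDirection(int x, int y, int width, int height,
                          const Matrix44f& cameraToWorld)
{
    // Raster space -> NDC space: move to the pixel center, then normalize
    // by the frame dimensions so the coordinates fall in [0,1].
    float ndcX = (x + 0.5f) / width;
    float ndcY = (y + 0.5f) / height;
    // NDC space -> screen space: remap [0,1] to [-1,1]. The y-coordinate
    // is flipped because raster coordinates grow downward.
    float screenX = 2 * ndcX - 1;
    float screenY = 1 - 2 * ndcY;
    // The point lies on the image plane, 1 unit along the negative z-axis.
    Vec3f dirCamera{screenX, screenY, -1};
    // Screen (camera) space -> world space via the camera-to-world matrix.
    Vec3f dirWorld = cameraToWorld.multDirMatrix(dirCamera);
    // Normalize so the ray direction is a unit vector.
    float len = std::sqrt(dirWorld.x * dirWorld.x +
                          dirWorld.y * dirWorld.y +
                          dirWorld.z * dirWorld.z);
    return Vec3f{dirWorld.x / len, dirWorld.y / len, dirWorld.z / len};
}
```

Note that with the image plane at a distance of 1 and screen coordinates spanning [-1,1], this setup implicitly corresponds to a 90-degree field of view; scaling the screen coordinates (for instance by the tangent of half the desired view angle) is one common way to generalize it.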
