Ray tracing (graphics)

In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane. The technique is capable of producing a very high degree of photorealism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as still images and film and television special effects, and more poorly suited for real-time applications like computer games, where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and chromatic aberration.


Algorithm overview

Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.

Scenes in raytracing are described mathematically by a programmer or by a visual artist (using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.

It may at first seem counterintuitive or "backwards" to send rays "away" from the camera, rather than "into" it (as actual light does in reality), but doing so is in fact many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. One technique that does begin by casting rays from the light sources is photon mapping (described below), and it takes much longer than a comparable ray trace.

Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. Once a ray has either undergone a maximum number of reflections or traveled a certain distance without intersection, it ceases to travel and the pixel's value is updated. The light intensity of this pixel is computed using a number of algorithms, which may include the classic recursive ray tracing algorithm described below and may also incorporate techniques such as radiosity.

Detailed description of ray tracing computer algorithm and its genesis

What happens in nature

In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). In reality, any combination of three things might happen with this light ray: absorption, reflection, and refraction. A surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Between absorption, reflection, and refraction, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to be 116%. An exception here is fluorescence, whereby higher non-visible light frequencies such as UV, which are radiated by many light sources to different degrees, can be converted by some materials to visible light. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, and reflective properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and thereby contributing to the final rendered image.

Ray casting algorithm

The first ray casting (versus ray tracing) algorithm used for rendering was presented by Arthur Appel in 1968. The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray – think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye normally sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms is its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
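
As a sketch of this idea, the following Python fragment casts one ray per pixel at a single sphere and shades it with a simple diffuse model under the no-shadow assumption described above. The single-sphere scene, the ASCII output, and all names are illustrative assumptions, not part of Appel's original formulation.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    def hit_sphere(s, d, c, r):
        # Smallest positive t with |s + t*d - c|^2 = r^2 (d must be a unit vector).
        v = tuple(si - ci for si, ci in zip(s, c))
        b = dot(v, d)
        disc = b * b - (dot(v, v) - r * r)
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t > 0 else None

    def ray_cast(width=24, height=12):
        center, radius = (0.0, 0.0, -3.0), 1.0
        light = normalize((1.0, 1.0, 1.0))        # direction toward the light
        for py in range(height):
            row = ""
            for px in range(width):
                # One ray per pixel, shot from the eye at the origin through
                # a virtual screen at z = -1.
                x = 2 * (px + 0.5) / width - 1
                y = 1 - 2 * (py + 0.5) / height
                d = normalize((x, y, -1.0))
                t = hit_sphere((0.0, 0.0, 0.0), d, center, radius)
                if t is None:
                    row += "."                    # ray hits the background
                else:
                    p = tuple(t * di for di in d)
                    n = normalize(tuple(p[i] - center[i] for i in range(3)))
                    # Lambertian shading; the light is assumed unblocked (no shadows).
                    row += "#" if dot(n, light) > 0.3 else "+"
            print(row)

    ray_cast()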

Ray casting for producing computer graphics was first used by scientists at Mathematical Applications Group, Inc. (MAGI) of Elmsford, New York. The company was created in 1966 to perform radiation exposure calculations for the Department of Defense. MAGI's software calculated not only how the gamma rays bounced off surfaces (ray casting for radiation had been done since the 1940s), but also how they penetrated and refracted within. These studies helped the government to assess certain military applications: constructing military vehicles that would protect troops from radiation, and designing re-entry vehicles for space exploration. Under the direction of Dr. Philip Mittelman, the scientists developed a method of generating images using the same basic software. In 1972, MAGI became a commercial animation studio. This studio used ray casting to generate 3-D computer animation for television commercials, educational films, and eventually feature films – they created much of the animation in the film "Tron" using ray casting exclusively. MAGI went out of business in 1985.

Ray tracing algorithm

The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms cast rays from the eye into the scene, but the rays were traced no further. Whitted continued the process. When a ray hits a surface, it could generate up to three new types of rays: reflection, refraction, and shadow. A reflected ray continues on in the mirror-reflection direction from a shiny surface. It is then intersected with objects in the scene; the closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material. To further avoid tracing all rays in a scene, a shadow ray is used to test if a surface is visible to a light. A ray hits a surface at some point. If the surface at this point faces a light, a ray (to the computer, a line segment) is traced between this intersection point and the light. If any opaque object is found in between the surface and the light, the surface is in shadow and so the light does not contribute to its shade. This new layer of ray calculation added more realism to ray traced images.
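
A minimal Python sketch of the shadow-ray test just described, under the assumption of a scene made of opaque spheres (the representation and all names are invented for illustration):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def in_shadow(point, light_pos, occluders, eps=1e-4):
        # Shadow ray: trace a segment from `point` toward `light_pos` and
        # report whether any opaque sphere in `occluders` (a list of
        # (center, radius) pairs) blocks it. A hit only counts if it lies
        # strictly between the surface and the light.
        to_light = tuple(l - p for l, p in zip(light_pos, point))
        dist = math.sqrt(dot(to_light, to_light))
        d = tuple(x / dist for x in to_light)                    # unit direction
        origin = tuple(p + eps * di for p, di in zip(point, d))  # avoid self-hit
        for c, r in occluders:
            v = tuple(o - ci for o, ci in zip(origin, c))
            b = dot(v, d)
            disc = b * b - (dot(v, v) - r * r)
            if disc < 0:
                continue                                         # sphere is missed
            t = -b - math.sqrt(disc)
            if eps < t < dist:
                return True                                      # light is blocked
        return False

    # A sphere halfway between the surface point and an overhead light blocks it:
    print(in_shadow((0, 0, 0), (0, 4, 0), [((0, 2, 0), 0.5)]))   # True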

Advantages over other rendering methods

Ray tracing's popularity stems from the fact that it simulates lighting more faithfully than other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming. The computational independence of each ray makes ray tracing amenable to parallelization.

Disadvantages

A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform anti-aliasing and improve image quality where needed. Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented, since the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required. The realism of all rendering methods, then, must be evaluated as an approximation to the equation, and in the case of ray tracing, it is not necessarily the most realistic. Other methods, including photon mapping, are based upon ray tracing for certain parts of the algorithm, yet give far better results.
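
For instance, a pixel can be anti-aliased by averaging several jittered eye rays, a minimal sketch of which follows in Python (the `trace` callback and all names are assumptions added for illustration):

    import random

    def supersample_pixel(trace, px, py, samples=16):
        # Average `samples` jittered rays through pixel (px, py). `trace(x, y)`
        # is assumed to return the color (a single float here, for brevity) of
        # the ray through continuous image-plane coordinates (x, y).
        total = 0.0
        for _ in range(samples):
            total += trace(px + random.random(), py + random.random())
        return total / samples

    # Stand-in trace function: a hard vertical edge at x = 9.5. The pixel that
    # straddles the edge averages out to roughly 0.5 instead of snapping to 0 or 1.
    edge = lambda x, y: 1.0 if x < 9.5 else 0.0
    print(round(supersample_pixel(edge, 9, 5, samples=256), 1))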

Reversed direction of traversal of scene by the rays

The process of shooting rays from the eye to the light source to render an image is sometimes referred to as "backwards ray tracing", since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term "backwards ray tracing" to refer to shooting rays from the lights and gathering the results. As such, it is clearer to distinguish "eye-based" versus "light-based" ray tracing.

While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length. (Eric P. Lafortune and Yves D. Willems, "Bi-Directional Path Tracing", Proceedings of Compugraphics '93, December 1993, pp. 145–153, http://www.graphics.cornell.edu/~eric/Portugal.html) (Péter Dornbach, "Implementation of bidirectional ray tracing algorithm", http://www.cescg.org/CESCG98/PDornbach/index.html, accessed 2008-06-11)

Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points. (Henrik Wann Jensen, "Global Illumination using Photon Maps", http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf) (Zack Waters, "Photon Mapping", http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html) The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
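
A toy sketch of the two passes in Python, assuming a point light above a flat floor at y = 0, crude direction sampling, and a fixed gather radius (all of which are illustrative simplifications of a real photon mapper):

    import math, random

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def photon_pass(light_pos, light_power, n_photons):
        # Pass 1: shoot photons from a point light into the lower hemisphere
        # and store (hit point, power) records where they land on the floor
        # plane y = 0. The resulting list is the (unstructured) "photon map".
        photons = []
        for _ in range(n_photons):
            d = (random.uniform(-1, 1), -random.uniform(0.1, 1), random.uniform(-1, 1))
            t = -light_pos[1] / d[1]         # parameter where the ray meets y = 0
            hit = (light_pos[0] + t * d[0], 0.0, light_pos[2] + t * d[2])
            photons.append((hit, light_power / n_photons))
        return photons

    def estimate_irradiance(photons, x, r=0.5):
        # Pass 2 (density estimation): sum the power of photons within radius
        # r of the query point x and divide by the area of the gathering disc.
        total = sum(power for hit, power in photons
                    if (hit[0] - x[0]) ** 2 + (hit[2] - x[2]) ** 2 < r * r)
        return total / (math.pi * r * r)

    photons = photon_pass((0.0, 2.0, 0.0), 100.0, 50000)
    # The floor directly under the light collects more photons than a point
    # off to the side, so its estimated irradiance is higher:
    print(estimate_irradiance(photons, (0.0, 0.0, 0.0)),
          estimate_irradiance(photons, (3.0, 0.0, 3.0)))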

An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays. (Eric Veach and Leonidas J. Guibas, "Metropolis Light Transport", http://graphics.stanford.edu/papers/metro/metro.pdf)

Algorithm: classical recursive ray tracing

    For each pixel in image {
        Create ray from eyepoint passing through this pixel
        Initialize NearestT to INFINITY and NearestObject to NULL
        For every object in scene {
            If ray intersects this object {
                If t of intersection is less than NearestT {
                    Set NearestT to t of the intersection
                    Set NearestObject to this object
                }
            }
        }
        If NearestObject is NULL {
            Fill this pixel with background color
        } Else {
            Shoot a ray to each light source to check if in shadow
            If surface is reflective, generate reflection ray: recurse
            If surface is transparent, generate refraction ray: recurse
            Use NearestObject and NearestT to compute shading function
            Fill this pixel with color result of shading function
        }
    }
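
The pseudocode translates almost line for line into a runnable sketch. The following Python version is offered as an illustration only: the sphere-only scene, the material fields ("color", "reflectivity"), the single point light, and the depth cap are assumptions added to make it self-contained, and refraction is omitted for brevity.

    import math

    BACKGROUND = (0.05, 0.05, 0.05)
    MAX_DEPTH = 3                      # the "maximum number of reflections" cap

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def scale(v, k): return tuple(x * k for x in v)
    def normalize(v): return scale(v, 1 / math.sqrt(dot(v, v)))

    def intersect(origin, d, sphere):
        # Nearest positive t where the ray origin + t*d meets the sphere, or None.
        v = sub(origin, sphere["center"])
        b = dot(v, d)
        disc = b * b - (dot(v, v) - sphere["radius"] ** 2)
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t > 1e-4 else None

    def trace(origin, d, scene, light, depth=0):
        # Find the nearest intersected object, exactly as in the pseudocode.
        nearest_t, nearest_obj = math.inf, None
        for obj in scene:
            t = intersect(origin, d, obj)
            if t is not None and t < nearest_t:
                nearest_t, nearest_obj = t, obj
        if nearest_obj is None:
            return BACKGROUND
        p = tuple(o + nearest_t * di for o, di in zip(origin, d))
        n = normalize(sub(p, nearest_obj["center"]))
        # Shadow ray (simplified: ignores occluders lying beyond the light).
        ldir = normalize(sub(light, p))
        lit = all(intersect(p, ldir, o) is None for o in scene)
        diffuse = max(0.0, dot(n, ldir)) if lit else 0.0
        color = scale(nearest_obj["color"], diffuse)
        # Reflection ray: recurse up to the fixed depth cap.
        if nearest_obj["reflectivity"] > 0 and depth < MAX_DEPTH:
            rdir = sub(d, scale(n, 2 * dot(n, d)))
            refl = trace(p, rdir, scene, light, depth + 1)
            color = tuple(c + nearest_obj["reflectivity"] * rc
                          for c, rc in zip(color, refl))
        return color

    scene = [{"center": (0, 0, -3), "radius": 1.0,
              "color": (0.2, 0.4, 0.9), "reflectivity": 0.3}]
    print(trace((0, 0, 0), (0, 0, -1), scene, light=(5, 5, 0)))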

Consider a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm; recall that a diffuse surface reflects light in all directions.

First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surfaces and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.

In real time

The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer graphics conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel network distributed ray-tracing system that achieved several frames per second in rendering performance. (See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp. 86–98.) This performance was attained by leveraging the highly optimized yet platform-agnostic LIBRT ray-tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared-memory parallel machines over a commodity network. BRL-CAD's ray-tracer, including the REMRT/RT tools, continues to be available and developed today as open source software. ("BRL-CAD Overview", http://brlcad.org/overview.html, accessed 2007-09-17)

Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s. (Piero Foscari, "The Realtime Raytracing Realm", ACM Transactions on Graphics resources, http://www.acm.org/tog/resources/RTNews/demos/overview.htm, accessed 2007-09-17)

The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation-based approach for interactive 3D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed at Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage. (Mark Ward, "Rays light up life-like graphics", BBC News, March 16, 2007, http://news.bbc.co.uk/1/hi/technology/6457951.stm, accessed 2007-09-17)

On June 12, 2008, Intel demonstrated Enemy Territory: Quake Wars (ETQW) rendered via ray tracing, running in basic HD (720p) resolution, the first time the company was able to render the game using a standard video resolution instead of 1024 × 1024 or 512 × 512 pixels. ETQW operated at 14–29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Tigerton system running at 2.93 GHz. (Theo Valich, "Intel converts ET: Quake Wars to ray-tracing", TG Daily, June 12, 2008, http://www.tgdaily.com/html_tmp/content-view-37925-113.html, accessed 2008-06-16)

Example

As a demonstration of the principles involved in ray tracing, let us consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

:$\left\Vert \mathbf{x} - \mathbf{c} \right\Vert^2 = r^2.$

Any point on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

:$\mathbf{x} = \mathbf{s} + t\mathbf{d},$

where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a light source) and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

:$\left\Vert \mathbf{s} + t\mathbf{d} - \mathbf{c} \right\Vert^2 = r^2.$

Let $\mathbf{v} \stackrel{\mathrm{def}}{=} \mathbf{s} - \mathbf{c}$ for simplicity; then

:$\left\Vert \mathbf{v} + t\mathbf{d} \right\Vert^2 = r^2$

:$\mathbf{v}^2 + t^2\mathbf{d}^2 + 2t\,(\mathbf{v} \cdot \mathbf{d}) = r^2$

:$(\mathbf{d}^2)\,t^2 + (2\,\mathbf{v} \cdot \mathbf{d})\,t + (\mathbf{v}^2 - r^2) = 0.$

Knowing that $\mathbf{d}$ is a unit vector (so that $\mathbf{d}^2 = 1$) allows this minor simplification:

:$t^2 + (2\,\mathbf{v} \cdot \mathbf{d})\,t + (\mathbf{v}^2 - r^2) = 0.$

This quadratic equation has solutions

:$t = \frac{-(2\,\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(2\,\mathbf{v} \cdot \mathbf{d})^2 - 4(\mathbf{v}^2 - r^2)}}{2} = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - (\mathbf{v}^2 - r^2)}.$

The two values of $t$ found by solving this equation are those for which $\mathbf{s} + t\mathbf{d}$ gives the points where the ray intersects the sphere.

If one (or both) of them is negative, then the corresponding intersection does not lie on the ray but on the opposite half-line (i.e. the one starting from $\mathbf{s}$ with opposite direction).

If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us now suppose that there is at least one positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.

The normal to the sphere is simply

:$\mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\left\Vert \mathbf{y} - \mathbf{c} \right\Vert},$

where $\mathbf{y} = \mathbf{s} + t\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

:$\mathbf{r} = \mathbf{d} - 2\,\frac{(\mathbf{n} \cdot \mathbf{d})\,\mathbf{n}}{\left\Vert \mathbf{n} \right\Vert^2}.$

Thus the reflected ray has equation

:$\mathbf{x} = \mathbf{y} + t\mathbf{r}.$

Now we only need to compute the intersection of the latter ray with our field of view to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and that of the sphere are combined by the reflection.

This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
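
The derivation maps directly onto code. The following Python sketch mirrors the symbols used above (the helper names and the example values are arbitrary illustrative choices):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def ray_sphere(s, d, c, r):
        # Solve t^2 + 2(v . d) t + (v . v - r^2) = 0 with v = s - c and unit d.
        # Returns the two roots (t1 <= t2), or None if the discriminant is
        # negative, i.e. the ray misses the sphere.
        v = tuple(si - ci for si, ci in zip(s, c))
        b = dot(v, d)
        disc = b * b - (dot(v, v) - r * r)
        if disc < 0:
            return None
        root = math.sqrt(disc)
        return (-b - root, -b + root)

    def reflect(d, n):
        # Reflection of direction d about the unit normal n: r = d - 2 (n . d) n.
        k = 2 * dot(n, d)
        return tuple(di - k * ni for di, ni in zip(d, n))

    # A ray from the origin along +z toward a sphere centered at (0, 0, 5)
    # with radius 1 enters at t = 4 and exits at t = 6:
    print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # (4.0, 6.0)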

See also

*Ray casting
*Scanline rendering
*Beam tracing
*Cone tracing
*Global illumination
*Radiosity
*Photon mapping
*Distributed ray tracing
*Ray tracing hardware
*Line–sphere intersection
*Specular reflection

Software

*3Delight
*Aqsis
*ASAP
*Blender
*Brazil r/s
*BRL-CAD
*Bryce
*Cinema 4D
*Gelato
*Holomatix Rendition
*Indigo Renderer
*Kerkythea
*LuxRender
*Mental ray
*Pixie
*PhotoRealistic RenderMan
*POV-Ray
*Radiance
*Sunflow
*TurboSilver
*V-Ray
*YafRay

External links

* [http://www.scratchapixel.com/tutorials/home/home.php Ray-tracing and 3D rendering tutorials] - free tutorials on ray-tracing and rendering techniques with source code
* [http://www.codermind.com/articles/Raytracer-in-C++-Introduction-What-is-ray-tracing.html What is ray tracing?] - An ongoing tutorial of ray tracing in several parts, from basic to more advanced topics
* [http://www.pcper.com/article.php?aid=506 Ray tracing and Gaming - One Year Later] - Daniel Pohl's follow up on the benefits of ray tracing over rasterization for games
* [http://www.raytracingnews.org/ The Ray Tracing News] – short research articles and new links to resources
* [http://www.few.vu.nl/~kielmann/theses/avdploeg.pdf Interactive Ray Tracing: The replacement of rasterization?] – thesis about real time ray tracing and its state in December 2006
* [http://graphics.cs.uni-sb.de/RTGames/ Games using realtime raytracing]
* [http://www.nirenstein.com/e107/page.php?11 SSRT] – C++ source code for a Monte Carlo ray/path tracer (supporting GI) - written with ease of understanding in mind.
* [http://www.devmaster.net/articles/raytracing_series/part1.php A series of tutorials on implementing a raytracer using C++]
* [http://irtc.org/ The Internet Ray Tracing Competition] – still and animated categories
* [http://www.pcper.com/article.php?aid=334 Quake 4 Raytraced by Daniel Pohl] – good information on potential for real-time ray traced games

