Contents
26 Real-Time Ray Tracing
26.1 Ray Tracing Fundamentals
26.2 Shaders for Ray Tracing
26.3 Top and Bottom Level Acceleration Structures
26.4 Coherency
26.5 Denoising
26.6 Texture Filtering
26.7 Speculations
References
Bibliography
Index
Acknowledgments
Our sincere thanks to the following for helping out with images, proofreading, expert knowledge, and more: Kostas Anagnostou, Magnus Andersson, Colin Barré-Brisebois, Henrik Wann Jensen, Aaron Lefohn, Edward Liu, Ignacio Llamas, Rula Lober, Boyd Meeji, Jacob Munkberg, Jacopo Pantaleoni, Steven Parker, Tomasz Stachowiak, and Chris Wyman.

Since version 1.0, a few people have kindly provided us corrections for this chapter, namely Pontus Andersson and Aaryaman Vasishta.
Chapter 26
Real-Time Ray Tracing

I wanted change and excitement and to shoot off in all directions myself,
like the colored arrows from a Fourth of July rocket.
    Sylvia Plath
Compared to rasterization-based techniques, which are the topic of large parts of this book, ray tracing is a method that is more directly inspired by the physics of light. As such, it can generate substantially more realistic images. In the first edition of Real-Time Rendering, from 1999, we dreamed about reaching 12 frames per second for rendering an average frame of A Bug's Life (ABL) between 2007 and 2024. In some sense we were right. ABL used ray tracing for only a few shots where it was truly needed, e.g., reflection and refraction in a water droplet. However, recent advances in GPUs have made it possible to render game-like scenes with ray tracing in real time. For example, the cover of this book shows a scene rendered at about 20 frames per second using global illumination with something that starts to resemble feature-film image quality. Ray tracing will revolutionize real-time rendering.
In its simplest form, visibility determination for both rasterization and ray tracing can be described with double for-loops. Rasterization is

for (t in triangles)
    for (p in pixels)
        determine if p is inside t;

Ray tracing can, on the other hand, be described by

for (p in pixels)
    for (t in triangles)
        determine if ray through p hits t;
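To make the symmetry concrete, here is a small toy sketch (our own illustration, not code from the chapter) that runs both loop orders over the same 2D scene; `inside()` is a hypothetical point-in-triangle test built from edge functions, and both orders mark exactly the same set of covered pixels:

```python
def edge(a, b, p):
    # Signed area: which side of the edge a->b the point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    # p is inside the triangle if it lies on the same side of all three edges.
    a, b, c = tri
    d0, d1, d2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    return (d0 >= 0 and d1 >= 0 and d2 >= 0) or (d0 <= 0 and d1 <= 0 and d2 <= 0)

pixels = [(x, y) for y in range(8) for x in range(8)]
triangles = [((0, 0), (6, 0), (0, 6)), ((2, 2), (7, 7), (2, 7))]

# Rasterization order: outer loop over triangles, inner loop over pixels.
raster_hits = set()
for t in triangles:
    for p in pixels:
        if inside(t, (p[0] + 0.5, p[1] + 0.5)):
            raster_hits.add(p)

# Ray tracing order: outer loop over pixels, inner loop over triangles.
trace_hits = set()
for p in pixels:
    for t in triangles:
        if inside(t, (p[0] + 0.5, p[1] + 0.5)):
            trace_hits.add(p)

assert raster_hits == trace_hits  # same visibility result, different loop order
```

The two orders visit the same (pixel, triangle) pairs; what differs in practice is which loop the hardware and the acceleration structures are built around.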
So in a sense, these are both simple algorithms. However, to make either of these fast, you need much more code and hardware than can fit on a business card.^1 One important feature of a ray tracer using a spatial data structure, such as a bounding volume hierarchy (BVH), is that the running time for tracing a ray is O(log n), where n is the number of triangles in the scene. While this is an attractive feature of ray tracing, it is clear that rasterization is also better than O(n), since GPUs have occlusion culling hardware and rendering engines use frustum culling, deferred shading, and many other techniques that avoid fully processing every primitive. So, it is a complex matter to estimate the running time for rasterization in O notation. In addition, the texture units and the triangle traversal units of a GPU are incredibly fast and have been optimized for rasterization over a span of decades.

^1 The back of Paul Heckbert's business card from the 1990s had code for a simple, recursive ray tracer.
The important difference is that ray tracing can shoot rays in any direction, not just from a single point, such as from the eye or a light source. As we will see in Section 26.1, this flexibility makes it possible to recursively render reflections and refractions [89], and to fully evaluate the rendering equation (Equation 11.2). Doing so makes the images just look better. This property of ray tracing simplifies content creation as well, since less artist intervention is needed [20]. When using rasterization, artists often need to adjust their creations to work well with the rendering techniques being used. However, with ray tracing, noise may become apparent in the images. This can happen when area lights are sampled, when surfaces are glossy, when an environment map is integrated over, and when path tracing is used, for example.
That said, to make real-time ray tracing be the only rendering algorithm used for real-time applications, it is likely that several techniques, e.g., denoising, will be needed to make the images look good enough. Denoising attempts to remove the noise based on intelligent image averaging (Section 26.5). In the short term, clever combinations of rasterization and ray tracing are expected; rasterization is not going away any time soon. In the longer term, ray tracing scales well as processors become more powerful, i.e., the more compute and bandwidth that are provided, the better images we can generate with ray tracing by increasing the number of samples per pixel and the recursive ray depth. For example, due to the difficult indirect lighting involved, the image in Figure 26.1 was generated using 256 samples per pixel. Another image with high-quality path tracing is shown in Figure 26.6, where the number of samples per pixel ranges from 1 to 65,536.
Before diving into algorithms used in ray tracing, we refer you to several relevant chapters and sections. Chapter 11 on global illumination provides the theory surrounding the rendering equation (Equation 11.2), as well as a basic explanation of ray and path tracing in Section 11.2.2. Chapter 22 describes intersection methods, where ray against object tests are essential for ray tracing. Spatial data structures, which are used to speed up the visibility queries in ray tracing, are described in Section 19.1.1 and in Chapter 25, about collision detection.
26.1 Ray Tracing Fundamentals
Recall from Equation 22.1 that a ray is defined as

    q(t) = o + td,    (26.1)
Figure 26.1. A difficult scene with a large amount of indirect lighting, rendered with 256 samples per pixel, with 15 as ray depth, and a million triangles. Still, when zooming in, it is possible to see noise in this image. There are objects consisting of transparent plastic materials, glass, and several glossy metal surfaces as well, all of which are hard to render using rasterization. (Model by Boyd Meeji.)
where o is the ray origin and d is the normalized ray direction, with t then being the distance along the ray. Note that we use q here instead of r to distinguish it from the right vector r, used below. Ray tracing can be described by two functions called trace() and shade(). The core geometrical algorithm lies in trace(), which is responsible for finding the closest intersection between the ray and the primitives in the scene and returning the color of the ray by calling shade(). For most cases, we want to find an intersection with t > 0. For constructive solid geometry, we often want negative distance intersections (those behind the ray origin) as well.
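As a minimal sketch of Equation 26.1 (helper names are our own, not from the text), evaluating a point along a ray is a one-liner once the direction is normalized:

```python
import math

def normalize(v):
    # Scale v to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def ray_point(o, d, t):
    # q(t) = o + t*d, with d assumed to be normalized.
    return tuple(oi + t * di for oi, di in zip(o, d))

d = normalize((0.0, 0.0, 2.0))         # direction must be unit length
p = ray_point((1.0, 0.0, 0.0), d, 2.5)  # p is 2.5 units along d from o
```

Because d is normalized, t is a true distance, which is what the closest-hit comparison in trace() relies on.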
To find the color of a pixel, we shoot rays through a pixel and compute the pixel color as some weighted average of their results. These rays are called eye rays or camera rays. The camera setup is illustrated in Figure 26.2. Given an integer pixel coordinate (x, y), with x going right in the image and y going down, a camera position c, and a coordinate frame, r, u, v (right, up, and view), for the camera, and a screen resolution of w x h, the eye ray q(t) = o + td is computed as

    s(x, y) = af(2(x + 0.5)/w - 1)r - f(2(y + 0.5)/h - 1)u + v,
    d(x, y) = s(x, y) / ||s(x, y)||,    (26.2)

where the normalized ray direction d is affected by f = tan(φ/2), with φ being the camera's vertical field of view, and a = w/h is the aspect ratio. Note that the camera coordinate frame is left-handed, i.e., r points to the right, u is the up-vector, and v points away from the camera, toward the image plane, i.e., a similar setup to the one shown in Figure 4.5. Note that s is a temporary vector used in order to normalize d. The 0.5 added to the integer (x, y) position selects the center of each pixel, since (0.5, 0.5) is the floating-point center [33]. If we want to shoot rays anywhere in a pixel, we would instead represent the pixel location using floating-point values, and the 0.5 offsets are then not added.

Figure 26.2. A ray is defined by an origin o and a direction d. The ray tracing setup consists of constructing and shooting one (or more) rays from the viewpoint through each pixel. The ray shown in this figure hits two triangles, but if the triangles are opaque, only the first hit is of interest. Note that the vectors r (right), u (up), and v (view) are used to construct a direction vector d(x, y) of a sample position (x, y).
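Equation 26.2 translates directly into code. The sketch below uses our own naming (not from the text) and assumes row 0 is at the top, so y grows downward, which is why the u term is subtracted:

```python
import math

def eye_ray(x, y, w, h, c, r, u, v, fov_y):
    """Eye ray (origin, direction) through the center of pixel (x, y)."""
    f = math.tan(fov_y / 2.0)   # from the vertical field of view
    a = w / h                   # aspect ratio
    sx = a * f * (2.0 * (x + 0.5) / w - 1.0)
    sy = f * (2.0 * (y + 0.5) / h - 1.0)
    # s = sx*r - sy*u + v; the minus sign accounts for y growing downward.
    s = tuple(sx * ri - sy * ui + vi for ri, ui, vi in zip(r, u, v))
    norm = math.sqrt(sum(si * si for si in s))
    d = tuple(si / norm for si in s)
    return c, d                 # the ray origin o is the camera position c

# The single pixel of a 1x1 image is centered on the view axis,
# so the resulting direction is exactly the view vector v:
o, d = eye_ray(0, 0, 1, 1, (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), math.pi / 2)
# d == (0.0, 0.0, 1.0)
```

Jittering the sample within the pixel, as the text describes, would simply replace the `+ 0.5` offsets with floating-point sample positions.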
In the most naive implementation, trace() would loop over all the n primitives in the scene and intersect the ray with each of them, keeping the closest intersection with t > 0. Doing so yields O(n) performance, which is unacceptably slow except with a few primitives. To get to O(log n) per ray, we use a spatial acceleration data structure, e.g., a BVH or a k-d tree. See Section 19.1 for descriptions on how to intersection test a ray using a BVH.
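The naive O(n) version of trace() is short to write down. The sketch below uses our own helper names, and spheres instead of triangles to keep the intersection test brief; it loops over every primitive and keeps the closest hit with t > 0:

```python
import math

def hit_sphere(o, d, center, radius):
    """Smallest t above a small epsilon where the ray hits the sphere, else None."""
    oc = tuple(oi - ci for oi, ci in zip(o, center))
    b = sum(di * oci for di, oci in zip(d, oc))
    disc = b * b - (sum(x * x for x in oc) - radius * radius)
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in (-b - sq, -b + sq):    # nearer root first
        if t > 1e-4:
            return t
    return None

def trace_naive(o, d, spheres):
    """O(n): test every primitive and keep the closest positive hit."""
    closest_t, closest_s = None, None
    for s in spheres:
        t = hit_sphere(o, d, s["center"], s["radius"])
        if t is not None and (closest_t is None or t < closest_t):
            closest_t, closest_s = t, s
    return closest_t, closest_s

spheres = [{"center": (0, 0, 5), "radius": 1.0},
           {"center": (0, 0, 10), "radius": 1.0}]
t, s = trace_naive((0, 0, 0), (0, 0, 1), spheres)
# The nearer sphere wins: t == 4.0
```

A BVH replaces this linear scan with a hierarchy traversal that skips whole subtrees whose bounding volumes the ray misses, which is where the O(log n) behavior comes from.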
Using trace() and shade() to describe a ray tracer is simple. Equation 26.2 is used to create an eye ray from the camera position through a location inside a pixel.
Figure 26.3. A camera ray is created through a pixel and a first call to trace() starts the ray tracing process in that pixel. This ray hits the ground plane with a normal n. Then shade() is called at this first hit point, since the goal of trace() is to find the ray's color. The power of ray tracing comes from the fact that shade() can call trace() as a help when evaluating the BRDF at that point. Here this is done by shooting a shadow ray to the light source, which in this case is blocked by a triangle. In addition, assuming the surface is specular, a reflection ray is also shot, and this ray hits a circle. At this second hit point, shade() is called again to evaluate the shading. Again, a shadow and a reflection ray are shot from this new hit point.
This ray is fed to trace(), whose task is to find the color, or radiance (Chapter 8), that is returned along that ray. This is done by first finding the closest intersection along the ray and then computing the shading at that point using shade(). We illustrate this process in Figure 26.3. The power of this concept is that shade(), which should evaluate radiance, can do that by making new calls to trace(). These new rays that are shot from shade() using trace() can, for example, be used to evaluate shadows, recursive reflections and refractions, and diffuse ray evaluation. The term ray depth is used to indicate the number of rays that have been shot recursively along a ray path. The eye ray has a ray depth of 1, while the second trace() where the ray hits the circle in Figure 26.3 has ray depth 2.
One use of these new rays is to determine if the current point being shaded is in shadow with respect to a light source. Doing so generates shadows. We can also take the eye ray and the normal, n, at the intersection to compute the reflection vector. Shooting a ray in this direction generates a reflection on the surface, and can be done recursively. The same process can be used to generate refractive rays. Perfectly specular reflections and refractions along with sharp shadows are often referred to as Whitted ray tracing [89]. See Sections 9.5 and 14.5.2 for information on how to compute the reflection and refraction rays. Note that when an object has a different index of refraction than the medium in which the ray travels, the ray may be both reflected and refracted. See Figure 26.4. This type of recursion is something that rasterization-based methods struggle to solve by using various approximations to achieve only a subset of the effects that can be obtained with ray tracing. Ray casting, the idea of testing visibility between two points or in a direction, can be used for other graphical (and non-graphical) algorithms. For example, we could shoot a number of ambient occlusion rays from an intersection point to get an accurate estimate of that effect.

Figure 26.4. An incoming ray in the top left corner hits a surface whose index of refraction, n2, is larger than the index of refraction, n1, in which the ray travels, i.e., n2 > n1. Both a reflection ray and a refraction ray are generated at each hit point (circles).
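The reflection vector mentioned above follows the standard formula r = d - 2(d · n)n for an incoming direction d and unit normal n (see Section 9.5); a minimal sketch:

```python
def reflect(d, n):
    # r = d - 2(d.n)n; n must be unit length, and d points toward the surface.
    k = 2.0 * sum(di * ni for di, ni in zip(d, n))
    return tuple(di - k * ni for di, ni in zip(d, n))

# A ray falling straight down onto a floor with normal (0, 1, 0) bounces straight up:
r = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # → (0.0, 1.0, 0.0)
```

Shooting a new ray from the hit point in this direction (offset by a small epsilon to avoid self-intersection) is all that recursive specular reflection requires.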
The functions trace(), shade(), and rayTraceImage(), where the latter is a function that creates eye rays through each pixel, are used in the pseudocode that follows. These short pieces of code show the overall structure of a Whitted ray tracer, which can be used as a basis for many rendering variants, e.g., path tracing.

rayTraceImage()
{
    for (p in pixels)
        color of p = trace(eye ray through p);
}

trace(ray)
{
    pt = find closest intersection;
    return shade(pt);
}

shade(point)
{
    color = 0;
    for (l in light sources)
    {
        trace(shadow ray to l);
        color += evaluate BRDF;
    }
    return color;
}
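The pseudocode above can be fleshed out into a runnable sketch. The version below is our own toy interpretation, not the book's code: spheres stand in for the scene, shade() evaluates a Lambertian BRDF with one shadow ray per light, and there is no recursion for reflections or refractions:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def hit_sphere(o, d, center, radius):
    oc = sub(o, center)
    b = dot(d, oc)
    disc = b * b - (dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in (-b - sq, -b + sq):
        if t > 1e-4:            # epsilon avoids self-intersection
            return t
    return None

def trace(o, d, spheres, light):
    """Find the closest intersection and return the color of the ray."""
    best_t, best_s = None, None
    for s in spheres:
        t = hit_sphere(o, d, s["center"], s["radius"])
        if t is not None and (best_t is None or t < best_t):
            best_t, best_s = t, s
    if best_s is None:
        return (0.0, 0.0, 0.0)  # background color
    p = tuple(oi + best_t * di for oi, di in zip(o, d))
    return shade(p, best_s, spheres, light)

def shade(p, s, spheres, light):
    """Lambertian shading with one shadow ray toward a point light."""
    n = normalize(sub(p, s["center"]))
    to_light = sub(light, p)
    light_dist = math.sqrt(dot(to_light, to_light))
    l = normalize(to_light)
    for other in spheres:       # shadow ray: occluders before the light block it
        t = hit_sphere(p, l, other["center"], other["radius"])
        if t is not None and t < light_dist:
            return (0.0, 0.0, 0.0)
    lam = max(dot(n, l), 0.0)
    return tuple(lam * c for c in s["albedo"])

spheres = [{"center": (0, 0, 5), "radius": 1.0, "albedo": (1.0, 0.0, 0.0)}]
color = trace((0, 0, 0), (0, 0, 1), spheres, light=(0, 5, 0))
# A lit red sphere: color[0] > 0, while color[1] and color[2] stay 0.0
```

A full Whitted tracer would additionally have shade() call trace() recursively for reflection and refraction rays, with a ray-depth limit to terminate the recursion.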