CHAPTER 8 - Implementation 4
- Learn to use the A component in RGBA color for
- Blending for translucent surfaces
- Compositing images
- Antialiasing
Opacity and Transparency
- Opaque surfaces permit no light to pass through
- Transparent surfaces permit all light to pass
- Translucent surfaces pass some light
- translucency = 1 - opacity
Physical Models
- Dealing with translucency in a physically correct manner is difficult due to
- the complexity of the internal interactions of light and matter
- the use of a pipeline renderer
Writing Model
- Use the A component of RGBA (RGBα) color to store opacity
- During rendering we can expand our writing model to use RGBA values
Blending Equation
- We can define source and destination blending factors for each RGBA component
s = [sr, sg, sb, sa]
d = [dr, dg, db, da]
- Suppose that the source and destination colors are
b = [br, bg, bb, ba]
c = [cr, cg, cb, ca]
- Blend as
c' = [br sr + cr dr, bg sg + cg dg, bb sb + cb db, ba sa + ca da]
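As a concrete (non-OpenGL) illustration, the blend can be computed per component as in this sketch; the struct and function names are hypothetical:
typedef struct { float r, g, b, a; } Color4;

/* c' = b*s + c*d, computed independently for each RGBA component */
Color4 blend(Color4 b, Color4 s, Color4 c, Color4 d)
{
    Color4 out;
    out.r = b.r * s.r + c.r * d.r;
    out.g = b.g * s.g + c.g * d.g;
    out.b = b.b * s.b + c.b * d.b;
    out.a = b.a * s.a + c.a * d.a;
    return out;
}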
OpenGL Blending and Compositing
- Must enable blending and pick source and destination factors
- glEnable(GL_BLEND)
- glBlendFunc(source_factor, destination_factor)
- Only certain factors supported
- GL_ZERO, GL_ONE
- GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA
- GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA
- See Redbook for complete list
Example
- Suppose that we start with the opaque background color (R0, G0, B0, 1)
- This color becomes the initial destination color
- We now want to blend in a translucent polygon with color (R1, G1, B1, α1)
- Select GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination blending factors
R'1 = α1 R1 + (1 - α1) R0, ...
- Note this formula is also correct if the polygon is fully opaque (α1 = 1) or fully transparent (α1 = 0)
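In code, this example might be set up as follows (a sketch; R0, G0, B0, R1, G1, B1, and alpha1 stand for the chosen color values):
glClearColor(R0, G0, B0, 1.0);     /* opaque background = initial destination */
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(R1, G1, B1, alpha1);     /* translucent source color */
/* ... draw the polygon ... */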
Clamping and Accuracy
- All the components (RGBA) are clamped to stay in the range [0, 1]
- However, in a typical system, RGBA values are only stored to 8 bits
- Can easily lose accuracy if we add many components together
- Example: add together n images
- Divide all color components by n to avoid clamping
- Blend with source factor = 1, destination factor = 1
- But division by n loses bits
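A sketch of the n-image idea; draw_image() is a hypothetical helper that draws image i with its color components already divided by n:
int i;
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   /* source factor = 1, destination factor = 1 */
glClear(GL_COLOR_BUFFER_BIT);
for (i = 0; i < n; i++)        /* n = number of images being added */
    draw_image(i);             /* colors pre-scaled by 1/n, so the sum cannot clamp */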
Order Dependency (see code alpha.c, alpha3d.c)
- Is this image correct?
- Probably not
- Polygons are rendered in the order they pass down the pipeline
- Blending functions are order dependent
Opaque and Translucent Polygons
- Suppose that we have a group of polygons some of which are opaque and some translucent
- How do we use hidden-surface removal?
- Opaque polygons block all polygons behind them and affect the depth buffer
- Translucent polygons should not affect the depth buffer
- Render translucent polygons with
glDepthMask(GL_FALSE)
which makes the depth buffer read-only
- Sort polygons first to remove order dependency (see the sketch below)
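A sketch of the resulting rendering order; the draw_* helpers are hypothetical:
draw_opaque_polygons();                       /* depth buffer fully active */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                        /* depth buffer is now read-only */
draw_translucent_polygons_back_to_front();    /* sorted to remove order dependency */
glDepthMask(GL_TRUE);                         /* restore depth writes */
glDisable(GL_BLEND);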
Fog
- We can composite with a fixed color and have the blending factors depend on depth
- Simulates a fog effect
- Blend source color Cs and fog color Cf by
Cs' = f Cs + (1 - f) Cf
- f is the fog factor
- Exponential
- Gaussian
- Linear (depth cueing)
OpenGL Fog Functions (see code fog.c)
GLfloat fcolor[4] = {...};
glEnable(GL_FOG);
glFogf(GL_FOG_MODE, GL_EXP);
glFogf(GL_FOG_DENSITY, 0.5);
glFogfv(GL_FOG_COLOR, fcolor);
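For the linear (depth-cueing) mode, one would instead set the fog range; the start and end distances below are arbitrary choices:
glFogf(GL_FOG_MODE, GL_LINEAR);
glFogf(GL_FOG_START, 1.0);     /* f = 1 (no fog) at this eye distance */
glFogf(GL_FOG_END, 10.0);      /* f = 0 (full fog) at this eye distance */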
Antialiasing
Graphics primitives that exhibit jaggies or staircasing are examples of aliasing artifacts that result from an all-or-nothing approach: each pixel is either drawn with the color value of the primitive or is left unchanged.
The application of techniques to reduce aliasing is called antialiasing. Antialiasing is founded in the theory of signal processing and based on the Nyquist sampling rate: one must sample at twice the highest frequency of the signal in order to accurately reconstruct the signal.
How might antialiasing of lines be performed?
Consider a 1 pixel thick black line with slope in [0,1] displayed on a white background.
The midpoint line algorithm sets the pixel in each column that is closest to the desired line. Each time the pixels in successive columns are not in the same row a jaggy appears.
One method to reduce the size of jaggies is to increase the display resolution (if the resolution doubles there are twice as many jaggies, but each jaggy is half the size, at a tremendous increase in memory). Not a very attractive solution.
SUPER-SAMPLING (using sampling theory)
Unweighted Area Sampling
Although a mathematical line has no width, a displayed line does have non-zero width and occupies a finite rectangular area on the screen.
If we consider this area as our line, then setting a single pixel in each column no longer makes sense (it undersamples the area).
We would like to set every pixel the line area intersects using a varying intensity. The intensity is varied based on the percentage of the pixel the line intersects.
A pixel completely within the area is black, outside the area is white, and partially within the area is gray. Varying intensities are approximated by a technique discussed later.
Area sampling adds noise or fuzziness to the line, thus blurring the line and making it appear better at a distance.
What are the properties of unweighted area sampling?
1. The intensity of an intersected pixel decreases as the distance between the pixel center and the line increases.
2. A non-intersected pixel cannot contribute to the intensity of a neighbor pixel.
3. Equal areas contribute equal intensity regardless of the distance between the pixel center and the area. A small area in the corner of a pixel contributes as much as an equal area near the pixel center.
Weighted Area Sampling
Weighted area sampling retains the first two properties of unweighted sampling but modifies the third property. It seems reasonable to place more weight on an area near the pixel center than an area in a pixel corner. Thus areas near the pixel center contribute a greater intensity to the pixel than those of equal size but further from the pixel center.
In order to understand the difference between these two sampling methods (for property 3) consider the definition of a weighting function.
A weighting function, W(x, y) determines the influence of a small area of intersection, dA, on the intensity of the pixel, as a function of dA's distance from the center of the pixel. Here, (x,y) refers to the pixel at position x,y.
For unweighted sampling, W(x,y) is defined as a constant, which captures property 3.
For weighted sampling, W(x,y) decreases as the distance of the area from the pixel center increases.
The best way to understand this is to visualize W(x,y) in the following manner.
Unweighted area sampling using a box filter
W(x,y) is a box (box filter) whose base is centered on a pixel at position (x,y). The height of the box gives the weight of the area dA at (x,y). For a box filter the height is set at 1 (constant to capture property 3). The pixel has width = 1, and length = 1. Thus the volume of the box filter is normalized to 1.
The area of intersection within a pixel can range over [0, 1]. No intersection with the primitive gives an area of 0, complete intersection with the primitive gives an area of 1. Partial intersections give areas > 0 and < 1. Thus the volume of the intersection is also in the range [0,1].
To make a long story short, W(x,y) = w, where w is the volume of the intersection. w is then used as the weighting value for pixel color determination. Let's assume the maximum color intensity of a pixel is Imax. To determine the weighted color, Ip, of the pixel :
Ip = Imax*w
and this works out nicely. If the primitive does not intersect the pixel then w = 0 and Ip = 0. If the primitive completely covers the pixel then w = 1 and Ip = Imax. Partial intersections give a range of intensities in (0, Imax).
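One simple way to approximate the box-filter weight w is to supersample the pixel with a grid of subpixel test points. This sketch assumes a hypothetical inside() point-in-primitive test:
extern int inside(float x, float y);   /* hypothetical point-in-primitive test */

/* Approximate the fraction of the unit pixel whose lower-left corner
   is (px, py) that is covered by the primitive, using 4x4 samples. */
float coverage(float px, float py)
{
    int i, j, hits = 0;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            if (inside(px + (i + 0.5f) / 4.0f, py + (j + 0.5f) / 4.0f))
                hits++;
    return hits / 16.0f;   /* w in [0, 1]; then Ip = Imax * w */
}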
Weighted Area sampling using a conic filter
W(x,y) is a circular cone (conic filter) whose apex is at the center of the pixel at position (x,y). This is the simplest decreasing function of distance; i.e., the maximum height of the cone occurs at the pixel center and the height then decreases linearly with increasing distance from the center. The maximal height of the conic filter is normalized so that the volume under the cone is 1.
The circular base of the cone has radius = 1, considerably larger than the area of a single pixel (filtering theory recommends this radius length). Thus the base completely covers the pixel under consideration, along with portions of the pixel's nearest neighbors, so a primitive fairly far from the pixel still contributes to its intensity. Note that the bases of neighboring filters will also overlap.
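The cone itself is easy to write down: with base radius 1, requiring the volume under the cone to be 1 forces the apex height to be 3/pi (the volume of a cone is one third of the base area times the height). A sketch:
#include <math.h>

/* Weight of the normalized conic filter at distance d from the pixel
   center: (3/pi)(1 - d) inside the unit-radius base, 0 outside. */
float cone_weight(float d)
{
    if (d >= 1.0f)
        return 0.0f;       /* outside the filter's circular support */
    return (float)(3.0 / M_PI) * (1.0f - d);
}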
Practical Implementation of Weighting Function
The question we should now ask is: how do we compute the weighting function? Let's look at unweighted area sampling using the box filter (weighted area sampling uses an equivalent approach).
It would be possible to compute the area of intersection of a line and a pixel. In the case of the box filter the intersect area = the intersect volume. The proportion of the intersect volume to the total volume could then be determined. This proportion is then used as a weight to modulate the maximum intensity of the pixel color. This solution is computationally intensive.
Table Lookup Approach
Consider a pixel with unit width and height and a line with unit width. Visualize the line divided through the center along its length. We will call this the line center. Also visualize the line with a top edge and bottom edge which captures its width. Now consider the many ways the line could intersect the pixel: line center on pixel center (complete intersection), line center to pixel center distance > 1 (no intersection), line center to pixel center distance > 0, < 1 (partial intersection).
The value of the distance, D, from the line center to the pixel center falls in the interval [0.0, 1.0]. Calculation of D (discussed in Gupta-Sproull algorithm) is computationally inexpensive.
Ideally, we would like to use D as an index into a precomputed table of weights. We could then have multiple tables constructed, one for each type of desired filter. How might this be done considering that D is a floating point value and will have many possible values that fall in the interval [0.0, 1.0]?
Assume we are using a 4-bit monochrome display (the approach generalizes to an N-bit display). This means that 4 bits are used to represent color intensities in the range [0, 15]. Since only 16 intensity values are possible, the precision of D only needs to be 1 in 16, so D can be divided into 16 equal increments ranging from 0.0 to 1.0.
Our lookup table will have 16 entries, each entry defining an appropriate weight. All D values <= 1/16 might index into Table[0], all D values > 1/16 and <= 2/16 might index into Table[1], and so on. The entry at Table[i] is used as a weight in the color determination of the pixel : Ip = Imax*w.
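A sketch of the table approach; filter_weight() is a hypothetical function that evaluates whichever filter (box, cone, ...) the table is built for:
#define LEVELS 16   /* one entry per representable intensity on a 4-bit display */

extern float filter_weight(float d);   /* hypothetical filter evaluation */

float weight_table[LEVELS];

void build_table(void)
{
    int i;
    for (i = 0; i < LEVELS; i++)       /* sample the midpoint of each increment */
        weight_table[i] = filter_weight((i + 0.5f) / LEVELS);
}

/* Map a line-to-pixel-center distance D in [0.0, 1.0] to a weight w,
   which is then used as Ip = Imax * w. */
float lookup(float D)
{
    int index = (int)(D * LEVELS);
    if (index >= LEVELS)
        index = LEVELS - 1;            /* D == 1.0 falls into the last entry */
    return weight_table[index];
}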
OpenGL Antialiasing of Points/Lines
An OpenGL implementation assumes much more than just a 4-bit monochrome display. A 24-bit display would be reasonable, with 8 bits used to represent intensities for each red, green, and blue component that defines a color. Furthermore, the programmer sets the desired color intensity via a vertex color command. Since we don't want to modify color intensities, another approach is needed.
OpenGL supports a second color mode: RGBA, where A is an "alpha" component that defines the opacity of the RGB color. The interval of valid alpha values is [0.0, 1.0] where 0.0 => transparent and 1.0 => opaque (the default). To use RGBA mode:
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA);
How does OpenGL use alpha values in antialiasing?
"OpenGL calculates a coverage value (weight) for each fragment based on the fraction of the pixel square (intersected volume) it would cover. In RGBA mode, OpenGL multiplies the fragment's alpha value by its coverage (this corresponds to the intensity calculation of the sampling/filtering discussion). You can then use the resulting alpha value to blend the fragment with the corresponding pixel in the framebuffer."
This implies that as the coverage value increases (more of the fragment intersects the pixel) the resulting alpha value should also increase. Why?
As the fragment takes up more of the pixel area, we would expect it to contribute more to the color of the pixel.
In order to antialias points or lines, one must turn on the appropriate drawing attributes:
glEnable(GL_POINT_SMOOTH); or glEnable(GL_LINE_SMOOTH);
An OpenGL implementation must support antialiasing, but the type of sampling/filtering is not specified. The programmer has only partial control of this via:
glHint(GLenum target, GLenum hint);
where targets relevant to this discussion are
- GL_POINT_SMOOTH_HINT
- GL_POLYGON_SMOOTH_HINT and
- GL_LINE_SMOOTH_HINT.
Relevant hints are
- GL_NICEST (best sampling/filtering),
- GL_FASTEST (most efficient sampling/filtering)
- GL_DONT_CARE (seems to ignore antialiasing on systems I have investigated but is supposed to indicate no preference).
So in theory
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
should produce best visual results.
Blending of colors (a softening or blurring effect) must be turned on for antialiasing in OpenGL. Remember that the alpha value specifies the opacity of a color. As the alpha value decreases, the resulting color becomes more transparent. Transparency implies a necessary blending of colors with other objects that might intersect the same area =>
"blend the fragment with the corresponding pixel in the framebuffer".
Let's think about a simple example. Suppose we define a red rectangle to be displayed with a green line on top of the rectangular area. The line and polygon will share some pixels in the final display. Where the line completely intersects a shared pixel (alpha value ~1.0) we want a green pixel. Where the line only partially intersects a shared pixel (alpha value > 0.0, < 1.0) we want to blend the red and green to soften jaggies. Blending is performed after an image has been rasterized and broken into fragments, but before the final pixels are drawn to the framebuffer.
To enable blending :
glEnable(GL_BLEND);
During blending, the color values of the incoming fragment (the source) are combined with the color values of the currently stored pixel (the destination) in a two-stage process.
- First, one must specify the blending functions (how the source/destination values are combined) to be used. The blending functions specify how to compute the source and destination factors. These factors are RGBA quadruplets that are multiplied by the RGBA values of the source and destination, respectively. Then the corresponding components in the two RGBA quadruplets are combined (the default combination is to add) and become the new destination.
Example
Source color values are Rs, Gs, Bs, As, destination color values are Rd, Gd, Bd, Ad. Source blending factors are Sr, Sg, Sb, Sa, and destination blending factors are Dr, Dg, Db, Da. The final blended RGBA values are
(Rs*Sr + Rd*Dr) = red, (Gs*Sg + Gd*Dg) = green, (Bs*Sb + Bd*Db) = blue, (As*Sa + Ad*Da) = alpha, where each component is in the interval [0.0, 1.0].
To specify the blending functions you wish to use:
glBlendFunc(GLenum srcfactor, GLenum destfactor);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
A list of blending factors is found on page 223 of the OpenGL text. I will talk about only the above source and destination blending factors, which are the ones relevant for antialiasing.
The blending factors for GL_SRC_ALPHA are (As, As, As, As); thus each RGBA component of the source color is multiplied by the source alpha value, As. The As value is computed during sampling. The blending function produces the quadruplet: (Rs*As), (Gs*As), (Bs*As), (As*As).
The blending factors for GL_ONE_MINUS_SRC_ALPHA are (1,1,1,1) - (As,As,As,As). Subtraction here is component-wise, so each destination component is multiplied by (1 - As) and the blending function produces the quadruplet: (Rd - Rd*As), (Gd - Gd*As), (Bd - Bd*As), (Ad - Ad*As). The final color is then:
((Rs*As) + (Rd-Rd*As)) = red, ((Gs*As) + (Gd-Gd*As)) = green, ((Bs*As) + (Bd-Bd*As)) = blue, ((As*As) + (Ad-Ad*As)) = alpha.
This makes sense if we just think about it intuitively. We want to update the destination using a blending (+) of the source and the current destination. All RGBA values are clamped to [0, 1]; the (1 - As) factor keeps the blended destination values in the interval [0, 1].
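For instance, with hypothetical values: blending a green source fragment (Rs, Gs, Bs, As) = (0, 1, 0, 0.4) over an opaque red destination pixel (Rd, Gd, Bd, Ad) = (1, 0, 0, 1) gives
red = (0*0.4) + (1 - 1*0.4) = 0.6
green = (1*0.4) + (0 - 0*0.4) = 0.4
blue = (0*0.4) + (0 - 0*0.4) = 0.0
alpha = (0.4*0.4) + (1 - 1*0.4) = 0.76
so the partially covered pixel becomes a mixture of the background red and the line's green, exactly the softening effect we want.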
The final code segment you might set up:
glEnable(GL_LINE_SMOOTH);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
............................
glDisable(GL_BLEND);
glDisable(GL_LINE_SMOOTH);
Line Aliasing (see code aargb.c)
- Ideal raster line is one pixel wide
- All line segments, other than vertical and horizontal segments, partially cover pixels
- Simple algorithms color only whole pixels
- This leads to the jaggies or aliasing
- Similar issue for polygons (see aapoly.c)
Antialiasing
- Can try to color a pixel by adding a fraction of its color to the frame buffer
- Fraction depends on percentage of pixel covered by fragment
- Fraction depends on whether there is overlap
Area Averaging
- Use the average area α1 + α2 - α1α2 as the blending factor (the combined area of two fragments with coverage fractions α1 and α2, counting their overlap α1α2 only once)
OpenGL Antialiasing
- Can enable separately for points, lines, or polygons
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Accumulation Buffer (see code aacanti.c, accpersp.c)
- Compositing and blending are limited by the resolution of the frame buffer
- Typically 8 bits per color component
- The accumulation buffer is a high resolution buffer (16 or more bits per component) that avoids this problem
- Write into it or read from it with a scale factor
- Slower than direct compositing into the frame buffer
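A sketch of whole-scene antialiasing with the accumulation buffer; draw_scene_jittered() is a hypothetical helper that renders the scene offset by a different subpixel amount on each pass, and the display mode must request an accumulation buffer (e.g. GLUT_ACCUM):
int i, n = 8;                        /* number of jittered passes */
glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < n; i++) {
    draw_scene_jittered(i);          /* render with a subpixel offset */
    glAccum(GL_ACCUM, 1.0f / n);     /* scale by 1/n and add into the accumulation buffer */
}
glAccum(GL_RETURN, 1.0f);            /* copy the averaged image back to the frame buffer */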
Applications
- Compositing
- Image Filtering (convolution)
- Whole scene antialiasing
- Motion effects