
Fundamentals of Multimedia
ISBN: 0130618721
Ze-Nian Li and Mark S. Drew
School of Computing Science
Simon Fraser University
Exercise Solutions
© Prentice-Hall, Inc., 2003
Contents
1 Introduction to Multimedia
2 Multimedia Authoring and Tools
3 Graphics and Image Data Representations
4 Color in Image and Video
5 Fundamental Concepts in Video
6 Basics of Digital Audio
7 Lossless Compression Algorithms
8 Lossy Compression Algorithms
9 Image Compression Standards
10 Basic Video Compression Techniques
11 MPEG Video Coding I — MPEG-1 and 2
12 MPEG Video Coding II — MPEG-4, 7 and Beyond
13 Basic Audio Compression Techniques
14 MPEG Audio Compression
15 Computer and Multimedia Networks
16 Multimedia Network Communications and Applications
17 Wireless Networks
18 Content-Based Retrieval in Digital Libraries
Chapter 1
Introduction to Multimedia
Exercises
1. Identify three novel applications of the Internet or multimedia applications. Discuss why you think
these are novel.
Answer:
WML — Wireless Markup Language.
Mobile games: massive multiplayer online role-playing games (MMORPGs)
Multisensory data capture
Capture of context
Represent and adjust recollections in memory over time
“Fictive Art” in new media: beyond RPGs in the use of narrative and fictions to create made-up
worlds, imaginary situations, and odd situations (an example is “The Museum of Jurassic Tech-
nology”).
Bridging the semantic gap problem in automatic content annotation systems — the gulf between
the semantics that users expect and the low-level features (content descriptions) that systems
actually use: one solution is “an approach called computational media aesthetics. We define
this approach as the algorithmic study of a variety of image, space, and aural elements em-
ployed in media ... based on the media elements usage patterns in production and the associated
computational analysis of the principles that have emerged while clarifying, intensifying, and
interpreting some event for the audience.” (IEEE Multimedia Magazine, Volume: 10, Issue: 2,
Year: April-June 2003)
2. Briefly explain, in your own words, “Memex” and its role regarding hypertext. Could we carry out
the Memex task today? How do you use Memex ideas in your own work?
Answer:
Memex was a theoretical system explicated by Vannevar Bush in a famous 1945 essay. His main
ideas involved using associative memory as an aid for organizing a welter of material. He even
adumbrated the concept of links.
3. Your task is to think about the transmission of smell over the Internet. Suppose we have a smell
sensor at one location and wish to transmit the Aroma Vector (say) to a receiver to reproduce the
same sensation. You are asked to design such a system. List three key issues to consider and two
applications of such a delivery system. Hint: Think about medical applications.
Answer:
“October 6, 2000 – DigiScents, Inc., the pioneer of digital scent technology, received the ‘Best
New Technology’ award for its iSmell(TM) device at the ‘Best of RetailVision Awards’ at the
Walt Disney World Dolphin Hotel in Lake Buena Vista, Fla. Retailers such as BestBuy, Ra-
dioShack, CompUSA and other industry giants voted on the vendor awards.”
“DigiScents ... The company would send you a dispenser about the size of a computer speaker.
You’d plug it into your PC. It would be filled with chemicals that, when mixed, could recreate
most any smell. Tiny bits of data would come in over the Net to tell your dispenser what smell
to make. There would be a portal where you could find scents. DigiScents calls it – and at first I
thought they were joking – a ‘Snortal.’”
4. Tracking objects or people can be done by both sight and sound. While vision systems are precise,
they are relatively expensive; on the other hand, a pair of microphones can detect a person’s bearing
inaccurately but cheaply. Sensor fusion of sound and vision is thus useful. Surf the web to find out
who is developing tools for video conferencing using this kind of multimedia idea.
Answer:
“Distributed Meetings: A Meeting Capture and Broadcasting System,” Ross Cutler, Yong Rui,
Anoop Gupta, JJ Cadiz Ivan Tashev, Li-wei He, Alex Colburn, Zhengyou Zhang, Zicheng Liu,
Steve Silverberg, Microsoft Research, ACM Multimedia 2002,
http://research.microsoft.com/research/coet/V-Kitchen/chi2001/paper.pdf
5. Non-photorealistic graphics means computer graphics that do well enough without attempting to make
images that look like camera images. An example is conferencing (let’s look at this cutting-edge
application again). For example, if we track lip movements, we can generate the right animation
to fit our face. If we don’t much like our own face, we can substitute another one — facial-feature
modeling can map correct lip movements onto another model. See if you can find out who is carrying
out research on generating avatars to represent conference participants’ bodies.
Answer:
See: anthropic.co.uk
6. Watermarking is a means of embedding a hidden message in data. This could have important legal
implications: Is this image copied? Is this image doctored? Who took it? Where? Think of “mes-
sages” that could be sensed while capturing an image and secretly embedded in the image, so as to
answer these questions. (A similar question derives from the use of cell phones. What could we use
to determine who is putting this phone to use, and where, and when? This could eliminate the need
for passwords.)
Answer:
Embed retinal scan plus date/time, plus GPS data; sense fingerprint.
Chapter 2
Multimedia Authoring and Tools
Exercises
1. What extra information is multimedia good at conveying?
(a) What can spoken text convey that written text cannot?
Answer:
Speed, rhythm, pitch, pauses, etc...
Emotion, feeling, attitude ...
(b) When might written text be better than spoken text?
Answer:
Random access, user-controlled pace of access (i.e. reading vs. listening)
Visual aspects of presentation (headings, indents, fonts, etc. can convey information)
For example: the following two pieces of text may sound the same when spoken:
I said “quickly, come here.”
I said quickly “come here.”
2. Find and learn 3D Studio Max in your local lab software. Read the online tutorials to see this soft-
ware’s approach to a 3D modeling technique. Learn texture mapping and animation using this product.
Make a 3D model after carrying out these steps.
3. Design an interactive web page using Dreamweaver. HTML 4 provides layer functionality, as in
Adobe Photoshop. Each layer represents an HTML object, such as text, an image, or a simple HTML
page. In Dreamweaver, each layer has a marker associated with it. Therefore, highlighting the layer
marker selects the entire layer, to which you can apply any desired effect. As in Flash, you can
add buttons and behaviors for navigation and control. You can create animations using the Timeline
behavior.
4. In regard to automatic authoring,
(a) What would you suppose is meant by the term “active images”?
Answer:
Simple approach: Parts of the image are clickable.
More complex: Parts of the image have knowledge about themselves.
(b) What are the problems associated with moving text-based techniques to the realm of image-
based automatic authoring?
Answer:
Automatic layout is well established, as is capture of low-level structures such as images
and video. However, amalgamating these into higher-level representations is not well
understood, nor is automatically forming and linking anchors and links at the appropriate level.
(c) What is the single most important problem associated with automatic authoring using legacy
(already written) text documents?
Answer:
Overwhelming number of nodes created, and how to manage and maintain these.
5. Suppose we wish to create a simple animation, as in Fig. 2.30. Note that this image is exactly
what the animation looks like at some time, not a figurative representation of the process of moving
the fish; the fish is repeated as it moves. State what we need to carry out this objective, and give a
simple pseudocode solution for the problem. Assume we already have a list of (x, y) coordinates for
the fish path, that we have available a procedure for centering images on path positions, and that the
movement takes place on top of a video.
Fig. 2.30: Sprite, progressively taking up more space.
Answer:
// We have a fish mask as in Fig. 2.30 (answer) (a), and
// also a fish sprite as in Fig. 2.30 (answer) (b).
// Fish positions have centers posn(t).x, posn(t).y
currentmask = an all-white image
currentsprite = an all-black image
for t = 1 to maxtime {
    // Make a mask fishmask with the fish-mask black area
    // centered on position posn(t).x, posn(t).y,
    // and a sprite fishsprite with the colored area also moved
    // to posn(t).x, posn(t).y.
    // Then expand the mask:
    currentmask = currentmask AND fishmask // enlarges the black region
    currentsprite = currentsprite OR fishsprite // enlarges the sprite
    // display the current frame of video with the fish path on top:
    currentframe = (frame(t) AND currentmask) OR currentsprite
}
Fig. 2.30: (answer) Mask and Sprite.
6. For the slide transition in Fig. 2.11, explain how we arrive at the formula for x in the unmoving right
video R_R.
Answer:
If x ≥ x_max · t/t_max, then we are in the right-hand video. The value of x is to the right of x_T,
and the value in the unmoving right-hand video is that value of x, reduced by x_T so that we are
in units with respect to the left edge of the right-hand video frame. That is, in the right-hand video
frame we are at position x − x_T, which is x − (x_max · t/t_max).
7. Suppose we wish to create a video transition such that the second video appears under the first video
through an opening circle (like a camera iris opening), as in Figure 2.31. Write a formula to use the
correct pixels from the two videos to achieve this special effect. Just write your answer for the red
channel.
Fig. 2.31: Iris wipe: (a): Iris is opening. (b): At a later moment.
Answer:
y ˆ ________________
| | R0 |
| | |
| | ____ |
| | ( ) |
| | ( R1 ) |
| | (____) |
| | |
| | |
| -----------------
----------------> x
radius of transition r_T = 0 at time t = 0
r_T = r_max = sqrt( (x_max/2)^2 + (y_max/2)^2 ) at time t = t_max
--> r_T = r_max * t / t_max
At x,y,
r = sqrt( (x-x_max/2)^2 + (y-y_max/2)^2 )
If ( r < (t/t_max)*r_max )
R(x,y,t) = R1(x,y,t)
Else
R(x,y,t) = R0(x,y,t)
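As a minimal Matlab sketch of this test (assuming R0 and R1 are y_max × x_max × t_max arrays holding the red channels of the two videos — hypothetical names, not from the text):

% Matlab sketch of the iris wipe, red channel only.
[y_max, x_max, t_max] = size(R0);
r_max = sqrt((x_max/2)^2 + (y_max/2)^2);
R = zeros(size(R0));
for t = 1:t_max
  for y = 1:y_max
    for x = 1:x_max
      r = sqrt((x - x_max/2)^2 + (y - y_max/2)^2);
      if r < (t/t_max)*r_max
        R(y,x,t) = R1(y,x,t); % inside the opening iris: second video
      else
        R(y,x,t) = R0(y,x,t); % outside: first video
      end
    end
  end
end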
8. Now suppose we wish to create a video transition such that the second video appears under the first
video through a moving radius (like a clock hand), as in Figure 2.32. Write a formula to use the
correct pixels from the two videos to achieve this special effect for the red channel.
Fig. 2.32: Clock wipe: (a): Clock hand is sweeping out. (b): At a later moment.
Answer:
y ˆ ________________
| | R0 / |
| | / |
| | ____ / |
| | ( ) |
| | ( R1_________|
| | (____) |
| | |
| | |
| -----------------
----------------> x
angle of transition a_T = 0 at time t = 0
a_T = a_max = 360 at time t=t_max
--> a_T = a_max * t / t_max
At x,y,
a = atan( -(y-y_max/2)/(x-x_max/2) )
\\ Since y in defn of angle increases from bottom, not from top
\\ like rows. Watch for correct quadrant, though--use 2-arg atan.
If ( a < (t/t_max)*a_max )
R(x,y,t) = R1(x,y,t)
Else
R(x,y,t) = R0(x,y,t)
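A corresponding Matlab sketch (same hypothetical array names as in the previous answer; the two-argument atan2 handles the quadrants, as noted above):

% Matlab sketch of the clock wipe, red channel only.
[y_max, x_max, t_max] = size(R0);
a_max = 360;
R = zeros(size(R0));
for t = 1:t_max
  for y = 1:y_max
    for x = 1:x_max
      % negate y so the angle increases upward, not down the rows
      a = atan2(-(y - y_max/2), x - x_max/2) * 180/pi;
      if a < 0
        a = a + 360; % map into 0..360
      end
      if a < (t/t_max)*a_max
        R(y,x,t) = R1(y,x,t);
      else
        R(y,x,t) = R0(y,x,t);
      end
    end
  end
end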
9. Suppose you wish to create a wavy effect, as in Figure 2.33. This effect comes from replacing the im-
age x value by an x value offset by a small amount. Suppose the image size is 160 rows × 120 columns
of pixels.
(a) Using float arithmetic, add a sine component to the x value of the pixel such that the pixel takes
on an RGB value equal to that of a different pixel in the original image. Make the maximum
shift in x equal to 16 pixels.
Answer:
R = R(x + sin(y/120) * 16 , y) and similarly for G, B.
(b) In Premiere and other packages, only integer arithmetic is provided. Functions such as sin are
redefined so as to take an int argument and return an int. The argument to the sin function
must be in 0 .. 1,024, and the value of sin is in −512 .. 512: sin(0) returns 0, sin(256)
returns 512, sin(512) returns 0, sin(768) returns −512, and sin(1,024) returns 0.
Rewrite your expression in part (a) using integer arithmetic.
Fig. 2.33: Filter applied to video.
Answer:
R = R(x + sin( (y*1024)/120 ) /32,y) and similarly
for G,B.
[In Premiere: src(x + sin( (y*1024)/120 ) /32,y ,p) ]
Why: y in 0..119; (y*1024)/120 in 0..1023; the resulting sin
is in -512..512; and dividing by 32 puts the offset range into
-16..16.
(c) How could you change your answer to make the waving time-dependent?
Answer:
R = R(x + t*sin( (y*1024)/120 ) / (32*tmax), y)
[In Premiere: src(x + t*sin( (y*1024)/120 ) / (32*tmax), y, p) ]
Note the order of operations: otherwise, in integer math, t/tmax == 0 or 1 only.
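A Matlab sketch of the time-dependent version (here src is a single-channel 120 × 160 frame and tmax the number of frames — hypothetical names; the anonymous function intsin stands in for the integer sin described in part (b)):

% Matlab sketch of the time-dependent wave.
intsin = @(a) floor(512*sin(a*2*pi/1024)); % integer-style sine: 0..1024 -> -512..512
[rows, cols] = size(src);
dst = src;
for t = 1:tmax
  for y = 1:rows
    for x = 1:cols
      shift = floor( t * intsin( floor((y-1)*1024/rows) ) / (32*tmax) );
      xs = min(max(x + shift, 1), cols); % clamp the shifted x to image bounds
      dst(y,x) = src(y,xs);
    end
  end
  % display or store dst as frame t here
end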
10. How would you create the image in Figure 2.6? Write a small program to make such an image. Hint:
Place R, G, and B at the corners of an equilateral triangle inside the circle. It’s best to go over all
columns and rows in the output image rather than simply going around the disk and trying to map
results back to (x; y) pixel positions.
Answer:
% Matlab script:
SIZE = 256;
im = ones(SIZE,SIZE,3);
% Place R at (0,1).
% Place G at 120 degrees.
% Place B at 240 degrees.
% The outside perimeter goes from R to G as we go from
% R to G.
% And from B to R as we go from 240 to 360.
%
% At a position where the Outside Perimeter value
% is out , at radius r the color is
% (1-r)*(1,1,1) + r*(out)
% Go over all pixels:
for j = 1:SIZE
for i = 1:SIZE
x = j-SIZE/2;
y = i-SIZE/2;
r = sqrt(x*x+y*y);
if (r<=(SIZE/2))
ang = 180/pi*atan2(y,x);
if ang <0
ang = 360+ang;
end
if ang < 120 % between R and G
out = [(120-ang)/120 ; ang/120 ; 0];
elseif ang < 240 % between G and B
out = [0 ; (240-ang)/120 ; (ang-120)/120];
else % between B and R
out = [(ang-240)/120 ; 0 ; (360-ang)/120];
end; % if ang
% and could make the in-between bands broader by not using
% linear interpolation, if wished.
%linear:
im(i,j,:) = ((SIZE/2)-r)/(SIZE/2)*[1;1;1] + r/(SIZE/2)*out;
% and normalize the color to bright:
temp = max( im(i,j,:) );
im(i,j,:) = im(i,j,:)/temp; % takes one channel to 1.0
end; % if r
end
end
imshow(im)
imwrite(im,'colorwheel256.bmp');
11. As a longer exercise for learning existing software for manipulating images, video, and music, make
a 1-minute digital video. By the end of this exercise, you should be familiar with PC-based equip-
ment and know how to use Adobe Premiere, Photoshop, Cakewalk Pro Audio, and other multimedia
software.
(a) Capture (or find) at least three video files. You can use a camcorder or VCR to make your own
(through Premiere or the like) or find some on the Net.
(b) Compose (or edit) a small MIDI file with Cakewalk Pro Audio.
(c) Create (or find) at least one WAV file. You may either digitize your own or download some from
the net.
(d) Use Photoshop to create a title and an ending.
(e) Combine all of the above to produce a movie about 60 seconds long, including a title, some
credits, some soundtracks, and at least three transitions. Experiment with different compression
methods; you are encouraged to use MPEG for your final product.
(f) The above constitutes a minimum statement of the exercise. You may be tempted to get very
creative, and that's fine, but don't go overboard and take too much time away from the rest of
your life!
Chapter 3
Graphics and Image Data Representations
Exercises
1. Briefly explain why we need to be able to have less than 24-bit color and why this makes for a problem.
Generally, what do we need to do to adaptively transform 24-bit color values to 8-bit ones?
Answer:
We may not be able to handle such large file sizes, or we may not have 24-bit displays.
The colors will be somewhat wrong, however.
We need to cluster color pixels so as to best use the bits available to be as accurate as possible
for the colors in an image. In more detail, this is variance minimization quantization (vmquant.m):
minimum variance quantization allocates more of the available colormap entries to colors that
appear frequently in the input image and allocates fewer entries to colors that appear infrequently.
Therefore if there are, for example, many reds, as in a red apple, there will be more
resolution in the red part of the color cube. An excellent implementation of this idea is Wu's
Color Quantizer (see Graphics Gems vol. II, pp. 126–133).
2. Suppose we decide to quantize an 8-bit grayscale image down to just 2 bits of accuracy. What is the
simplest way to do so? What ranges of byte values in the original image are mapped to what quantized
values?
Answer:
Just use the most significant 2 bits of the grayscale value.
I.e., any values in the ranges
0000 0000 to 0011 1111
0100 0000 to 0111 1111
1000 0000 to 1011 1111
1100 0000 to 1111 1111
are mapped into 4 representative grayscale values.
In decimal, these ranges are:
0 to (2^6 - 1)
2^6 to (2^7 - 1)
2^7 to 2^7 + (2^6 - 1)
2^7 + 2^6 to (2^8 - 1)
i.e.,
0 to 63
64 to 127
128 to 191
192 to 255
Then reconstruction values should be taken as the middle
of these ranges; i.e.,
32
96
160
224
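A two-line Matlab sketch of this quantization (img is a uint8 grayscale image — a hypothetical name):

q = floor(double(img) / 64);  % keep the top 2 bits: values 0..3
recon = uint8(q * 64 + 32);   % reconstruct at the mid-points 32, 96, 160, 224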
3. Suppose we have a 5-bit grayscale image. What size of ordered dither matrix do we need to display
the image on a 1-bit printer?
Answer:
2^5 = 32 levels requires n^2 + 1 >= 32, i.e., n = 6; therefore we need the dither matrix D(6).
4. Suppose we have available 24 bits per pixel for a color image. However, we notice that humans are
more sensitive to R and G than to B — in fact, 1.5 times more sensitive to R or G than to B. How
could we best make use of the bits available?
Answer:
ratio is 3:3:2, so use bits 9:9:6 for R:G:B.
5. At your job, you have decided to impress the boss by using up more disk space for the company’s
grayscale images. Instead of using 8 bits per pixel, you’d like to use 48 bits per pixel in RGB. How
could you store the original grayscale images so that in the new format they would appear the same
as they used to, visually?
Answer:
48 bits RGB means 16 bits per channel: so re-store the old ints, which were < 2^8, as new ints <
2^16. But then the new values have to be created by multiplying the old values by 2^8, so that, e.g.,
a mid-gray is still a mid-gray. As well, we have to duplicate the old gray into all three of R, G, B.
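A short Matlab sketch of this (gray8 is a uint8 grayscale image — a hypothetical name; PNG is used here because it supports 16 bits per channel):

gray16 = uint16(gray8) * 256;           % multiply by 2^8 so a mid-gray stays a mid-gray
rgb48 = cat(3, gray16, gray16, gray16); % duplicate the gray into R, G, B
imwrite(rgb48, 'gray48.png');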
6. Sometimes bitplanes of an image are characterized using an analogy from mapmaking called “eleva-
tions”. Figure 3.18 shows some elevations.
Suppose we describe an 8-bit image using 8 bitplanes. Briefly discuss how you could view each
bitplane in terms of geographical concepts.
Answer:
We can think of the 8-bit image as a set of 1-bit bit-planes, where each plane consists of a 1-bit
representation of the image at higher and higher levels of ‘elevation’: a bit is turned on if the
image pixel has a value that is at or above that bit level.
7. For the color LUT problem, try out the median-cut algorithm on a sample image. Explain briefly
why it is that this algorithm, carried out on an image of red apples, puts more color gradation in the
resulting 24-bit color image where it is needed, among the reds.
Fig. 3.18: Elevations in geography.
8. In regard to nonordered dithering, a standard graphics text [2] states, “Even larger patterns can be
used, but the spatial versus intensity resolution trade-off is limited by our visual acuity (about one
minute of arc in normal lighting).”
(a) What does this sentence mean?
Answer:
If we increase the matrix size to n × n, with a larger n, then we make the number of intensity
levels available, n^2 + 1, greater and thus increase the intensity resolution. But then the size
of the patterns laid down in bi-level dots gets larger, decreasing the spatial resolution —
less detail is in each small area. The larger the matrix of patterns, the greater the
possibility that we can see gaps between dots. We would prefer output that did not let us
see the gaps between bi-level dots: this is limited by our ability to detect the space between
small dots that are separated by one minute of arc.
(b) If we hold a piece of paper out at a distance of 1 foot, what is the approximate linear distance
between dots? (Information: One minute of arc is 1/60 of one degree of angle. Arc length on a
circle equals angle (in radians) times radius.) Could we see the gap between dots on a 300 dpi
printer?
Answer:
One minute of arc is 1/60 × π/180 radians, and at r = 25 cm, the arc length is 25 × π/(60 ×
180), or about 25 × 3/10800 ≈ 1/100 cm. For a 300 dpi printer, the dot separation is about
100 dots per cm, so we could just see such a gap (this is actually affected by what surrounds the
image).
9. Write down an algorithm (pseudocode) for calculating a color histogram for RGB data.
Answer:
int hist[256][256][256]; // initialized to all zeros
// image is an appropriate struct array with int fields red, green, blue
for i = 0 .. (MAX_Y - 1)
    for j = 0 .. (MAX_X - 1)
        R = image[i][j].red;
        G = image[i][j].green;
        B = image[i][j].blue;
        hist[R][G][B]++;
Chapter 4
Color in Image and Video
Exercises
1. Consider the following set of color-related terms:
(a) wavelength
(b) color level
(c) brightness
(d) whiteness
How would you match each of the following (more vaguely stated) characteristics to each of the above
terms?
(a) luminance ⇒ brightness
(b) hue ⇒ wavelength
(c) saturation ⇒ whiteness
(d) chrominance ⇒ color level
2. What color is outdoor light? For example, around what wavelength would you guess the peak power
is for a red sunset? For blue sky light?
Answer:
Around 650 nm for a red sunset; around 450 nm for blue sky light.
3. “The LAB gamut covers all colors in the visible spectrum.”
(a) What does this statement mean? Briefly, how does LAB relate to color? Just be descriptive.
(b) What are (roughly) the relative sizes of the LAB gamut, the CMYK gamut, and a monitor gamut?
Answer:
CIELAB is simply a (nonlinear) restating of XYZ tristimulus values. The objective of CIELAB
is to develop a more perceptually uniform set of values, for which equal distances in different
parts of gamut imply roughly equal differences in perceived color. Since XYZ encapsulates a
statement about what colors can in fact be seen by a human observer, CIELAB also “covers all
colors in the visible spectrum.”
Fig. 4.20: (a): Color matching functions; (b): Transformed color matching functions.
XYZ, or equivalently CIELAB, by definition covers the whole human visual system gamut. In
comparison, a monitor gamut covers just the triangle joining the R, G, and B pure-phosphor-
color corners, so is much smaller. Usually, a printer gamut is smaller again, although some
parts of it may overlap the boundary of the monitor gamut and thus allow printing of colors
that in fact cannot be produced on a monitor. Printers with more inks have larger gamuts.
(Incidentally, color slide films have considerably larger gamuts.)
4. Where does the chromaticity “horseshoe” shape in Figure 4.11 come from? Can we calculate it?
Write a small pseudocode solution for the problem of finding this so-called “spectrum locus”. Hint:
Figure 4.20(a) shows the color-matching functions in Figure 4.10 drawn as a set of points in three-
space. Figure 4.20(b) shows these points mapped into another 3D set of points. Another hint: Try a
programming solution for this problem, to help you answer it more explicitly.
Answer:
The x̄(λ), ȳ(λ), z̄(λ) color-matching curves define the human visual system response to spectra.
The outer boundary corresponds to the "spectrum locus", i.e., the response to a pure (laser-like)
single-wavelength sample. Interior points on the surface correspond to mixtures of wavelengths.
To make visualization simpler, we go to a 2D chromaticity color space: (x, y) = (X, Y)/(X +
Y + Z). The boundary of the XYZ plot, projected to 2D, resembles a horseshoe shape.
% matlab script
load 'figs/chap4/xyz.dat' -ascii % 301 by 3 values
waves = (400:700)';
plot(waves, xyz)
plot3(xyz(:,1), xyz(:,2), xyz(:,3), '.');
%
denom = xyz(:,1) + xyz(:,2) + xyz(:,3);
xy = zeros(301,3); % declare
for k=1:3
xy(:,k) = xyz(:,k) ./ denom;
end
plot3(xy(:,1), xy(:,2), xy(:,3), '.');
5. Suppose we use a new set of color-matching functions x̄new(λ), ȳnew(λ), z̄new(λ) with values

λ (nm)   x̄new(λ)   ȳnew(λ)   z̄new(λ)
450      0.2        0.1        0.5
500      0.1        0.4        0.3
600      0.1        0.4        0.2
700      0.6        0.1        0.0

In this system, what are the chromaticity values (x, y) of equi-energy white light E(λ), where E(λ) ≡
1 for all wavelengths λ? Explain.
Answer:
The chromaticity values (x, y) are made from the XYZ triple X = Σ_λ [x̄(λ) · E(λ)], Y =
Σ_λ [ȳ(λ) · E(λ)], Z = Σ_λ [z̄(λ) · E(λ)]. For the new color-matching functions, since every E(λ)
is 1 for equi-energy white light, we form X, Y, Z via (Σ x̄, Σ ȳ, Σ z̄) = (1, 1, 1), according to
the values in the table; so the chromaticity is x = X/(X + Y + Z) = 1/3, and also y = 1/3.
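A quick Matlab check of this, using the four tabulated samples:

% columns are xbar_new, ybar_new, zbar_new; rows are 450, 500, 600, 700 nm
cmf = [0.2 0.1 0.5;
       0.1 0.4 0.3;
       0.1 0.4 0.2;
       0.6 0.1 0.0];
XYZ = sum(cmf, 1);        % E(lambda) = 1, so X,Y,Z are the column sums: (1,1,1)
xy = XYZ(1:2) / sum(XYZ)  % both components come out to 1/3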
6. (a) Suppose images are not gamma corrected by a camcorder. Generally, how would they appear on
a screen?
Answer:
Too dark at the low-intensity end.
(b) What happens if we artificially increase the output gamma for stored image pixels? (We can do
this in Photoshop.) What is the effect on the image?
Answer:
We increase the number of bright pixels — more pixels map to the upper half of the output
range. This creates a lighter image — and, incidentally, we also decrease highlight contrast and
increase contrast in the shadows.
7. Suppose image file values are in 0 .. 255 in each color channel. If we define R = R/255 for the red
channel, we wish to carry out gamma correction by passing a new value R′ to the display device, with
R′ ≈ R^(1/2.0).
It is common to carry out this operation using integer math. Suppose we approximate the calculation
as creating new integer values in 0 .. 255 via
(int) (255 · (R^(1/2.0)))
(a) Comment (very roughly) on the effect of this operation on the number of actually available levels
for display. Hint: Coding this up in any language will help you understand the mechanism at
work better — and will allow you to simply count the output levels.
(b) Which end of the levels 0 :: 255 is affected most by gamma correction — the low end (near 0) or
the high end (near 255)? Why? How much at each end?
Answer:
(a) The integer values actually taken on are not as many as 256 (the number of levels comes
out to 193). The reason is that some of the gamma-corrected values truncate to the same
integer value.
(b) At the low end, the integer value R = 0 corresponds to the quantized value 0, whereas the
int value 1 corresponds to
(int) ( 255 * sqrt( 1/255 ) ) ≈ (int) ( 255/16 ) = 15,
so that an image value of 1 becomes the gamma-corrected value 15.
At the high end, 255 ↦ 255 correctly, but 254 ↦ 255 × (1 − 1/255)^0.5 ≈ 255 × (1 −
0.5 × 1/255) = 255 − 0.5 ≈ 254, so the high end is much less affected.
Therefore overall the gamma-quantization greatly reduces the resolution of quantization
at the low end of image intensity. The first few levels are 15, 22, 27, 31, ... and the final few
are 253, 254, 255.
% matlab script:
% gamma.m
levels = 0:255;
levels = levels/255.0;
levels = floor(255* ( levels.^0.5 ) );
plot(0:255,levels)
levels = unique(levels);
length(levels) % 193
levels(1:5) % 0 15 22 27 31
8. In many computer graphics applications, γ-correction is performed only in color LUT (lookup table).
Show the first five entries of a color LUT meant for use in γ-correction. Hint: Coding this up saves
you the trouble of using a calculator.
Answer:
Use round instead, but the idea is as in the last question, applied now to a LUT: V′ = V^(1/γ).
For answers, build a LUT table for R, G, B with 256 rows, say.
The first row, indexed by 0, has all entries 0. Any other kth row has
R = G = B = round(255 · (k/255)^(1/γ))
First five entries: 0 16 23 28 32
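A Matlab sketch that builds such a LUT (γ = 2.0 assumed, as in the previous question):

gam = 2.0;
k = (0:255)';
LUT = round(255 * (k/255).^(1/gam));
LUT(1:5)' % first five entries: 0 16 23 28 32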
9. Devise a program to produce Figure 4.21, showing the color gamut of a monitor that adheres to
SMPTE specifications.
Fig. 4.21: SMPTE Monitor Gamut
Answer:
The simplest answer is derived from a Fortran program on the net:
http://www.physics.sfasu.edu/astro/color/chromaticity.html
Alterations are written in lower case. This code produces a .ppm (portable pixmap) file in
ASCII, which can be read by xv under Unix, for example.
For C code, see after the Fortran code, below.
c FORTRAN:
c Reconstructionist version of chromaticity diagram program
C by Dan Bruton (astro@tamu.edu)
c http://www.physics.sfasu.edu/astro/color/chromaticity.html
c
c This program will create a ppm (portable pixmap)
c image of an approximate chromaticity diagram using
c equations from the Color Equations FAQ at
c ftp://ftp.wmin.ac.uk/pub/itrg/coloureq.txt
c or the Color Space FAQ.
c
IMPLICIT REAL*8 (a-h,o-z)
c Width, height and color depth of the ppm image, gamma
PARAMETER(M=300)
PARAMETER(N=M)
PARAMETER(L=255)
PARAMETER(GAM=1.0)
DIMENSION CV(M,M,3), WXY(2,82)
DIMENSION iCV(M,M,3)
c
c Chromaticity Coodinates (x and y) for wavelengths in 5 nm
c increments from 380 nm to 780 nm.
c
DATA ((WXY(I,J),I=1,2),J=1,81)/
& 0.1741,0.0050, 0.1740,0.0050, 0.1738,0.0049, 0.1736,0.0049,
& 0.1733,0.0048, 0.1730,0.0048, 0.1726,0.0048, 0.1721,0.0048,
& 0.1714,0.0051, 0.1703,0.0058, 0.1689,0.0069, 0.1669,0.0086,
& 0.1644,0.0109, 0.1611,0.0138, 0.1566,0.0177, 0.1510,0.0227,
& 0.1440,0.0297, 0.1355,0.0399, 0.1241,0.0578, 0.1096,0.0868,
& 0.0913,0.1327, 0.0687,0.2007, 0.0454,0.2950, 0.0235,0.4127,
& 0.0082,0.5384, 0.0039,0.6548, 0.0139,0.7502, 0.0389,0.8120,
& 0.0743,0.8338, 0.1142,0.8262, 0.1547,0.8059, 0.1929,0.7816,
& 0.2296,0.7543, 0.2658,0.7243, 0.3016,0.6923, 0.3373,0.6589,
& 0.3731,0.6245, 0.4087,0.5896, 0.4441,0.5547, 0.4788,0.5202,
& 0.5125,0.4866, 0.5448,0.4544, 0.5752,0.4242, 0.6029,0.3965,
& 0.6270,0.3725, 0.6482,0.3514, 0.6658,0.3340, 0.6801,0.3197,
& 0.6915,0.3083, 0.7006,0.2993, 0.7079,0.2920, 0.7140,0.2859,
& 0.7190,0.2809, 0.7230,0.2770, 0.7260,0.2740, 0.7283,0.2717,
& 0.7300,0.2700, 0.7311,0.2689, 0.7320,0.2680, 0.7327,0.2673,
& 0.7334,0.2666, 0.7340,0.2660, 0.7344,0.2656, 0.7346,0.2654,
& 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653,
& 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653,
& 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653,
& 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653, 0.7347,0.2653,
& 0.7347,0.2653/
WXY(1,82)=WXY(1,1)
WXY(2,82)=WXY(2,1)
c
c Chromaticity Coordinates for Red, Green, Blue phosphors
c and White Point=D65
C SMPTE-C RGB:
c
XR=0.63
YR=0.34
XG=0.31
YG=0.595
XB=0.155
YB=0.07
xw = 0.312713
yw = 0.329016
ZR=1.D0-(XR+YR)
ZG=1.D0-(XG+YG)
ZB=1.D0-(XB+YB)
ZW=1.D0-(XW+YW)
c
c Draw horseshoe outline
c
DO II=1,M
DO JJ=1,N
DO KK=1,3
CV(II,JJ,KK)=0
iCV(II,JJ,KK)=0
ENDDO
ENDDO
ENDDO
DO J=1,81
S1=REAL(M)*WXY(1,J)
S2=REAL(M)*WXY(1,J+1)
T1=REAL(N)*(1.D0-WXY(2,J))
T2=REAL(N)*(1.D0-WXY(2,J+1))
if ((s2-s1).ne.0.0) then
SLOPE=(T2-T1)/(S2-S1)
else
s2 = s2 + 0.00001
SLOPE=(T2-T1)/(S2-S1)
endif
I1=INT(S1)
I2=INT(S2)
DO II=I1,I2,JISIGN(1,I2-I1)
S=REAL(II)
J1=J2
J2=INT(T1+SLOPE*(S-S1))
IF ((J1.NE.0).AND.(J2.NE.0)) THEN
DO JJ=J1,J2,JISIGN(1,J2-J1)
DO KK=1,3
iCV(II,JJ,KK)=1
ENDDO
ENDDO
ENDIF
ENDDO
ENDDO
c
c Calculate RGB Values for x and y coordinates
c
rmax = 0.0
do J=1,N
do I=1,M
X=REAL(1.*I/M)
Y=REAL(1.*(N-J)/N)+0.00001
YY=1.0
XX=X*YY/Y
ZZ=(1.-X-Y)*YY/Y
c smpte-c
R=(3.5058*XX)-(1.7397*YY)-(0.5440*ZZ)
G=-(1.0690*XX)+(1.9778*YY)+(0.0352*ZZ)
B=(0.0563*XX)-(0.1970*YY)+(1.0501*ZZ)
C print*, j,i,xx,yy,zz,r,g,b
if (.not.( (iCV(I,J,1).eq.1).and.(iCV(I,J,1).eq.1).and.
(iCV(I,J,1).eq.1))) then
if ((R.LT.0.).OR.(G.LT.0.).OR.(B.LT.0.)) then
R=0.
G=0.
B=0.
else
R=R**GAM
G=G**GAM
B=B**GAM
C Have same chromaticity if rescale RGB, so brighten:
rmax = max(r,g,b)
r = r/rmax
g = g/rmax
b = b/rmax
endif
CV(I,J,1)=R
CV(I,J,2)=G
CV(I,J,3)=B
endif
enddo
enddo
C white point: draw a plus sign
i = int(xw*M)
j = int(N - (yw-0.00001)*N)
do ii = i-5,i+5
CV(ii,j,1) = 0D0
CV(ii,j,2) = 0D0
CV(ii,j,3) = 0D0
enddo
do jj = j-5,j+5
CV(i,jj,1) = 0D0
CV(i,jj,2) = 0D0
CV(i,jj,3) = 0D0
enddo
C print*, rmax
do J=1,N
do I=1,M
do k=1,3
if (iCV(I,J,K).eq.1) CV(I,J,K)=1.0
enddo
enddo
enddo
c
c Write to PPM File
c
OPEN(UNIT=20,FILE='gamut.ppm',STATUS='UNKNOWN')
1 FORMAT(A10)
WRITE(20,1) ’P3 ’
WRITE(20,1) ’# gamut.ppm’
WRITE(20,*) M,N
WRITE(20,*) L
DO J=1,N
DO I=1,M
100 FORMAT(3(I4,2X))
WRITE(20,100) (nint(255*CV(I,J,k)),k=1,3)
ENDDO
ENDDO
STOP
END
c *************************************************************************
C Should we wish to go out of gamut:
SUBROUTINE XYZTORGB(xr,yr,zr,xg,yg,zg,xb,yb,zb,xc,yc,zc,r,g,b)
IMPLICIT REAL*8 (a-h,o-z)
r=(-xg*yc*zb+xc*yg*zb+xg*yb*zc-xb*yg*zc-xc*yb*zg+xb*yc*zg)/
* (+xr*yg*zb-xg*yr*zb-xr*yb*zg+xb*yr*zg+xg*yb*zr-xb*yg*zr)
g=(+xr*yc*zb-xc*yr*zb-xr*yb*zc+xb*yr*zc+xc*yb*zr-xb*yc*zr)/
* (+xr*yg*zb-xg*yr*zb-xr*yb*zg+xb*yr*zg+xg*yb*zr-xb*yg*zr)
b=(+xr*yg*zc-xg*yr*zc-xr*yc*zg+xc*yr*zg+xg*yc*zr-xc*yg*zr)/
* (+xr*yg*zb-xg*yr*zb-xr*yb*zg+xb*yr*zg+xg*yb*zr-xb*yg*zr)
IF (R.LT.0.) R=0.
IF (G.LT.0.) G=0.
IF (B.LT.0.) B=0.
IF (R.GT.1.) R=1.
IF (G.GT.1.) G=1.
IF (B.GT.1.) B=1.
RETURN
END
c *************************************************************************
real*8 function max(r,g,b)
rmax = r
if (g .gt. rmax) rmax = g
if (b .gt. rmax) rmax = b
end
c *************************************************************************
This code is also given as
ExerciseAnswers/resources exercises/chap4/makegamutppm.f
In C, the code looks as below.
http://www.fourmilab.ch/documents/specrend/ also has a C program version, specrend.c, to
output ascii values:
Built-in test program which displays the x, y, and z and RGB
values for black-body spectra from 1000 to 10000 degrees Kelvin.
When run, this program should produce the following output:
Temperature x y z R G B
----------- ------ ------ ------ ----- ----- -----
1000 K 0.6528 0.3444 0.0028 1.000 0.007 0.000 (Approximation)
1500 K 0.5857 0.3931 0.0212 1.000 0.126 0.000 (Approximation)
2000 K 0.5267 0.4133 0.0600 1.000 0.234 0.010
2500 K 0.4770 0.4137 0.1093 1.000 0.349 0.067
3000 K 0.4369 0.4041 0.1590 1.000 0.454 0.151
3500 K 0.4053 0.3907 0.2040 1.000 0.549 0.254
4000 K 0.3805 0.3768 0.2428 1.000 0.635 0.370
4500 K 0.3608 0.3636 0.2756 1.000 0.710 0.493
5000 K 0.3451 0.3516 0.3032 1.000 0.778 0.620
5500 K 0.3325 0.3411 0.3265 1.000 0.837 0.746
6000 K 0.3221 0.3318 0.3461 1.000 0.890 0.869
6500 K 0.3135 0.3237 0.3628 1.000 0.937 0.988
7000 K 0.3064 0.3166 0.3770 0.907 0.888 1.000
7500 K 0.3004 0.3103 0.3893 0.827 0.839 1.000
8000 K 0.2952 0.3048 0.4000 0.762 0.800 1.000
8500 K 0.2908 0.3000 0.4093 0.711 0.766 1.000
9000 K 0.2869 0.2956 0.4174 0.668 0.738 1.000
9500 K 0.2836 0.2918 0.4246 0.632 0.714 1.000
10000 K 0.2807 0.2884 0.4310 0.602 0.693 1.000
/* makegamutppm.c
Places an ascii .ppm (portable pixmap) image of approximation
of chromaticity diagram on stdout.
Link the resulting object file with the math library:
gcc makegamutppm.c -o makegamutppm -lm
This code is an extension of a program in Fortran
at link
http://www.physics.sfasu.edu/astro/color/chromaticity.html
*/
#include <math.h>
#include <string.h>
/* Table of constant values */
/* Width and height and color depth of the ppm image; gamma */
#define N 300 /* square image */
#define LEVELS 255
#define GAMMA 1.0
#define NN3 N*N*3
#define NN N*N
#define NPLUS1 N+1
#define NNNPLUS1 NN+NPLUS1
/* Builtin functions */
double over_pow();
/* Local functions */
double over_pow();
double transfersign();
double max3();
/* Main program */ main()
{
/* Initialized data */
/* Chromaticity Coodinates (x and y) for wavelengths in 5 nm */
/* increments from 380 nm to 780 nm. */
static double xy81[164] /* think of as [2][82] */
= { .1741,.005,.174,.005,
.1738,.0049,.1736,.0049,.1733,.0048,.173,.0048,.1726,.0048,.1721,
.0048,.1714,.0051,.1703,.0058,.1689,.0069,.1669,.0086,.1644,.0109,
.1611,.0138,.1566,.0177,.151,.0227,.144,.0297,.1355,.0399,.1241,
.0578,.1096,.0868,.0913,.1327,.0687,.2007,.0454,.295,.0235,.4127,
.0082,.5384,.0039,.6548,.0139,.7502,.0389,.812,.0743,.8338,.1142,
.8262,.1547,.8059,.1929,.7816,.2296,.7543,.2658,.7243,.3016,.6923,
.3373,.6589,.3731,.6245,.4087,.5896,.4441,.5547,.4788,.5202,.5125,
.4866,.5448,.4544,.5752,.4242,.6029,.3965,.627,.3725,.6482,.3514,
.6658,.334,.6801,.3197,.6915,.3083,.7006,.2993,.7079,.292,.714,
.2859,.719,.2809,.723,.277,.726,.274,.7283,.2717,.73,.27,.7311,
.2689,.732,.268,.7327,.2673,.7334,.2666,.734,.266,.7344,.2656,
.7346,.2654,.7347,.2653,.7347,.2653,.7347,.2653,.7347,.2653,.7347,
.2653,.7347,.2653,.7347,.2653,.7347,.2653,.7347,.2653,.7347,.2653,
.7347,.2653,.7347,.2653,.7347,.2653,.7347,.2653,.7347,.2653,.7347,
.2653,.7347,.2653 };
/* locals */
int i, j, k;
int ii, jj, kk;
int i1, i2, i3, i4, i5, i6, j1, j2;
double rmax;
double r, b, g;
double s, x, y, slope;
double s1, s2, t1, t2;
double temp;
static double rgb[NN3]; /* think of as [N][N][3] */
double xb, yb, zb,
xg, yg, zg,
xr, yr, zr,
xw, yw, zw,
yy;
double xx, zz;
static int occup[NN]; /* think of as [N][N] */
xy81[162] = xy81[0];
xy81[163] = xy81[1];
/* Chromaticity Coordinates for Red, Green, Blue phosphors */
/* and White Point=D65 */
/* SMPTE-C RGB: */
xr = 0.63;
yr = 0.34;
xg = 0.31;
yg = 0.595;
xb = 0.155;
yb = 0.07;
xw = 0.312713;
yw = 0.329016;
zr = 1.0- (xr + yr);
zg = 1.0- (xg + yg);
zb = 1.0- (xb + yb);
zw = 1.0- (xw + yw);
/* Draw spectrum locus. */
for (ii = 1; ii <= N; ++ii) {
for (jj = 1; jj <= N; ++jj) {
for (kk = 1; kk <= 3; ++kk) {
occup[ii + jj * N - NPLUS1] = 0;
rgb[ii + (jj + kk * N) * N - NNNPLUS1] = 0.0;
}
}
}
j1 = 0;
j2 = 0;
for (j = 1; j <= 81; ++j) {
s1 = N* xy81[(j << 1) - 2];
s2 = N* xy81[(j + 1 << 1) - 2];
t1 = N* (1.0- xy81[(j << 1) - 1]);
t2 = N* (1.0- xy81[((j + 1) << 1) - 1]);
if ((s2 - s1) != 0.0) {
slope = (t2 - t1) / (s2 - s1);
} else {
s2 += 1e-5;
slope = (t2 - t1) / (s2 - s1);
}
i1 = (int) s1;
i2 = (int) s2;
i3 = i2 - i1;
i5 = transfersign(1.0, i3);
ii = i1;
while (i5 < 0 ? ii >= i2 : ii <=i2)
{
s = (double) ii;
j1 = j2;
j2 = (int) (t1 + slope * (s - s1));
if ((j1 != 0) && (j2 != 0)) {
i6 = j2 - j1;
i4 = transfersign(1.0, i6);
jj = j1;
while (i4 < 0 ? jj >= j2 : jj <= j2)
{
occup[ii + jj * N - NPLUS1] = 1;
jj += i4;
}
}
ii += i5;
}
}
/* Calculate RGB Values for x and y coordinates */
rmax = 0.0;
for (j = 1; j <= N; ++j) {
for (i = 1; i <= N; ++i) {
x = i * 1.0/ N;
y = (N - j) * 1.0/ N + 1e-5;
yy = 1.0;
xx = x * yy / y;
zz = (1.0- x - y) * yy / y;
/* smpte-c */
r = xx * 3.5058 - yy * 1.7397 - zz * .544;
g = -(xx * 1.069) + yy * 1.9778 + zz * .0352;
b = xx * .0563 - yy * .197 + zz * 1.0501;
if (! (occup[i + j * N - NPLUS1]) ) {
if ((r < 0.0) || (g < 0.0) || (b < 0.0)) {
r = 0.0;
g = 0.0;
b = 0.0;
} else {
r = over_pow(r, GAMMA);
g = over_pow(g, GAMMA);
b = over_pow(b, GAMMA);
/* Have same chromaticity if rescale RGB,
so brighten: */
rmax = max3(r,g,b);
r /= rmax;
g /= rmax;
b /= rmax;
} /* else */
rgb[i + (j + N) * N - NNNPLUS1] = r;
rgb[i + (j + 2*N) * N - NNNPLUS1] = g;
rgb[i + (j + 3*N) * N - NNNPLUS1] = b;
} /* if */
} /* for i */
} /* for j */
/* white point: draw a plus sign */
i = (int) (xw * N);
j = (int) (N - (yw - 1.0e-5) * N);
i2 = i + 5;
for (ii = i - 5; ii <= i2; ++ii) {
rgb[ii + (j + N) * N - NNNPLUS1] = 0.0;
rgb[ii + (j + 2*N) * N - NNNPLUS1] = 0.0;
rgb[ii + (j + 3*N) * N - NNNPLUS1] = 0.0;
}
i2 = j + 5;
for (jj = j - 5; jj <= i2; ++jj) {
rgb[i + (jj + N) * N - NNNPLUS1] = 0.0;
rgb[i + (jj + 2*N) * N - NNNPLUS1] = 0.0;
rgb[i + (jj + 3*N) * N - NNNPLUS1] = 0.0;
}
for (j = 1; j <= N; ++j) {
for (i = 1; i <= N; ++i) {
for (k = 1; k <= 3; ++k) {
if (occup[i + j * N - NPLUS1] == 1) {
rgb[i + (j + k * N) * N - NNNPLUS1] = 1.0;
}
}
}
}
/* Write ascii PPM file to stdout */
printf("P3 \n");
printf("# gamut.ppm\n");
printf("%d %d\n", N,N);
printf("%d\n", LEVELS);
for (j = 1; j <= N; ++j) {
for (i = 1; i <= N; ++i) {
for (k = 1; k <= 3; ++k) {
temp = rgb[i + (j + k * N) * N - NNNPLUS1] ;
temp = rgb[i + (j + k * N) * N - NNNPLUS1] * 255;
i2 = (int)(temp);
printf("%d ", i2);
}
printf("\n");
}
}
} /* end of main */
/****************************************************/
double max3(r, g, b)
double r, g, b;
{
/* Local variable */
double rmax;
rmax = r;
if (g > rmax) {
rmax = g;
}
if (b > rmax) {
rmax = b;
}
return rmax;
} /* end of max3 */
/****************************************************/
/* does transfer of sign, |a1|* sign(a2) */
double transfersign(a1,a2)
double a1,a2;
{
if (a2>=0)
return((double)(fabs(a1)));
else
return((double)(-1.0*fabs(a1)));
}
/****************************************************/
double over_pow(x,p)
double x,p;
{
if ( (x < 0.0) && ((p*0.5) - (int)(p*0.5) != 0.0) )
{
return(-pow(-x,p)) ;
}
else
return(pow(x,p)) ;
} /* end of over_pow */
This code is also given as
ExerciseAnswers/resources exercises/chap4/makegamutppm.c
10. Hue is the color, independent of brightness and how much pure white has been added to it. We can
make a simple definition of hue as the set of ratios R:G:B. Suppose a color (i.e., an RGB) is divided
by 2.0, so that the RGB triple now has values 0.5 times its former values. Explain, using numerical
values:
(a) If gamma correction is applied after the division by 2.0 and before the color is stored, does the
darker RGB have the same hue as the original, in the sense of having the same ratios R:G:B of
light emanating from the CRT display device? (We’re not discussing any psychophysical effects
that change our perception — here we’re just worried about the machine itself).
(b) If gamma correction is not applied, does the second RGB have the same hue as the first, when
displayed?
(c) For what color triples is the hue always unchanged?
Answer:
(a) With gamma correction, RGB is stored as (RGB)^(1/gamma) and (RGB/2) is stored as
(RGB/2)^(1/gamma). After the CRT gamma takes effect, color^gamma, the gamma-correction
power law is reversed, and we're back to RGB and RGB/2, so the hue does not change.
(b) But if there is no gamma correction, then RGB results in light (RGB)^gamma, and RGB/2
is viewed as (RGB/2)^gamma, so we see a different hue. As an example, suppose RGB = (1, 1/2, 1/4)
and gamma is 2.0. Then the color viewed is (1, 1/4, 1/16), which is a different hue.
(c) Suppose RGB = (1, 1, 1). Then the color viewed is also (1, 1, 1), and RGB/2 = 1/2·(1, 1, 1) is viewed
as 1/4·(1, 1, 1), so the hue is unchanged (just darker). [Also, if RGB = (1, 0, 0), then RGB/2 = 1/2·(1, 0, 0)
is displayed as 1/4·(1, 0, 0), which is the same hue. Any color with some equal entries, e.g., (1, 1, 0),
will be the same hue, just darker. And any color with two "0" entries will also be the same hue,
just darker.]
11. We wish to produce a graphic that is pleasing and easily readable. Suppose we make the background
color pink. What color text font should we use to make the text most readable? Justify your answer.
Answer:
Pink is a mixture of white and red; say that there are half of each:
    1 1 1
    1 0 0
    -----
    2 1 1  / 2  -->  (1, .5, .5) = pink   (chromaticity = (.5, .25))
Then the complementary color is (1,1,1) − pink = (0, .5, .5)   (chromaticity = (0, .5)),
which is pale cyan. [colorwh.frm]
12. To make matters simpler for eventual printing, we buy a camera equipped with CMY sensors, as
opposed to RGB sensors (CMY cameras are in fact available).
(a) Draw spectral curves roughly depicting what such a camera’s sensitivity to frequency might look
like.
Answer:
(see plot Fig. 4.22)
(b) Could the output of a CMY camera be used to produce ordinary RGB pictures? How?
Answer:
Suppose we had C = G + B (i.e., with coefficients 1.0 times G and B), etc. Then C = (R,G,B) −
(R,0,0) = 1 − R, and so R = 1 − C. So use R = 1 − C, G = 1 − M, B = 1 − Y.
13. Color inkjet printers use the CMY model. When the cyan ink color is sprayed onto a sheet of white
paper,
(a) Why does it look cyan under daylight?
Fig. 4.22: CMY “Block” dye transmittance.
(b) What color would it appear under a blue light? Why?
Answer:
(i) RED from the daylight is absorbed (subtracted).
(ii) BLUE. The CYAN ink will not absorb BLUE, and BLUE is the only color in the light.
Chapter 5
Fundamental Concepts in Video
Exercises
1. NTSC video has 525 lines per frame and 63.6 µsec per line, with 20 lines per field of vertical retrace
and 10.9 µsec horizontal retrace.
(a) Where does the 63.6 µsec come from?
Answer:
1 / (525 lines/frame × 29.97 frames/sec) = 63.6 × 10^−6 sec/line
(b) Which takes more time, horizontal retrace or vertical retrace? How much more time?
Answer:
horizontal = 10.9 × 10^−6 sec;
vertical is 20 lines × 63.6 µsec = 1272 µsec = 1.272 msec, so
vertical is 1272/10.9 ≈ 117 times longer than horizontal.
2. Which do you think has less detectable flicker, PAL in Europe or NTSC in North America? Justify
your conclusion.
Answer:
PAL could be better since it has more lines, but is worse because it has fewer frames/sec.
3. Sometimes the signals for television are combined into fewer than all the parts required for TV trans-
mission.
(a) Altogether, how many and what are the signals used for studio broadcast TV?
Answer:
5
R, G, B, audio, sync; can say “blanking” instead, too.
(b) How many and what signals are used in S-Video? What does S-Video stand for?
Answer:
Luminance+chrominance = 2+audio+sync = 4
Separated video
(c) How many signals are actually broadcast for standard analog TV reception? What kind of video
is that called?
Answer:
1
Composite
4. Show how the Q signal can be extracted from the NTSC chroma signal C (Eq. 5.1) during the
demodulation process.
Answer:
To extract Q:
(a) Multiply the signal C by 2 sin(F_sc t), i.e.,
C · 2 sin(F_sc t) = I · 2 sin(F_sc t) cos(F_sc t) + Q · 2 sin²(F_sc t)
                 = I · sin(2 F_sc t) + Q · (1 − cos(2 F_sc t))
                 = Q + I · sin(2 F_sc t) − Q · cos(2 F_sc t).
(b) Apply a low-pass filter to obtain Q and discard the two higher-frequency (2 F_sc) terms.
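A numeric Matlab check of this demodulation (the subcarrier value and test amplitudes below are arbitrary choices, and simple averaging stands in for a proper low-pass filter):

Fsc = 2*pi*3.58e6;               % NTSC chroma subcarrier, rad/sec
t = 0 : 1e-9 : 1e-5;             % fine time grid
I = 0.3; Q = 0.7;                % constant test values
C = I*cos(Fsc*t) + Q*sin(Fsc*t); % chroma signal, in the form used above
demod = C .* (2*sin(Fsc*t));     % = Q + I*sin(2*Fsc*t) - Q*cos(2*Fsc*t)
Qrec = mean(demod)               % averaging removes the 2*Fsc terms: approx. 0.7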
5. One sometimes hears that the old Betamax format for videotape, which competed with VHS and lost,
was actually a better format. How would such a statement be justified?
Answer:
Betamax has more samples per line: 500, as opposed to 240.
6. We don’t see flicker on a workstation screen when displaying video at NTSC frame rate. Why do you
think this might be?
Answer:
NTSC video is displayed at 30 frames per second, so flicker is possibly present. Nonetheless, when
video is displayed on a workstation screen, the video buffer is read and then rendered on the
screen at a much higher rate, typically the refresh rate — 60 to 90 Hz — so no flicker is per-
ceived. (And in fact most display systems have double buffers, completely removing flicker:
since main memory is much faster than video memory, we keep a copy of the screen in main
memory, and when this buffer update is complete, the whole buffer is copied to the video
buffer.)
7. Digital video uses chroma subsampling. What is the purpose of this? Why is it feasible?
Answer:
Human vision has less acuity in color vision than it has in black and white — one can distinguish
close black lines more easily than colored lines, which soon are perceived as just a mass without
texture as the lines move close to each other. Therefore, it is acceptable perceptually to remove
a good deal of color information. In analog, this is accomplished in broadcast TV by simply
assigning a smaller frequency bandwidth to color than to black and white information. In dig-
ital, we “decimate” the color signal by subsampling (typically, averaging nearby pixels). The
purpose is to have less information to transmit or store.
8. What are the most salient differences between ordinary TV and HDTV?
Answer:
More pixels, and aspect ratio of 16/9 rather than 4/3.
What was the main impetus for the development of HDTV?
Immersion — “being there”. Good for interactive systems and applications such as virtual
reality.
9. What is the advantage of interlaced video? What are some of its problems?
Answer:
Positive: Reduce flicker. Negative: Introduces serrated edges to moving objects and flickers
along horizontal edges.
10. One solution that removes the problems of interlaced video is to de-interlace it. Why can we not just
overlay the two fields to obtain a de-interlaced image? Suggest some simple de-interlacing algorithms
that retain information from both fields.
Answer:
The second field is captured at a later time than the first, creating a temporal shift between the
odd and even lines of the image.
The methods used to overcome this are basically two: non-motion compensated and motion
compensated de-interlacing algorithms.
The simplest non-motion compensated algorithm is called “Weave”; it performs linear interpo-
lation between the fields to fill in a full, “progressive”, frame. A defect with this method is that
moving edges show up with significant serrated lines near them.
A better algorithm is called "Bob": in this algorithm, one field is discarded and a full frame is
interpolated from a single field. This method generates no motion artifacts (but of course detail
is reduced in the resulting progressive image).
In a vertical-temporal (VT) de-interlacer, vertical detail is reduced for higher temporal frequen-
cies. Other, non-linear, techniques are also used.
Motion compensated de-interlacing performs inter-field motion compensation and then com-
bines fields so as to maximize the vertical resolution of the image.
Chapter 6
Basics of Digital Audio
Exercises
1. My old Soundblaster card is an 8–bit card.
(a) What is it 8 bits of?
Answer:
Quantization levels (not sampling frequency).
(b) What is the best SQNR (Signal to Quantization Noise Ratio) it can achieve?
Answer:
Best SQNR is 1 level out of 256 possible levels.
Calculate SQNR using the largest value in the dynamic range:
SQNR = 20 log_10 (255/2^0)
     ≈ 20 log_10 (2^8)
     = 20 * 8 * log_10 (2)
     ≈ 20 * 8 * 0.3
     = 48 dB (actually, 48.16 dB)
2. If a set of ear protectors reduces the noise level by 30 dB, how much do they reduce the intensity (the
power)?
Answer:
A reduction in intensity by a factor of 1000.
3. A loss of audio output at both ends of the audible frequency range is inevitable, due to the frequency
response function of an audio amplifier and the medium (e.g., tape).
(a) If the output was 1 volt for frequencies at midrange, what is the output voltage after a loss of
−3 dB at 18 kHz?
(b) To compensate for the loss, a listener can adjust the gain (and hence the output) on an equalizer
at different frequencies. If the loss remains −3 dB and a gain through the equalizer is 6 dB at
18 kHz, what is the output voltage now? Hint: Assume log_10 2 = 0.3.
Answer:
(a) 20 log(V/1) = −3, so 2 log V = −0.3 = −log 2; thus log(V²) = −log 2, and V = 1/√2 ≈ 0.7 volts.
(b) −3 + 6 = 3 dB; V = √2 ≈ 1.4 volts.
4. Suppose the sampling frequency is 1.5 times the true frequency. What is the alias frequency?
Answer:
0.5 times the true frequency.
5. In a crowded room, we can still pick out and understand a nearby speaker’s voice, notwithstanding
the fact that general noise levels may be high. This is known as the cocktail-party effect. The way it
operates is that our hearing can localize a sound source by taking advantage of the difference in phase
between the two signals entering our left and right ears (binaural auditory perception). In mono, we
could not hear our neighbor’s conversation well if the noise level were at all high. State how you
think a karaoke machine works. Hint: The mix for commercial music recordings is such that the
“pan” parameter is different going to the left and right channels for each instrument. That is, for an
instrument, either the left or right channel is emphasized. How would the singer’s track timing have
to be recorded to make it easy to subtract the sound of the singer (which is typically done)?
Answer:
For the singer, the left and right channels are always mixed with the exact same pan. This information
can be used to subtract out the sound of the singer. To do so, replace the left channel by the difference
between the left and the right, and boost the maximum amplitude; and similarly for the right
channel.
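A minimal Matlab sketch of this subtraction (stereo is an N × 2 array of samples — a hypothetical name):

diffsig = stereo(:,1) - stereo(:,2);   % the identically-panned vocal cancels
diffsig = diffsig / max(abs(diffsig)); % boost back to full amplitude
karaoke = [diffsig, -diffsig];         % difference signal on both channels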
6. The dynamic range of a signal V is the ratio of the maximum to the minimum absolute value, ex-
pressed in decibels. The dynamic range expected in a signal is to some extent an expression of the
signal quality. It also dictates the number of bits per sample needed to reduce the quantization noise
to an acceptable level. For example, we may want to reduce the noise to at least an order of magnitude
below Vmin. Suppose the dynamic range for a signal is 60 dB. Can we use 10 bits for this signal? Can
we use 16 bits?
Answer:
The range is mapped to −2^(N−1) .. (2^(N−1) − 1). V_max is mapped to the top value, ≈ 2^(N−1). In fact,
the whole range from V_max down to (V_max − q/2) is mapped to that value, where q is the quantization interval.
The largest negative signal, −V_max, is mapped to −2^(N−1). Therefore q = (2 · V_max)/2^N,
since there are 2^N intervals.
The dynamic range is V_max/V_min, where V_min is the smallest positive voltage we can see that is
not masked by the noise. Since the dynamic range is 60 dB, we have 20 log_10(V_max/V_min) = 60,
so V_min = V_max/1000.
At 10 bits, the quantization noise, equal to half a quantization interval, is q/2 = (2 ·
V_max/2^N)/2 = V_max/2^10 = V_max/1024. This is not an order of magnitude below V_min = V_max/1000,
so it is not sufficient intensity resolution.
At 16 bits, the noise is V_max/2^16 = V_max/(64 · 1024), which is more than an order of magnitude
smaller than V_min, so it is fine.
7. Suppose the dynamic range of speech in telephony implies a ratio V_max/V_min of about 256. Using
uniform quantization, how many bits should we use to encode speech to make the quantization noise
at least an order of magnitude less than the smallest detectable telephonic sound?
Answer:
V_min = V_max/256.
The quantization noise is V_max/2^n if we use n bits. To get the quantization noise about a
factor of 16 below the minimum sound, we need V_max/2^n ≤ V_min/16 = V_max/(256 × 16) =
V_max/2^12; therefore we need 12 bits.
8. Perceptual nonuniformity is a general term for describing the nonlinearity of human perception. That
is, when a certain parameter of an audio signal varies, humans do not necessarily perceive the differ-
ence in proportion to the amount of change.
(a) Briefly describe at least two types of perceptual nonuniformities in human auditory perception.
(b) Which one of them does A-law (or μ-law) attempt to approximate? Why could it improve
quantization?
Answer:
(a):
(1) logarithmic response to magnitude (loudness);
(2) different sensitivity to different frequencies.
(b): A-law (or μ-law) approximates the nonlinear response to magnitude. It makes better use
of the limited number of bits available for each quantized sample, since finer quantization
steps are in effect allocated to the small amplitudes that occur most often.
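A minimal MATLAB sketch of μ-law companding (with μ = 255, the value used in North American
telephony) illustrates the point; the 8-bit uniform quantizer applied to the compressed signal
is an assumption of this sketch:

% Compress, uniformly quantize, then expand; the reconstruction error
% is much smaller for small amplitudes than with uniform quantization.
mu = 255;
x  = linspace(-1, 1, 1001);                        % normalized input
y  = sign(x) .* log(1 + mu*abs(x)) / log(1 + mu);  % mu-law compression
yq = round((y + 1)/2 * 255)/255 * 2 - 1;           % 8-bit uniform quantizer
xr = sign(yq) .* ((1 + mu).^abs(yq) - 1) / mu;     % mu-law expansion
plot(x, abs(xr - x));                              % error smallest near x = 0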
9. Draw a diagram showing a sinusoid at 5.5 kHz and sampling at 8 kHz (show eight intervals between
samples in your plot). Draw the alias at 2.5 kHz and show that in the eight sample intervals, exactly
5.5 cycles of the true signal fit into 2.5 cycles of the alias signal.
Answer:
% matlab script
% time = 1 msec only.
truesamplingfreq = 1000*5.5; % 5.5 kHz
freqin1msec = 5.5; % 5.5 cycles per msec.
aliassamplingfreq = 1000*2.5; % 2.5 kHz
aliasfreqin1msec = 2.5; % 2.5 cycles per msec.
drawingpoints = 1000; % just for making smooth figure.
time = (0:drawingpoints)/drawingpoints;
% define a signal with 5.5x10^3 cps.
signal = sin(freqin1msec*2*pi*time);
% define an alias signal with 2.5x10^3 cps.
alias = -sin(aliasfreqin1msec*2*pi*time);
% And how did we get this alias from sampling at 8kHz?--
% We undersample at 8kHz==> just 8 samples in 1msec.
undersamplinginterval = max(time)/8.0;
undersamplingtimes = (0:undersamplinginterval:max(time));
undersampled = sin(freqin1msec*2*pi*undersamplingtimes);
%
plot(time,signal,’g’,’LineWidth’,2);
hold on
plot(time,alias,’r--’);
%
plot(undersamplingtimes,undersampled, ’ob’);
xlabel(’Time (msec)’);
ylabel(’Signal’);
hold off
print -depsc /figs/chap6/alias.eps
10. Suppose a signal contains tones at 1, 10, and 21 kHz and is sampled at the rate 12 kHz (and then
processed with an antialiasing filter limiting output to 6 kHz). What tones are included in the output?
Hint: Most of the output consists of aliasing.
Answer:
The 1 kHz tone passes unchanged; 10 kHz aliases to 12 − 10 = 2 kHz; and 21 kHz aliases to
2×12 − 21 = 3 kHz. So tones at 1, 2, and 3 kHz are present in the output.
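The folding can be checked mechanically; a small MATLAB sketch:

% Fold each input tone into the baseband [0, fs/2], with fs = 12 kHz:
fs = 12; tones = [1 10 21];                 % in kHz
folded = abs(tones - fs*round(tones/fs))    % -> 1  2  3 (kHz)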
11. (a) Can a single MIDI message produce more than one note sounding?
Answer:
No.
(b) Is it possible for more than one note to sound at once on a particular instrument? If so, how is it
done in MIDI?
Answer:
Yes — use two NoteOn messages for one channel before the NoteOff message is sent.
(c) Is the Program Change MIDI message a Channel Message? What does this message accom-
plish? Based on the Program Change message, how many different instruments are there in
General MIDI? Why?
Answer:
Yes.
Replaces patch for a channel.
128, since the message has one data byte, which must be in 0..127.
(d) In general, what are the two main kinds of MIDI messages? In terms of data, what is the main
difference between the two types of messages? Within those two categories, list the different
subtypes.
Answer:
Channel Messages and System Messages.
Channel voice messages, Channel mode messages, System real-time messages, System com-
mon messages, System exclusive messages.
Channel messages have a status byte with leading most-significant-bit set, and 4 bits of
channel information; System messages have the 4 MSBs set.
12. (a) Give an example (in English, not hex) of a MIDI voice message.
Answer:
NoteOn
(b) Describe the parts of the “assembler” statement for the message.
Answer:
(1) opcode = NoteOn; (2) data = note (key) number; (3) data = "velocity", i.e., loudness.
(c) What does a Program Change message do? Suppose Program change is hex “&HC1.” What
does the instruction “&HC103” do?
Answer:
It changes the patch (the instrument program) for a channel. "&HC103" changes the patch to
#4 on channel 2: the status byte &HC1 is a Program Change (opcode &HC) on channel 2 (low
nibble 1, with channels counted from 1), and the data byte &H03 selects program 3 counting
from 0, i.e., patch #4.
Fig. 6.19: (a) DPCM reconstructed signal (dotted line) tracks the input signal (solid line). (b) DPCM
reconstructed signal (dashed line) steers farther and farther from the input signal (solid line).
13. In PCM, what is the delay, assuming 8 kHz sampling? Generally, delay is the time penalty associated
with any algorithm due to sampling, processing, and analysis.
Answer:
Since there is no processing associated with PCM, the delay is simply the time interval between
two samples, and at 8 kHz, this is 0.125 msec.
14. (a) Suppose we use a predictor as follows:

        f̂_n = trunc[ (f̃_{n−1} + f̃_{n−2}) / 2 ]
        e_n = f_n − f̂_n                                        (6.25)
Also, suppose we adopt the quantizer Equation (6.20). If the input signal has values as follows:
20 38 56 74 92 110 128 146 164 182 200 218 236 254
show that the output from a DPCM coder (without entropy coding) is as follows:
20 44 56 74 89 105 121 153 161 181 195 212 243 251
Figure 6.19(a) shows how the quantized reconstructed signal tracks the input signal. As a pro-
gramming project, write a small piece of code to verify your results.
(b) Suppose by mistake on the coder side we inadvertently use the predictor for lossless coding,
Equation (6.14), using original values f_n instead of quantized ones, f̃_n. Show that on the decoder
side we end up with reconstructed signal values as follows:
20 44 56 74 89 105 121 137 153 169 185 201 217 233
so that the error gets progressively worse.
Figure 6.19(b) shows how this appears: the reconstructed signal gets progressively worse. Mod-
ify your code from above to verify this statement.
Answer:
% dpcm.m -- matlab script -- included in figs/chap6
% Let the predictor be
%   fhat_n = fix( (ftilde_{n-1} + ftilde_{n-2}) / 2 )
% and let the quantizer be
%   Q(e) = 16*fix( (255 + e)/16 ) - 256 + 8
%%%%%%
errors = (-255):255; %set up for errors % length==511
% Quantizer:
errorstilde =16*fix((255+errors)/16) - 256 + 8;
% ==which errors go to which levels.
plot(errors,errorstilde); % staircase
%%%%%%
signal = load('figs/chap6/signal.dat', '-ascii');
% 20 38 56 74 92 110 128 146 164 182 200 218 236 254
signal = [signal(1); signal(:)]; % prepend an extra first element
ss = length(signal);
errorunquantized = zeros(ss,1); % declare
signaltilde = errorunquantized; % declare
signalpredicted = errorunquantized; % declare
%% first and second elements are exact:
signalpredicted(1:2) = signal(1:2);
errorunquantized(1:2) = 0;
errorquantized(1:2) = 0;
signaltilde(1:2) = signal(1:2);
%%
for i = 3:ss
signalpredicted(i) = fix((signaltilde(i-1)+...
signaltilde(i-2))/2);
% in (-255):255
errorunquantized(i) = signal(i) - signalpredicted(i);
errorquantized(i) = 16*fix((255+errorunquantized(i))/16)...
- 256 + 8;
signaltilde(i) = signalpredicted(i) + errorquantized(i);
end
% decode: what’s received is errorquantized(i)
for i = 3:ss
signalpredicted(i) = fix((signaltilde(i-1)+...
signaltilde(i-2))/2);
signaltilde(i) = signalpredicted(i) + errorquantized(i);
end
% signal = 20 20 38 56 74 92 110 128 146 164 182 200
% 218 236 254
% signaltilde = 20 20 44 56 74 89 105 121 153 161 181
% 195 212 243 251
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Now let’s do it wrong:
signaltildewrong = signaltilde;
signalpredictedwrong = signalpredicted;
errorunquantizedwrong = errorunquantized;
errorquantizedwrong = errorquantized;
for i = 3:ss
signalpredictedwrong(i)= fix((signal(i-1)+signal(i-2))/2);
% in (-255):255
errorunquantizedwrong(i) = signal(i) - signalpredictedwrong(i);
errorquantizedwrong(i) = 16*fix((255+...
errorunquantizedwrong(i))/16) - 256 + 8;
signaltildewrong(i) = signalpredictedwrong(i) +...
errorquantizedwrong(i);
end
% signaltildewrong = 20 20 44 53 71 89 107 125 143 161
% 179 197 215 233 251
% errorquantizedwrong = 0 0 24 24 24 24 24 24 24 24 24 24
% 24 24 24
% decode:
for i = 3:ss
signalpredictedwrong(i) = fix((signaltildewrong(i-1)+...
signaltildewrong(i-2))/2);
signaltildewrong(i) = signalpredictedwrong(i) +...
errorquantizedwrong(i);
end
% Now signaltildewrong = 20 20 44 56 74 89 105 121 137 153...
% 169 185 201 217 233
% Gets progressively lower than the correct line.
plot(signal(2:end));
xlabel(’Time’);
ylabel(’Signal’);
hold on
plot(signaltilde(2:end),’--’);
hold off
plot(signal(2:end));
xlabel(’Time’);
ylabel(’Signal’);
hold on
plot(signaltildewrong(2:end),’--’);
hold off
Chapter 7
Lossless Compression Algorithms
Exercises
1. Suppose eight characters have a distribution A:(1), B:(1), C:(1), D:(2), E:(3), F:(5), G:(5), H:(10).
Draw a Huffman tree for this distribution. (Because the algorithm may group subtrees with equal
probability in a different order, your answer is not strictly unique.)
Answer:
                     28
                   /    \
               10          18
              /  \        /  \
             5   G:(5)   8   H:(10)
            / \         / \
           2  E:(3)    3  F:(5)
          / \         / \
      A:(1) B:(1) C:(1) D:(2)
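The greedy merge order is easy to verify mechanically; the following MATLAB sketch prints one
valid sequence of merges (as noted above, ties may be grouped in a different order):

% Repeatedly merge the two smallest counts, as Huffman's algorithm does.
counts = [1 1 1 2 3 5 5 10];    % A B C D E F G H
while numel(counts) > 1
    counts = sort(counts);
    fprintf('merge %d + %d -> %d\n', counts(1), counts(2), ...
            counts(1) + counts(2));
    counts = [counts(1) + counts(2), counts(3:end)];
end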
2. (a) What is the entropy (η) of the image below, where numbers (0, 20, 50, 99) denote the gray-level
intensities?
99 99 99 99 99 99 99 99
20 20 20 20 20 20 20 20
0 0 0 0 0 0 0 0
0 0 50 50 50 50 0 0
0 0 50 50 50 50 0 0
0 0 50 50 50 50 0 0
0 0 50 50 50 50 0 0
0 0 0 0 0 0 0 0
(b) Show step by step how to construct the Huffman tree to encode the above four intensity values
in this image. Show the resulting code for each intensity value.
(c) What is the average number of bits needed for each pixel, using your Huffman code? How does
it compare to η?
Answer:
(a) P_20 = P_99 = 1/8; P_50 = 1/4; P_0 = 1/2.

    η = 2 × (1/8) log2 8 + (1/4) log2 4 + (1/2) log2 2 = 3/4 + 1/2 + 1/2 = 1.75
(b) Only the final tree is shown below. Resulting code: 0: “1”, 50: “01”, 20: “000”, 99: “001”
              P3
            0/  \1
           P2    0
         0/  \1
        P1    50
      0/  \1
     20    99
(c) Average number of bits = 0.5×1 + 0.25×2 + 2×0.125×3 = 1.75.
This happens to be identical to η; that occurs only when all probabilities are of the form
2^(−k) with k an integer. Otherwise, this number will be larger than η.
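A short MATLAB check of (a) and (c):

% Entropy and Huffman average code length for the 8x8 image above.
p   = [1/2 1/4 1/8 1/8];        % P(0), P(50), P(20), P(99)
eta = sum(p .* log2(1./p))      % entropy: 1.75 bits/pixel
len = [1 2 3 3];                % code lengths from part (b)
avg = sum(p .* len)             % average: also 1.75 bits/pixel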
3. Consider an alphabet with two symbols A, B, with probability P(A) = x and P(B) = 1 − x.
(a) Plot the entropy as a function of x. You might want to use log2(3) = 1.6, log2(7) = 2.8.
Answer:
(Plot: Entropy vs. Prob of A; the curve is 0 at x = 0 and x = 1, with maximum 1 at x = 1/2.)
If x = 1/2: η = 1/2 + 1/2 = 1.
If x = 1/4: η = (1/4)×2 + (3/4)×(log2 4 − log2 3) = 1/2 + (3/4)(2 − 1.6) = 1/2 + 0.3 = 0.8.
If x = 3/4: the same, by symmetry.
If x = 1/8: η = (1/8)×3 + (7/8)×log2(8/7) = 3/8 + (7/8)(3 − log2 7)
           = 3/8 + (7/8)(3 − 2.8) = 3/8 + (7/8)(0.2) = 0.375 + 0.175 = 0.55.
If x = 0: do not count that symbol; → η = 1 × log2 1 = 0.
%matlab script:
x = (0:0.01:1)’;
ss = size(x,1);
x(1) = 0.000001;
x(ss) = 1.0-0.000001;
y = enf(x);
plot(x,y);
xlabel(’Prob of A’);
ylabel(’Entropy’);
print -depsc entropyplot.eps
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function theentropy = enf(x)
theentropy = ( x.*log(1.0./x) + (1-x).*log(1.0./(1-x)) )...
/log(2);
(b) Discuss why it must be the case that if the probabilities of the two symbols are 1/2 + ε and 1/2 − ε,
with small ε, the entropy is less than the maximum.
Answer:
H = Σ_i p_i lg(1/p_i) and Σ_i p_i = 1, so that if all probabilities are equal, then p_i = 1/N and
H = Σ_i (1/N) lg N = lg N. E.g., if N = 256, H = 8.
Now suppose the probabilities are not all equal: say one is a little smaller than 1/N, and
one is a little bigger. The small probability has a larger lg(1/probability) than the one
for the even distribution, and the large probability has a smaller lg(1/probability) than the
one for the even distribution. But since lg(1/x) changes faster than x for x < 1 [the derivative
of x is 1.0, while that of lg(1/x) is −1/(x ln 2) ≈ −1.4427/x], the smaller probability has
more of an effect, and the result is that the overall entropy is less.
[If we want to be formal, then suppose we have just 2 symbols, one with probability 1/2 + ε,
and the other with probability 1/2 − ε. Forming Taylor series,
lg(p1) ≈ −1 + (2/ln 2)ε − (2/ln 2)ε²,
lg(p2) ≈ −1 − (2/ln 2)ε − (2/ln 2)ε²,
so H ≈ −(1/2 + ε)·lg(p1) − (1/2 − ε)·lg(p2) equals 1 − 2ε²/ln 2.]
(c) Generalize the above result by showing that, for a source generating N symbols, the entropy is
maximum when the symbols are all equiprobable.
Answer:
For i = 1..N, p_i = 1/N if equiprobable.
H = Σ_{i=1}^{N} (1/N) lg N = lg N.
Consider 2 symbols, as above: equiprobable gives H = 1, and not equiprobable reduces
H. By induction, the result follows.
(d) As a small programming project, write code to verify the conclusions above.
Answer:
# Maple script:
# Suppose that have alphabet of 2 symbols,
# with prob’s 1/2+eps,1/2-eps:
p1 := 1/2+eps;
p2 := 1/2-eps;
lp1 := log(p1)/log(2);
lp2 := log(p2)/log(2);
taylor(lp1,eps,3); # -1 + 2/ln(2)*eps - 2/ln(2)*eps^2;
taylor(lp2,eps,3); # -1 - 2/ln(2)*eps - 2/ln(2)*eps^2;
lp1 := -1 + 2/ln(2)*eps - 2/ln(2)*eps^2;
lp2 := -1 - 2/ln(2)*eps - 2/ln(2)*eps^2;
ans := - p1*lp1 - p2*lp2;
simplify(ans); # 1 - 2*epsˆ2/ln(2)
4. Extended Huffman Coding assigns one codeword to each group of k symbols. Why is l̄ (the
average number of bits for each symbol) still no less than the entropy η, as indicated in Eq. (7.7)?
Answer:
Eq. (7.6) shows η ≤ l̄ < η + 1 for Huffman coding. If we simply treat each group of k symbols
as a new symbol in an ordinary Huffman coding, and use l̄^(k) and η^(k) as the average bit length
for each new symbol and the new entropy, respectively, we have

    η^(k) ≤ l̄^(k) < η^(k) + 1

Since l̄^(k) is the average number of bits needed for k symbols, l̄^(k) = k · l̄. It follows that

    η^(k)/k ≤ l̄ < η^(k)/k + 1/k

It can be proven (see Sayood [2], p. 50 for proof) that η^(k) = k · η. Therefore, for Extended
Huffman Coding,

    η ≤ l̄ < η + 1/k

— that is, the average number of bits for each symbol is still no less than the entropy η.
5. Arithmetic Coding and Huffman Coding are two popular lossless compression methods.
(a) What are the advantages and disadvantages of Arithmetic Coding as compared to Huffman Cod-
ing?
Answer:
The main advantage of Arithmetic Coding over Huffman Coding is that the number of bits per
symbol can be fractional, whereas the minimum code length for a symbol in Huffman Coding
is 1, since we create a binary tree with 0 or 1 attached to each branch. A disadvantage is
that Arithmetic Coding requires more computation per symbol.
(b) Suppose the alphabet is [A, B, C], and the known probability distribution is P_A = 0.5,
P_B = 0.4, P_C = 0.1. For simplicity, let's also assume that both encoder and decoder know that the
length of the messages is always 3, so there is no need for a terminator.
i. How many bits are needed to encode the message BBB by Huffman coding?
Answer:
6 bits. Huffman Code: A - 0, B - 10, C - 11; or A - 1, B - 00, C - 01.
ii. How many bits are needed to encode the message BBB by arithmetic coding?
Answer:
Symbol    low      high     range
          0        1.0      1.0
B         0.5      0.9      0.4
B         0.7      0.86     0.16
B         0.78     0.844    0.064
4 bits. Binary codeword is 0.1101, which is 0.8125.
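The interval narrowing is easy to reproduce; a minimal MATLAB sketch, using the symbol
intervals A: [0, 0.5), B: [0.5, 0.9), C: [0.9, 1):

% Arithmetic-coding interval for the message BBB.
lowB = 0.5; highB = 0.9;       % interval assigned to symbol B
low = 0; high = 1;
for k = 1:3                    % three B's
    range = high - low;
    high  = low + range*highB;
    low   = low + range*lowB;
end
fprintf('final interval: [%g, %g)\n', low, high);   % [0.78, 0.844)

Any binary fraction inside [0.78, 0.844), such as 0.1101 (binary) = 0.8125, identifies the message.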
6. (a) What are the advantages of Adaptive Huffman Coding compared to the original Huffman Coding
algorithm?
(b) Assume that the Adaptive Huffman Coding is used to code an information source S with a
vocabulary of four letters (a, b, c, d). Before any transmission,the initial coding is a = 00, b =
01, c = 10, d = 11. As in the example illustrated in Fig. 7.7, a special symbol NEW will be sent
before any letter if it is to be sent the first time.
Fig. 7.11 is the Adaptive Huffman Tree after sending letters aabb.
After that, the additional bitstream received by the decoder for the next few letters is 01010010101.
i. What are the additional letters received?
ii. Draw the adaptive Huffman trees after each of the additional letters is received.
             4
          0/   \1
          2     a:2
       0/   \1
    NEW:0    b:2

Fig. 7.11: Adaptive Huffman Tree.
Answer:
(a) Like other adaptive compression algorithms, it is dynamic: it offers better compression
and works even when prior statistics of the data distribution are unavailable, as they are in
most multimedia applications. It also saves overhead, since no symbol table needs to be
transmitted.
(b) (i) The additional letters received are “b (01) a (01) c (00 10) c (101)”.
(ii) The trees are as below.
[Figure: the four adaptive Huffman trees, updated after each received letter: after another
"b" (root count 5), after another "a" (6), after "c" (7, with the NEW node replaced by a new
NEW/c subtree), and after another "c" (8); not reproduced here.]
7. Compare the rate of adaptation of adaptive Huffman coding and adaptive arithmetic coding (see the
textbook web site for the latter). What prevents each method from adapting to quick changes in
source statistics?
Answer:
Both methods have a similar rate of adaptation, since both use symbol occurrence counts
as estimates of symbol probability. The difference between the two algorithms is how these
counts are used. In the adaptive Huffman case, the occurrence counts are used as weights on
each node to enforce the sibling property, while adaptive arithmetic coding uses them to form
a cumulative frequency table. In either case, any change in the source statistics is
reflected right away in both algorithms.
What prevents both algorithms from adapting to quick changes in input statistics is that
symbols must occur enough times, in proportion to their probabilities, for the Huffman tree
and the cumulative frequency table to converge to a form that reflects the new statistics.
Thus, if a large number of symbols has already been processed, a quick change in the input
statistics will show up only slowly in either algorithm. For adaptive arithmetic coding, this
can be resolved by renormalizing the cumulative frequency table from time to time. Similarly,
for adaptive Huffman coding, the Huffman tree can be purged to increase the adaptiveness.
8. Consider the dictionary-based LZW compression algorithm. Suppose the alphabet is the set of
symbols {0, 1}. Show the dictionary (symbol sets plus associated codes) and output for LZW
compression of the input
0 1 1 0 0 1 1
Answer:
With input 0 1 1 0 0 1 1, we have
DICTIONARY
w k wk | output | index symbol
- - -- | ------ | ----- ------
NIL 0 0 | |
0 1 01 | 0 | 2 01
1 1 11 | 1 | 3 11
1 0 10 | 1 | 4 10
0 0 00 | 0 | 5 00
0 1 01 | |
01 1 011| 2 | 6 011
1 | |
(Students don't need to finish the output for the final input value of 1,
since that wasn't made clear in the algorithm.)
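For checking the table, here is a minimal MATLAB sketch of the LZW encoder on this input
(the last line flushes the code for the remaining string w):

% LZW encoding over the alphabet {0,1}.
s = '0110011';
dict = containers.Map({'0','1'}, {0, 1});   % initial dictionary
nextcode = 2; w = ''; out = [];
for k = s
    wk = [w k];
    if isKey(dict, wk)
        w = wk;                    % keep extending the current string
    else
        out(end+1) = dict(w);      % emit the code for w
        dict(wk) = nextcode;       % add wk to the dictionary
        nextcode = nextcode + 1;
        w = k;
    end
end
out(end+1) = dict(w);              % flush the final string
disp(out)                          % 0 1 1 0 2 1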
9. Implement Huffman coding, adaptive Huffman, arithmetic coding, and the LZW coding algorithms
using your favorite programming language. Generate at least three types of statistically different
artificial data sources to test your implementation of these algorithms. Compare and comment on
each algorithm’s performance in terms of compression ratio for each type of data source.
Answer:
To show the difference between these algorithms, the students can generate a uniformly dis-
tributed source, a normally distributed source, and a Markov source (source with memory). It
is expected that arithmetic coding to perform well for the first two types, while the LZW algo-
rithm should perform well for the last kind. The students should also note how well the input
statistics is adapted in the adaptive Huffman algorithm.
Chapter 8
Lossy Compression Algorithms
Exercises
1. Assume we have an unbounded source we wish to quantize using an M-bit midtread uniform
quantizer. Derive an expression for the total distortion if the step size is 1.
Answer:
The total distortion can be divided into two components: the granular distortion and the overload
distortion. Let k = 2^M. Since we have an M-bit midtread quantizer, the number of reconstruction
levels is k − 1. Since two reconstruction values are allocated for the overload regions,
there are k − 3 reconstruction levels in the granular region. Therefore, we have the following
expression for the total distortion:

    D = D_g + D_o
      = ( 2 Σ_{i=1}^{k/2−2} ∫_{i−0.5}^{i+0.5} (x − i)² f_X(x) dx + ∫_{−0.5}^{0.5} x² f_X(x) dx )
        + 2 ∫_{k/2−2}^{∞} ( x − (k/2 − 2 + 0.5) )² f_X(x) dx
2. Suppose the domain of a uniform quantizer is [−b_M, b_M]. We define the loading fraction as

    γ = b_M / σ

where σ is the standard deviation of the source. Write a simple program to quantize a Gaussian-
distributed source having zero mean and unit variance using a 4-bit uniform quantizer. Plot the SNR
against the loading fraction and estimate the optimal step size that incurs the least amount of distortion
from the graph.
Answer:
The plot should look something like Fig. 8.28.
If there are M reconstruction levels, then the optimal step size is 2·b*_M / M, where b*_M is
the value of b_M at which the SNR is maximum on the graph.
Fig. 8.28: SNR (dB) vs. Loading Fraction.
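A minimal MATLAB sketch of the requested program, assuming a 16-level (4-bit) midrise uniform
quantizer and unit-variance Gaussian samples:

% SNR of a 4-bit uniform quantizer on [-bM, bM] vs. loading fraction.
x = randn(100000, 1);              % zero-mean, unit-variance source
gammas = 0.5:0.1:6;                % loading fractions bM/sigma to try
snr = zeros(size(gammas));
for j = 1:numel(gammas)
    bM = gammas(j);                % sigma = 1, so bM = gamma
    q  = 2*bM/16;                  % step size for 16 levels
    xq = q*(floor(x/q) + 0.5);     % midrise uniform quantizer
    xq = min(max(xq, -bM + q/2), bM - q/2);   % clamp the overload regions
    snr(j) = 10*log10(sum(x.^2)/sum((x - xq).^2));
end
plot(gammas, snr); xlabel('Loading fraction'); ylabel('SNR (dB)');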
3. Suppose the input source is Gaussian-distributed with zero mean and unit variance — that is, the
probability density function is defined as

    f_X(x) = (1/√(2π)) e^(−x²/2)                                  (8.66)

We wish to find a four-level Lloyd–Max quantizer. Let y^i = [y_0^i, …, y_3^i] and
b^i = [b_0^i, …, b_3^i]. The initial reconstruction levels are set to y^0 = [−2, −1, 1, 2]. This
source is unbounded, so the outer two boundaries are +∞ and −∞.
Follow the Lloyd–Max algorithm in this chapter: the other boundary values are calculated as the mid-
points of the reconstruction values. We now have b^0 = [−∞, −1.5, 0, 1.5, ∞]. Continue one more
iteration for i = 1, using Eq. (8.13), and find y_0^1, y_1^1, y_2^1, y_3^1, using
numerical integration. Also calculate the squared error of the difference between y^1 and y^0.
Iteration is repeated until the squared error between successive estimates of the reconstruction levels
is below some predefined threshold ε. Write a small program to implement the Lloyd–Max quantizer
described above.
Answer:
The reconstruction values at i = 1 are calculated using Eq. (8.13). Using numerical integra-
tion, we have

    y_0^1 = ∫_{−∞}^{−1.5} (x/√(2π)) e^(−x²/2) dx / ∫_{−∞}^{−1.5} (1/√(2π)) e^(−x²/2) dx = −1.94,
    y_1^1 = ∫_{−1.5}^{0}  (x/√(2π)) e^(−x²/2) dx / ∫_{−1.5}^{0}  (1/√(2π)) e^(−x²/2) dx = −0.62,
    y_2^1 = ∫_{0}^{1.5}   (x/√(2π)) e^(−x²/2) dx / ∫_{0}^{1.5}   (1/√(2π)) e^(−x²/2) dx = 0.62,
    y_3^1 = ∫_{1.5}^{∞}   (x/√(2π)) e^(−x²/2) dx / ∫_{1.5}^{∞}   (1/√(2π)) e^(−x²/2) dx = 1.94.

The mean squared error between y^1 and y^0 is then

    [ (−1.94 + 2)² + (−0.62 + 1)² + (0.62 − 1)² + (1.94 − 1)² ] / 4 ≈ 0.29

This process is repeated until the squared error between successive estimates of the reconstruc-
tion levels is below some predefined threshold ε.
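A minimal MATLAB sketch of such a program, using numerical integration (the stopping
threshold eps0 plays the role of ε here):

% Lloyd-Max iteration for a 4-level quantizer on a unit Gaussian source.
f  = @(x) exp(-x.^2/2)/sqrt(2*pi);          % Gaussian pdf
xf = @(x) x.*exp(-x.^2/2)/sqrt(2*pi);
y = [-2 -1 1 2];                            % initial reconstruction levels
eps0 = 1e-6; err = inf;
while err > eps0
    b = [-inf, (y(1:end-1)+y(2:end))/2, inf];   % boundaries = midpoints
    ynew = zeros(size(y));
    for i = 1:4                             % centroid of each region
        ynew(i) = integral(xf, b(i), b(i+1)) / integral(f, b(i), b(i+1));
    end
    err = mean((ynew - y).^2);
    y = ynew;
end
disp(y)    % converges to roughly [-1.51 -0.45 0.45 1.51]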
4. If the block size for a 2D DCT transform is 8 × 8, and we use only the DC components to create a
thumbnail image, what fraction of the original pixels would we be using?
Answer:
1/64, because each 8 × 8 block has only one DC coefficient.
5. When the block size is 8, the definition of the DCT is given in Eq. (8.17).
(a) If an 8 × 8 grayscale image is in the range 0 .. 255, what is the largest value a DCT coefficient
could be, and for what input image? (Also,