Web Answers for Digital Communication 
 
Chapter 2: Source Encoding 
 
Example 24 
It is required to transmit 800 characters/sec, where each character is represented by its 7-bit ASCII codeword followed by an eighth bit for error detection. A multilevel PAM waveform with M = 16 levels is used.
(a) What is the effective transmitted bit rate? 
(b) What is the symbol rate? 
 
Solution: 
(a) Total number of bits in one character = 7 + 1 = 8 bits.
So, the number of bits transmitted per second
 = Effective transmitted bit rate
 = 800 × 8 bps
 = 6400 bits per second.
(b) M = 16 and M = 2^n, so 16 = 2^4. Hence n = 4.
Thus the number of bits per symbol = 4.
So, symbol rate = 6400 bits/s ÷ 4 bits/symbol = 1600 symbols/second.
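As a quick numerical check of this solution (not part of the original text), the short Python sketch below recomputes the bit rate and symbol rate from the given character rate and M = 16 levels.

import math

char_rate = 800            # characters per second
bits_per_char = 7 + 1      # 7-bit ASCII plus one parity bit
M = 16                     # PAM levels

bit_rate = char_rate * bits_per_char       # bits per second
bits_per_symbol = int(math.log2(M))        # n, where M = 2**n
symbol_rate = bit_rate / bits_per_symbol   # symbols per second

print(bit_rate, bits_per_symbol, symbol_rate)   # 6400 4 1600.0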
 
Example 25 
Determine the minimum sampling rate necessary to sample and perfectly reconstruct the signal
x(t) = sin(6280t) / (6280t).
 
Solution: 
 
x(t) = sin(6280t)/(6280t) is of the form sin(2πWt)/(2πWt), where
2πW = 6280 = 2π(1000) rad/s
Hence W = f = 1000 Hz.
So,
X(f) = 1/(2W) for |f| ≤ 1000 Hz, and 0 elsewhere
Thus fm = 1000 Hz.
Hence, minimum sampling rate fs = 2fm = 2 × 1000 = 2000 samples/s
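A minimal Python sketch of the same calculation, assuming 6280 rad/s ≈ 2π × 1000, is given below for verification.

import math

omega = 6280.0                  # rad/s, from sin(6280t)/(6280t)
f_m = omega / (2 * math.pi)     # highest frequency in x(t), approximately 1000 Hz
f_s_min = 2 * f_m               # Nyquist rate, approximately 2000 samples/s

print(f"f_m = {f_m:.1f} Hz, minimum f_s = {f_s_min:.1f} samples/s")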
 
 
Example 26 
A waveform x(t) = 10cos(1000t + π/3) + 20cos(2000t + π/6) is to be uniformly sampled for digital transmission.
(a) What is the maximum allowable time interval between sample values that will ensure perfect signal reproduction?
(b) If we want to reproduce 1 hour of this waveform, how many sample values need to be 
stored? 
 
Solution: 
(a) Input waveform is
x(t) = 10cos(1000t + π/3) + 20cos(2000t + π/6)
Hence maximum angular frequency,
ωm = 2πfm = 2000 rad/s
So, fm = 2000/(2π) = 318.3 Hz.
Sampling frequency,
fs = 2fm = 2 × 318.3 = 636.6 samples/sec.
Sampling period,
Ts = 1/fs = 1/636.6
i.e. Ts = 0.00157 sec.
Hence, the maximum allowable time interval between sample values
= 0.00157 s = 1.57 ms.

(b) 1 hour = 60 × 60 = 3600 sec.
So, total number of sample values per hour
= 3600 × 636.6
= 2.29 × 10^6 samples.
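The arithmetic above can be verified with the following Python sketch (an illustrative addition, not part of the original solution).

import math

omega_max = 2000.0                  # rad/s, from the 20cos(2000t + pi/6) term
f_m = omega_max / (2 * math.pi)     # about 318.3 Hz
f_s = 2 * f_m                       # Nyquist rate, about 636.6 samples/s
T_s = 1 / f_s                       # maximum allowable sampling interval, about 1.57 ms
samples_per_hour = 3600 * f_s       # about 2.29e6 samples

print(f"T_s = {T_s * 1e3:.2f} ms, samples per hour = {samples_per_hour:.3e}")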
 
 
 
 
 
Example 27 
A signal in the frequency range 300 to 3300 Hz is limited to a peak-to-peak swing of 10 V. It is sampled at 8000 samples/sec and the samples are quantized to 64 evenly spaced levels. Calculate and compare the bandwidth and SQR if the quantized samples are transmitted as binary pulses.
 
Solution: 
Sampling frequency,
fs = 8000 samples/s
L = Number of levels = 64
But L = 2^n, where n = Number of bits/sample
So, 64 = 2^6, thus n = 6.
So, Transmission rate R = 8000 × 6 = 48,000 bits/s
Bandwidth W = 1/Tb = R = 48,000 Hz.
We know,
SQR = 3L^2
So, SQR = 3(64)^2 = 12,288 ≈ 41 dB.
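The following Python sketch (added for verification only) reproduces the numbers above, taking the bandwidth as 1/Tb = R as in the solution.

import math

f_s = 8000                      # samples per second
L = 64                          # quantization levels
n = int(math.log2(L))           # bits per sample = 6

R = f_s * n                     # transmission rate = 48,000 bits/s
W = R                           # bandwidth taken as 1/Tb = R, as in the solution above
SQR = 3 * L ** 2                # = 12,288
SQR_dB = 10 * math.log10(SQR)   # about 40.9 dB, i.e. roughly 41 dB

print(R, W, SQR, round(SQR_dB, 1))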
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Chapter 4: Baseband Transmission 
 
Schwartz’s Inequality 
 
Proof: 
Let x(t) and y(t) be denoted by the real-valued functions a(t) and b(t) respectively, such that
a(t) = x(t) and b(t) = y(t) ... (8)
We may define a(t) and b(t) in terms of a pair of orthonormal functions φ1(t) and φ2(t).
So,
a(t) = a1 φ1(t) + a2 φ2(t) ... (9)
b(t) = b1 φ1(t) + b2 φ2(t) ... (10)
where
ai = ∫ a(t) φi(t) dt for i = 1, 2 ... (11)
bi = ∫ b(t) φi(t) dt for i = 1, 2 ... (12)
φ1(t) and φ2(t) are related as
∫ φi(t) φj(t) dt = 1 for i = j, and 0 for i ≠ j ... (13)
 
 
We may represent the two time functions a(t) and b(t) by the vectors a and b. Thus
a = (a1, a2)
b = (b1, b2) ... (14)
The cosine of the angle between the two vectors a and b is given by
cos(a, b) = (a · b) / (|a| |b|) ... (15)
where (a · b) is the inner product of the vectors a and b, and |a| and |b| are their absolute values
or norms.
We know that the cosine of an angle has a magnitude less than or equal to unity. Hence
|(a · b)| ≤ |a| |b| ... (16)
The equality holds if and only if b = Ka, where K is a constant.
 
 
 
 
 
 
Using equations (9), (10), (11), (12) and (13), we can write
(a · b) = ∫ a(t) b(t) dt ... (17)
|a| = [∫ a²(t) dt]^(1/2) ... (18)
|b| = [∫ b²(t) dt]^(1/2) ... (19)
Substituting equations (17), (18) and (19) in equation (16), we obtain
[∫ a(t) b(t) dt]² ≤ ∫ a²(t) dt ∫ b²(t) dt ... (20)
Equation (20) is Schwartz's inequality.
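As an illustrative numerical check of equation (20), the Python sketch below evaluates both sides of the inequality for two arbitrarily chosen finite-energy signals a(t) and b(t); the particular signals are assumptions made only for this example.

import numpy as np

t = np.linspace(0, 1, 10_000)
dt = t[1] - t[0]
a = np.exp(-5 * t) * np.sin(20 * t)     # example choice for a(t)
b = np.cos(7 * t)                       # example choice for b(t)

lhs = (np.sum(a * b) * dt) ** 2                      # [integral of a(t)b(t) dt]^2
rhs = (np.sum(a ** 2) * dt) * (np.sum(b ** 2) * dt)  # integral of a^2 dt times integral of b^2 dt

print(lhs <= rhs, lhs, rhs)   # True; equality would need b(t) = K a(t)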
 
 
Equivalence of Correlation and Matched Filter Receivers 
 
Let us consider a linear time-invariant (LTI) filter with impulse response hj(t). If x(t) is the input signal to the filter and y(t) is the output signal of the filter, then
y(t) = ∫ x(τ) hj(t - τ) dτ
From the definition of a matched filter, we know that the impulse response hj(t) of an LTI filter matched to an input signal φj(t) is a time-reversed and delayed version of the input φj(t). Thus
hj(t) = φj(T - t)
The resulting filter output is
y(t) = ∫ x(τ) φj(T - t + τ) dτ
Sampling this output at time t = T, we obtain
y(T) = ∫ x(τ) φj(τ) dτ
By definition, φj(t) is zero outside the interval 0 ≤ t ≤ T.
So, y(T) is actually the j-th correlator output xj produced by the received signal x(t).
Thus
y(T) = ∫_0^T x(τ) φj(τ) dτ
The detector part of the optimum receiver may also be implemented using a bank of matched filters. 
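The equivalence derived above can be illustrated numerically. The Python sketch below uses an arbitrarily chosen reference signal φ(t) (an assumption for illustration only) and shows that the matched-filter output sampled at t = T equals the correlator output.

import numpy as np

fs = 10_000                       # samples per second for the discrete approximation
T = 1e-2                          # symbol duration in seconds
t = np.arange(0, T, 1 / fs)

phi = np.sin(2 * np.pi * 200 * t)            # example reference signal phi(t)
phi /= np.sqrt(np.sum(phi ** 2) / fs)        # normalize phi(t) to unit energy
rng = np.random.default_rng(0)
x = 0.7 * phi + rng.normal(0, 1, t.size)     # received signal: scaled phi(t) plus noise

correlator_out = np.sum(x * phi) / fs              # approximates the integral of x(t)phi(t) dt over [0, T]
h = phi[::-1]                                      # matched filter: h(t) = phi(T - t)
matched_out = np.convolve(x, h)[t.size - 1] / fs   # filter output sampled at t = T

print(np.isclose(correlator_out, matched_out))     # True: the two outputs coincide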
 
 
 
 
Question 6 
State the differences between Matched Filters and Conventional Filters. 
 
Answer: 
Unwanted spectral components of a received signal are filtered out by a conventional filter. It maintains some measure of fidelity for signals in the passband. Conventional filters provide approximately uniform gain and a linear phase-frequency characteristic over the passband. They also provide a specified minimum attenuation over the stop band.
Matched filters, however, are designed to maximize the SNR of a known signal in the presence of 
additive white Gaussian noise (AWGN). Matched filters are applied to known signals with random 
parameters, while conventional filters are applied to random signals defined only by their bandwidth. 
The matched filter is like a template that is matched to the known shape of the signal being processed. 
A matched filter largely modifies the temporal structure of the signal. It gathers the signal energy 
matched to its template and at the end of each symbol time presents the result as a peak amplitude. 
However, a conventional filter attempts to preserve the temporal or spectral structure of the signal of 
interest. 
A conventional filter in a communication receiver isolates and extracts a high-fidelity estimate of the 
signal for presentation to the matched filter. 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Chapter 7: Digital Modulation Techniques 
 
 
π/4-shifted QPSK

π/4-shifted QPSK is a variant of quadriphase-shift keying (QPSK) modulation. Two commonly used signal constellations for QPSK are shown below.
 
 
 
 
 
 
In π/4-shifted QPSK, the carrier phase used for the transmission of successive dibits is alternately picked from one of two QPSK constellations. Thus, in π/4-shifted QPSK, there are eight possible phase states, as shown in the figure below. A π/4-shifted QPSK signal may reside in any one of the eight possible states. Envelope variations of π/4-shifted QPSK signals due to filtering are significantly reduced compared to those in QPSK. π/4-shifted QPSK can be detected non-coherently, which considerably simplifies the receiver design.
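A minimal Python sketch of this idea is given below; the dibit-to-phase mapping and constellation ordering are illustrative assumptions, chosen only to show how alternating between two offset QPSK constellations yields eight possible phase states.

import numpy as np

qpsk_a = np.array([1, 3, 5, 7]) * np.pi / 4      # first QPSK constellation (odd multiples of pi/4)
qpsk_b = np.array([0, 2, 4, 6]) * np.pi / 4      # second constellation, offset by pi/4

def pi4_qpsk_phases(dibits):
    """Return the carrier phase for each dibit, alternating between the two constellations."""
    phases = []
    for k, d in enumerate(dibits):               # each dibit d is an integer 0..3
        bank = qpsk_a if k % 2 == 0 else qpsk_b
        phases.append(bank[d])
    return np.array(phases)

print(np.degrees(pi4_qpsk_phases([0, 1, 2, 3, 0])))
# Over a long sequence the phase takes one of eight values: 0, 45, 90, ..., 315 degrees.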
 
 
 
 
 
 
 
 
 
 
 
 
[Figures: the two QPSK signal constellations and the eight phase states of π/4-shifted QPSK, plotted on the φ1-φ2 plane.]
 
 
 
Signal-space Diagram for MSK system 
 
The signal constellation for an MSK signal is two-dimensional. It has four possible message points. The signal-space diagram for the MSK system is shown in the figure below.
 
 
 
 
 
 
 
 
 
 
 
The co-ordinates of the message points are:
m1 = (+√Eb, +√Eb), m2 = (-√Eb, +√Eb),
m3 = (-√Eb, -√Eb) and m4 = (+√Eb, -√Eb)
In MSK, unlike QPSK, one of two message points is used to represent the transmitted symbol at any one time, depending on the value of θ(0).
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
[Figure: signal-space diagram for MSK, with axes φ1 and φ2 marked at ±√Eb. The four message points m1, m2, m3 and m4 are each labelled with a phase pair (θ(0), θ(Tb)), where θ(0) = 0 or π and θ(Tb) = +π/2 or -π/2.]
 
 
Question 16 
Calculate the probability of error of MSK. 
 
Answer: 
For an AWGN channel, the received signal is given by
x(t) = s(t) + w(t)
where s(t) is the transmitted MSK signal and w(t) is the sample function of a white Gaussian noise process of zero mean and power spectral density N0/2.
For the optimum detection of θ(0), we first find the projection of the received signal x(t) onto the reference signal φ1(t) over the interval -Tb ≤ t ≤ Tb. This is given by
x1 = ∫_{-Tb}^{Tb} x(t) φ1(t) dt = s1 + w1, for -Tb ≤ t ≤ Tb.
If x1 > 0, the receiver chooses the estimate θ̂(0) = 0. However, if x1 < 0, it chooses the estimate θ̂(0) = π.
Similarly, the projection of the received signal x(t) onto the second reference signal φ2(t) over the interval 0 ≤ t ≤ 2Tb is given by
x2 = ∫_0^{2Tb} x(t) φ2(t) dt = s2 + w2, for 0 ≤ t ≤ 2Tb.
If x2 > 0, the receiver chooses the estimate θ̂(Tb) = -π/2. If x2 < 0, it chooses the estimate θ̂(Tb) = +π/2.
If we have the estimates θ̂(0) = 0 and θ̂(Tb) = -π/2, or alternatively, if θ̂(0) = π and θ̂(Tb) = +π/2, the receiver makes a decision in favour of symbol 0.
If we have the estimates θ̂(0) = π and θ̂(Tb) = -π/2, or alternatively, if θ̂(0) = 0 and θ̂(Tb) = +π/2, the receiver makes a decision in favour of symbol 1.
Now the MSK and QPSK signals have similar signal-space diagrams. Thus, for an AWGN channel, they have the same formula for their average probability of symbol error. The average probability of symbol error for MSK is therefore given by
Ps = erfc(√(Eb/N0)) - (1/4) erfc²(√(Eb/N0))
If Eb/N0 >> 1, we may ignore the second term on the RHS.
So, Ps ≈ erfc(√(Eb/N0))
Hence the bit error rate (probability of bit error) for MSK is given by
Pe = (1/2) erfc(√(Eb/N0))
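For reference, the short Python sketch below evaluates this bit error rate formula for a few Eb/N0 values; it assumes SciPy's erfc and is only a numerical illustration of the result derived above.

import numpy as np
from scipy.special import erfc

ebno_db = np.array([0, 2, 4, 6, 8, 10])
ebno = 10 ** (ebno_db / 10)
pe = 0.5 * erfc(np.sqrt(ebno))          # Pe = (1/2) erfc(sqrt(Eb/N0))

for snr_db, p in zip(ebno_db, pe):
    print(f"Eb/N0 = {snr_db:2d} dB -> Pe = {p:.2e}")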
 
 
Question 17 
Draw the sequences and waveforms involved in the generation of an MSK signal for the binary 
sequence 1101000. 
 
Answer:
[Figure: generation of the MSK signal for the input binary sequence 1 1 0 1 0 0 0. The diagram shows the input sequence, the phase values θ(kTb), the polarity of s1 together with the waveform s1φ1(t), the polarity of s2 together with the waveform s2φ2(t), and the resulting MSK signal s(t), drawn over 0 to 7Tb with markers at 2Tb, 4Tb and 6Tb.]
 
Question 18 
State the advantages and disadvantages of Gaussian MSK (GMSK) and its principal application. 
 
Answer: 
If we pass an NRZ binary data stream through a baseband pulse-shaping filter whose impulse response is defined by a Gaussian function, the resulting form of MSK is referred to as Gaussian-filtered MSK, or simply GMSK. In a GMSK system, the design parameter is the time-bandwidth product WTb. It is found that when WTb is less than unity, increasingly more of the transmit power is concentrated inside the passband of the GMSK signal. This is an advantage of GMSK.
 
 
The disadvantage of GMSK is that it generates intersymbol interference which increases with 
decreasing WTb. This disadvantage is known as performance degradation of GMSK. Thus the choice 
of WTb offers a trade-off between spectral compactness and performance loss. 
The principal application of GMSK is in GSM wireless communication. For GSM mobile 
communication WTb is standardized at 0.3. 
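A small Python sketch of the Gaussian pulse-shaping step is given below. It uses the commonly quoted Gaussian filter impulse response h(t) = √(2π/ln 2)·W·exp(-2π²W²t²/ln 2) (an assumption, since the exact filter definition is not given in the text) with the GSM value WTb = 0.3.

import numpy as np

Tb = 1.0                        # bit duration (normalized)
WTb = 0.3                       # time-bandwidth product, the GSM value
W = WTb / Tb                    # 3 dB bandwidth of the Gaussian filter
t = np.linspace(-3 * Tb, 3 * Tb, 601)

# Gaussian filter impulse response (assumed standard form for GMSK pulse shaping)
h = np.sqrt(2 * np.pi / np.log(2)) * W * np.exp(-2 * np.pi ** 2 * W ** 2 * t ** 2 / np.log(2))
h /= np.sum(h) * (t[1] - t[0])  # normalize to unit area

# Convolving the NRZ data stream with h before the MSK modulator yields GMSK;
# a smaller WTb gives a more compact spectrum but more intersymbol interference.
print(f"pulse peak = {h.max():.3f}, pulse spreads over a few Tb for WTb = {WTb}")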
 
Question 19 
Calculate the probability of error for a non-coherent receiver. 
 
Answer: 
Let the upper path of the non-coherent receiver be called the in-phase path and the lower path be called the quadrature path. Now let the signal s1(t) be transmitted for the interval 0 ≤ t ≤ T. Refer to the figure below for a generalized binary receiver for non-coherent orthogonal modulation.
[Figure: the received signal x(t) drives a filter matched to φ1(t) and a filter matched to φ2(t); each is followed by an envelope detector sampled at t = T, producing l1 and l2, and a comparator chooses s1(t) if l1 > l2 and s2(t) otherwise.]
 
 
 
 
 
 
 
 
An error occurs if the receiver noise w(t) is such that the output l2 of the lower path is greater than the output l1 of the upper path. Then the receiver makes a decision in favour of s2(t) rather than s1(t).
Let x2I and x2Q denote the in-phase and quadrature components of the matched filter output in the lower path of the above figure. The quadrature receiver equivalent to either one of the two matched filters is shown below.
[Figure: two correlator branches ∫_0^t x(t) φi(t) dt produce the in-phase and quadrature components xIi and xQi; square-law detectors form xIi² and xQi², which are summed to give li².]
Then, for i = 2,
l2² = x2I² + x2Q² ... (21)
 
 
 
 
 
The random variables X2I and X2Q are both Gaussian distributed with zero mean and variance N0/2. Hence
fX2I(x2I) = (1/√(πN0)) exp(-x2I²/N0) ... (22)
fX2Q(x2Q) = (1/√(πN0)) exp(-x2Q²/N0) ... (23)
Now we know that the envelope of a Gaussian process is Rayleigh distributed. The random variable L2, with sample value l2, has the probability density function
fL2(l2) = (2l2/N0) exp(-l2²/N0) for l2 ≥ 0, and 0 elsewhere ... (24)
The variation of fL2(l2) with l2 is shown in the figure below.
[Figure: the Rayleigh density fL2(l2) versus l2; the shaded area to the right of l1 represents the conditional probability of error.]
The conditional probability that l2 > l1, given the sample value l1, is shown by the shaded area in the above figure. Hence
P(l2 > l1 | l1) = ∫_{l1}^∞ fL2(l2) dl2 ... (25)
Substituting equation (24) in equation (25) and integrating, we obtain
P(l2 > l1 | l1) = exp(-l1²/N0) ... (26)
 
Now let us consider the output amplitude l1 due to the upper path. Here l1 is due to signal plus noise. Let
x1I = in-phase component at the output of the matched filter
x1Q = quadrature component at the output of the matched filter
Then, for i = 1,
l1² = x1I² + x1Q²
The random variable X1I, represented by the sample value x1I, is Gaussian distributed with mean √E and variance N0/2, where E is the signal energy per symbol. The random variable X1Q is Gaussian
distributed with zero mean and variance N0/2. Hence the probability density functions of these two independent random variables are given by
fX1I(x1I) = (1/√(πN0)) exp(-(x1I - √E)²/N0) ... (27)
fX1Q(x1Q) = (1/√(πN0)) exp(-x1Q²/N0) ... (28)
 
Given x1I and x1Q, an error occurs when the lower path's output amplitude l2, due to noise alone, exceeds l1, due to signal plus noise. Thus
l1² = x1I² + x1Q² ... (29)
Substituting equation (29) in equation (26), we get
P(error | x1I, x1Q) = exp(-(x1I² + x1Q²)/N0) ... (30)
 
The error density is given by
P(error, x1I, x1Q) = P(error | x1I, x1Q) fX1I(x1I) fX1Q(x1Q)
= (1/πN0) exp{-(1/N0)[x1I² + x1Q² + (x1I - √E)² + x1Q²]} ... (31)
We note,
x1I² + x1Q² + (x1I - √E)² + x1Q² = 2(x1I - √E/2)² + 2x1Q² + E/2 ... (32)
 
Hence the average probability of error is
Pe = ∫∫ P(error | x1I, x1Q) fX1I(x1I) fX1Q(x1Q) dx1I dx1Q
   = (1/πN0) exp(-E/2N0) ∫ exp[-2(x1I - √E/2)²/N0] dx1I ∫ exp(-2x1Q²/N0) dx1Q ... (33)
Now,
∫ exp[-2(x1I - √E/2)²/N0] dx1I = √(πN0/2) ... (34)
and
∫ exp(-2x1Q²/N0) dx1Q = √(πN0/2) ... (35)
Using equations (34) and (35), we obtain
Pe = (1/2) exp(-E/2N0) ... (36)
Equation (36) gives the probability of error for a non-coherent orthogonal receiver.
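The result in equation (36) can be checked by simulation. The Python sketch below (an illustrative addition) draws the in-phase and quadrature noise components with variance N0/2, forms l1² and l2² as in the derivation, and compares the simulated error rate with (1/2)exp(-E/2N0).

import numpy as np

rng = np.random.default_rng(1)
E, N0 = 4.0, 1.0
trials = 1_000_000
sigma = np.sqrt(N0 / 2)                            # each noise component has variance N0/2

x1I = np.sqrt(E) + rng.normal(0, sigma, trials)    # upper path, in-phase (signal plus noise)
x1Q = rng.normal(0, sigma, trials)                 # upper path, quadrature (noise only)
x2I = rng.normal(0, sigma, trials)                 # lower path, in-phase (noise only)
x2Q = rng.normal(0, sigma, trials)                 # lower path, quadrature (noise only)

l1_sq = x1I ** 2 + x1Q ** 2
l2_sq = x2I ** 2 + x2Q ** 2
pe_sim = np.mean(l2_sq > l1_sq)                    # error whenever l2 exceeds l1
pe_theory = 0.5 * np.exp(-E / (2 * N0))            # equation (36)

print(f"simulated Pe = {pe_sim:.4f}, theoretical Pe = {pe_theory:.4f}")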
 
 
 
 
 
 
 
 
Chapter 8: Spread Spectrum Modulation 
 
 
Need of synchronization in spread spectrum modulation and its 
implementation 
 
For proper operation of spread spectrum communication, it is necessary that the locally generated PN sequences in the receiver are synchronized to the PN sequence used in the transmitter. Synchronization is implemented in two parts, namely acquisition and tracking. Acquisition is known as coarse synchronization and tracking is termed fine synchronization. Acquisition means the two PN codes are aligned to within a fraction of a chip in as short a time as possible. Tracking takes place once the incoming PN code has been acquired. Acquisition consists of two steps. First, the received signal is multiplied by a locally generated PN code to produce a measure of correlation between it and the PN code used in the transmitter. Then an appropriate decision rule and search strategy are employed to process the measure of correlation so obtained. This determines whether the two codes are in synchronism. It also decides what to do if they are not in synchronism. Tracking is accomplished by using phase-lock loop techniques. These are similar to those used for the local generation of coherent carrier references.
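An illustrative Python sketch of the acquisition (coarse synchronization) step is given below; the PN sequence, noise level and search over cyclic shifts are assumptions made only for this example.

import numpy as np

rng = np.random.default_rng(2)
pn = rng.choice([-1, 1], size=127)                 # example bipolar PN code of length 127
true_offset = 40
rx = np.roll(pn, true_offset) + rng.normal(0, 0.5, pn.size)   # delayed code plus noise

# Coarse search: correlate the received sequence with every cyclic shift of the local code
corr = np.array([np.dot(rx, np.roll(pn, k)) for k in range(pn.size)])
estimated_offset = int(np.argmax(corr))            # decision rule: pick the largest correlation

print(estimated_offset == true_offset, estimated_offset)   # True 40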
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Chapter 10: Coding Theory 
 
Channel Coding Theorem 
 
The channel coding theorem has two parts.
First Part 
Let a discrete memoryless source with an alphabet S have entropy H(X) and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then, if
H(X)/Ts ≤ C/Tc
there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error.
The channel coding theorem specifies the channel capacity C as a fundamental limit on the rate at 
which the transmission of error-free messages can take place over a discrete memoryless channel. The 
theorem asserts the existence of good codes. But it does not show us how to construct a good code. 
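A trivial Python sketch of the theorem's condition is given below; the numbers are made-up examples and the function name is hypothetical.

def reliable_transmission_possible(H, Ts, C, Tc):
    """H in bits per source symbol, Ts source symbol period, C in bits per channel use, Tc channel use period."""
    return H / Ts <= C / Tc

print(reliable_transmission_possible(H=0.8, Ts=1e-3, C=0.5, Tc=5e-4))   # True: 800 <= 1000
print(reliable_transmission_possible(H=0.8, Ts=1e-3, C=0.3, Tc=5e-4))   # False: 800 > 600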
 
Example 13 
Consider an (n, 1) repetition code where n = 5. For this code, 
(a) Construct the generator matrix G 
(b) Find all code words using G 
(c) Find the parity check matrix H for this code 
(d) Show that GHT = 0 
 
Solution: 
(a) The two code words in the code are [1 1 1 1 1] and [0 0 0 0 0]
P^T = [1 1 1 1]
Generator matrix, G = [1 1 1 1 1]
 
(b) For d1 = 0, c1 = [0][1 1 1 1 1] = [0 0 0 0 0]
For d1 = 1, c1 = [1][1 1 1 1 1] = [1 1 1 1 1]
 
(c) The parity check matrix H is given by
H = [P^T : I4] =
[1 1 0 0 0
 1 0 1 0 0
 1 0 0 1 0
 1 0 0 0 1]
(d) GH^T = [1 1 1 1 1] H^T = [1+1  1+1  1+1  1+1] = [0 0 0 0] = 0 (mod 2)
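As a numerical check of parts (a) to (d), the Python sketch below (an illustrative addition) builds G and H with NumPy and verifies the code words and GH^T = 0 over GF(2).

import numpy as np

G = np.array([[1, 1, 1, 1, 1]])                   # generator matrix of the (5, 1) repetition code
P_T = np.ones((4, 1), dtype=int)                  # P^T as a column of ones
H = np.hstack([P_T, np.eye(4, dtype=int)])        # parity check matrix H = [P^T : I4]

for d in (np.array([[0]]), np.array([[1]])):
    print(d.ravel(), "->", ((d @ G) % 2).ravel())  # code words [0 0 0 0 0] and [1 1 1 1 1]

print((G @ H.T) % 2)                               # [[0 0 0 0]], confirming G H^T = 0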
