ak_slides9
Digital Communications I:
Modulation and Coding Course
Term 3 - 2008
Catharina Logothetis
Lecture 9 – Textbook: B. Sklar, “Dig. Comm”
From http://www.signal.uu.se/Courses/CourseDirs/ModDemKod/2009X/main.html
Adapted by Aldebaro Klautau, UFPA, Brazil 2017
Lecture 9
Last time we talked about:
	Evaluating the average probability of symbol error for different bandpass modulation schemes
	Comparing different modulation schemes based on their error performance.
Today, we are going to talk about:
	Channel coding
	Linear block codes
	The error detection and correction capability
	Encoding and decoding
	Hamming codes
	Cyclic codes
Block diagram of a DCS
[Diagram: Source → Format → Source encode → Channel encode → Pulse modulate → Bandpass modulate → Channel → Demodulate & sample → Detect → Channel decode → Source decode → Format. Pulse and bandpass modulation form the “digital modulation” block; demodulation, sampling and detection form the “digital demodulation” block.]
What is channel coding?
	Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...)
	Waveform coding: transforming waveforms into better waveforms.
	Structured sequences: transforming data sequences into better sequences that carry structured redundancy.
	“Better” in the sense of making the decision process less subject to errors.
Error control techniques
	Automatic Repeat reQuest (ARQ)‏
	Half or full-duplex connection, error detection
	The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (Negative Acknowledgement, NACK) or not (Acknowledgement, ACK).
	The transmitter retransmits the previously sent packet if it receives NACK.
	Forward Error Correction (FEC)‏
	Simplex connection, error correction codes
	The receiver tries to correct some errors 
	Hybrid ARQ (ARQ+FEC)‏
	Full-duplex, error detection and correction codes
Why use error correction coding?
	Error performance vs. bandwidth
	Power vs. bandwidth
	Data rate vs. bandwidth
	Capacity vs. bandwidth
Coding gain:
	For a given bit-error probability, the coding gain is the reduction in Eb/N0 that can be realized through the use of the code:
	G [dB] = (Eb/N0)uncoded [dB] − (Eb/N0)coded [dB]
[Figure: PB versus Eb/N0 curves for uncoded and coded transmission; the coding gain is read, e.g., from point A to C or from D to E; operating at D/E rather than D/F means less interference, e.g., in CDMA.]
Channel models
	Discrete memoryless channels
	Discrete input, discrete output
	Binary Symmetric channels
	Binary input, binary output
	Gaussian channels
	Discrete input, continuous output
Digital communication models adopt various forms for the demodulator output: for example, the demodulator can deliver either bits or real values to the decoder.
Demodulator’s hard or soft decision
[Diagram: same DCS block diagram as before; the arrow from digital demodulation to the channel decoder is labeled — Hard decision: bits; Soft decision: real values.]
Discrete memoryless channels (DMC)
	Discrete input, discrete output
	Memoryless: the conditional probability of an output sequence factors over symbols.
	Input U = u1, u2, u3 and output Z = z1, z2, z3:
	P(Z|U) = P(z1|u1) P(z2|u2) P(z3|u3)
Binary Symmetric Channels (BSC)
	Special case of the DMC channel model:
	discrete and memoryless, and also:
	binary (alphabet) and symmetric (crossover probabilities)
	completely characterized by the bit error probability p
	Example: input U = 0110 and output Z = 0011
	P(Z|U) = P(0|0) P(0|1) P(1|1) P(1|0)
	P(Z|U) = (1−p) p (1−p) p = (1−p)² p²
[Figure: BSC transition diagram — a transmitted bit (0 or 1) is received correctly with probability 1−p and flipped with probability p.]
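The BSC factorization can be checked numerically. Below is a minimal sketch in Python (the lecture's own snippets use Matlab); the function name `bsc_likelihood` is ours, not from the textbook:

```python
# Likelihood P(Z|U) of a received sequence over a BSC with crossover probability p:
# each agreeing bit contributes a factor (1-p), each flipped bit a factor p.
def bsc_likelihood(u, z, p):
    prob = 1.0
    for ub, zb in zip(u, z):
        prob *= p if ub != zb else (1 - p)
    return prob

# U = 0110, Z = 0011: two bits agree, two differ, so P(Z|U) = (1-p)^2 p^2
print(round(bsc_likelihood([0, 1, 1, 0], [0, 0, 1, 1], 0.1), 4))  # 0.0081
```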
Gaussian channel
	Given input U, the output is Z = U + N, where N is AWGN.
	Discrete input, real-valued output.
	The likelihood of each sample is the Gaussian density p(z_i|u_i) = (1/√(2πσ²)) exp(−(z_i − u_i)²/(2σ²)).
	Conditional probability of the sequence: P(Z|U) = ∏_i p(z_i|u_i)
	Example: input U = (3, −5, 1, 3) and output Z = (2.9, −5.2, 1.4, 2.1), assuming the noise has variance 0.8 and zero mean:
	P(Z|U) = 0.4433 × 0.4350 × 0.4036 × 0.2688 ≈ 0.0209
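The slide's Gaussian-channel numbers can be reproduced with the per-sample Gaussian density. A Python sketch (function name `gaussian_likelihood` is ours):

```python
# Likelihood of the slide's Gaussian-channel example: Z = U + N, N ~ N(0, 0.8).
# Each factor is the Gaussian density (1/sqrt(2*pi*var)) * exp(-(z-u)^2 / (2*var)).
import math

def gaussian_likelihood(u, z, var):
    prob = 1.0
    for ui, zi in zip(u, z):
        prob *= math.exp(-(zi - ui) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return prob

U = [3, -5, 1, 3]
Z = [2.9, -5.2, 1.4, 2.1]
print(round(gaussian_likelihood(U, Z, 0.8), 4))  # 0.0209
```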
Hard-decision decoding
	In hard-decision (or simply “hard”) decoding, the demodulator delivers bits to the decoder.
Soft-decision (or “soft”) decoding
	The demodulator delivers real-valued “reliability” values.
	For simplicity, voltage values are used in the examples, but in practice log-likelihood ratios (LLRs) are adopted.
On the use of soft or hard decisions
	Decoders using soft decisions typically have better performance but are more complex than hard-decision decoders.
	Block codes are usually implemented with hard-decision decoding.
	For convolutional codes, both hard- and soft-decision decoding are equally popular.
A Simplified Taxonomy of FEC
Linear block codes
	The information bit stream is chopped into blocks of k bits. 
	Each block is encoded to a larger block of n bits.
	The coded bits are modulated and sent over the channel.
	The reverse procedure is done at the receiver.
[Diagram: Data block (k bits) → Channel encoder → Codeword (n bits)]
Linear block codes – cont’d
	The Hamming weight of the vector U, denoted by w(U), is the number of non-zero elements in U.
	The Hamming distance between two vectors U and V, denoted d(U,V), is the number of elements in which they differ: d(U,V) = w(U ⊕ V).
	The minimum distance of a block code is d_min = min_{i≠j} d(U_i, U_j) = min_i w(U_i).
	Reason: the codewords form a subspace (closure), so the sum of any pair of codewords x, y is another codeword z = x + y, and the Hamming distance between x and y equals the Hamming weight of z.
Linear block codes – cont’d
	Error detection capability of a code is given by e = d_min − 1.
	Error-correcting capability t of a code is defined as the maximum number of guaranteed correctable errors per codeword, that is, t = ⌊(d_min − 1)/2⌋.
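These definitions can be checked on a concrete code. The sketch below (Python, while the lecture's snippets are Matlab) uses the (6,3) generator matrix that appears later in this lecture; for a linear code, d_min equals the minimum weight among the non-zero codewords:

```python
# d_min and the detection/correction capabilities of the (6,3) code with G = [P I3].
from itertools import product

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m, G):
    """Codeword U = mG over GF(2)."""
    u = [0] * len(G[0])
    for mi, row in zip(m, G):
        if mi:
            u = [(a + b) % 2 for a, b in zip(u, row)]
    return tuple(u)

codewords = [encode(m, G) for m in product([0, 1], repeat=3)]
d_min = min(sum(c) for c in codewords if any(c))  # minimum non-zero weight
e = d_min - 1         # guaranteed error detection
t = (d_min - 1) // 2  # guaranteed error correction
print(d_min, e, t)  # 3 2 1
```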
Recall “Combination”
Good review: http://www.mathsisfun.com/combinatorics/combinations-permutations.html
Matlab: nchoosek(['a','b','c','d'],2)
output:
	ab
	ac
	ad
	bc
	bd
	cd
How many n-bit vectors have exactly r errors (bits equal to 1)? Answer: C(n,r) = n!/(r!(n−r)!)
Linear block codes – cont’d
	For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
	P_M ≤ Σ_{j=t+1}^{n} C(n,j) p^j (1−p)^{n−j}
	where p is the transition probability (bit error probability) over the channel.
	The decoded bit error probability is
	P_B ≈ (1/n) Σ_{j=t+1}^{n} j C(n,j) p^j (1−p)^{n−j}
	Note that for coded systems the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):
	p ≈ (2 / log2 M) Q( √(2 E_c log2 M / N_0) sin(π/M) )
	where E_c is the energy per coded bit, given by E_c = R_c E_b.
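The block-error bound is straightforward to evaluate numerically. A Python sketch (the n, t and p values below are assumed for illustration, matching the (6,3) code with t = 1):

```python
# Upper bound on the probability of erroneous decoding, P_M, for a
# t-error-correcting (n,k) block code over a channel with bit error probability p.
from math import comb

def block_error_bound(n, t, p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

# Example values (assumed for illustration): (6,3) code, t = 1, p = 0.01
print(block_error_bound(6, 1, 0.01))  # about 1.46e-3
```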
Uncoded vs coded transmission
Linear block codes
	Let us first review some basic definitions that are useful in understanding Linear block codes.
Some definitions
	Binary field: 
	The set {0,1}, under modulo 2 binary addition and multiplication forms a field. 
	Binary field is also called Galois field, GF(2)
	Note in GF(2) that for any vector x, the sum x + x is the all-zero vector.
	Addition: 0⊕0=0, 0⊕1=1, 1⊕0=1, 1⊕1=0
	Multiplication: 0·0=0, 0·1=0, 1·0=0, 1·1=1
Some definitions…
	Fields: 
	Let F be a set of objects on which two operations ‘+’ and ‘.’ are defined. 
	F is said to be a field if and only if
	F forms a commutative group under + operation. The additive identity element is labeled “0”.
	F-{0} forms a commutative group under . operation. The multiplicative identity element is labeled “1”.
	The operations “+” and “.” are distributive: a·(b + c) = a·b + a·c
Some definitions…
	Vector space:
	Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over field F if the following properties hold:
	1. Commutative: ∀ u, v ∈ V, u + v = v + u ∈ V
	2. Closure under scalar multiplication: ∀ a ∈ F, ∀ v ∈ V, a·v ∈ V
	3. Distributive: (a + b)·v = a·v + b·v and a·(u + v) = a·u + a·v
	4. Associative: ∀ a, b ∈ F, ∀ v ∈ V, (a·b)·v = a·(b·v)
	5. Identity: ∀ v ∈ V, 1·v = v
Some definitions…
	Example of a vector space
	The set of binary n-tuples, denoted by V_n; e.g., V_4 = {(0000), (0001), (0010), ..., (1111)}, with 16 elements.
	Vector subspace:
	A subset S of the vector space V_n is called a subspace if:
	The all-zero vector is in S.
	The sum of any two vectors in S is also in S.
	Example: {(0000), (0101), (1010), (1111)} is a subspace of V_4.
Some definitions…
	Spanning set:
	A collection of vectors G = {v_1, v_2, ..., v_n} is said to be a spanning set for V, or to span V, if linear combinations of the vectors in G include all vectors in the vector space V.
	Example: {(1000), (0110), (1100), (0011), (1001)} spans V_4.
	Basis:
	A spanning set of V that has minimal cardinality is called a basis for V.
	Cardinality of a set is the number of objects in the set.
	Example: {(1000), (0100), (0010), (0001)} is a basis for V_4.
Linear algebra in GF(2): a warning
	Be careful when working in GF(2) and making analogies to real vector spaces:
	Strictly, an inner product is a mapping with 3 properties: bilinear, symmetric and positive definite.
	Inner product spaces are defined over a field of scalars that is real or complex-valued. You can then “order” elements and define a set of “positive” numbers.
	But there is no useful notion of positive elements in finite fields and, consequently, no notion of “length”. For example, in GF(2) say that 1 is “positive”; then the sum of two positives, 1 + 1, should also be positive, but it is 0 in GF(2).
	Still, the “conventional” inner product is useful in GF(2). It is bilinear and symmetric: <x, y> = Σ_i x_i y_i (mod 2)
See: http://math.stackexchange.com/questions/1346664/how-to-find-orthogonal-vectors-in-gf2
http://math.stackexchange.com/questions/49348/inner-product-spaces-over-finite-fields
http://everything2.com/title/inner+product
http://math.stackexchange.com/questions/185403/do-there-exist-vector-spaces-over-a-finite-field-that-have-a-dot-product
An issue: “inner product” in GF(2)
	The “regular” inner product <x, y> = Σ_i x_i y_i (mod 2) is useful but has an issue in GF(2):
	Nonnegativity fails: an inner product requires <x,x> ≥ 0, with equality iff x = 0.
	Example: <(1,1), (1,1)> = 1 + 1 = 0
	In summary:
	Inner product spaces are defined over a field which is either R or C, not a finite field.
	GF(2) does not lead to an inner product space.
	The orthogonality concept is still used, but the Gram-Schmidt procedure does not apply in GF(2).
	Ex: (1100) and (0011) are “orthogonal” to each other, but also “orthogonal” to themselves!
Orthogonality in GF(2)
	As in inner product spaces, also in GF(2) orthogonality means the inner product is zero.
	But the geometrical interpretation of “orthogonality” as a generalization of “perpendicular” is not valid!
	For example, any vector x with an even number of 1’s has <x,x> = 0, and such a vector is not (geometrically) perpendicular to itself!
	Assume 3-tuples of elements in GF(2); the dimension of the space is 3. Find the subspace orthogonal to (1,1,1). This subspace has dimension 2 and consists of the elements (0,0,0), (1,1,0), (1,0,1), (0,1,1). No pair of these 4 vectors forms an orthogonal basis for the subspace.
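The 3-tuple example can be verified by brute force. A Python sketch; `dot2` is our name for the mod-2 dot product:

```python
# Subspace of GF(2)^3 "orthogonal" to (1,1,1): all v with <v,(1,1,1)> = 0 (mod 2).
from itertools import product

def dot2(x, y):
    return sum(a * b for a, b in zip(x, y)) % 2

ortho = [v for v in product([0, 1], repeat=3) if dot2(v, (1, 1, 1)) == 0]
print(ortho)  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)] -- a dimension-2 subspace
```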
Recollecting
	Vector space: the set of binary n-tuples, V_n
	Vector subspace (closure property): contains the all-zero vector, and the sum of any two of its vectors
	Spanning set: its linear combinations yield every vector in the space
	Basis: a spanning set with the minimum number of vectors
Linear block codes
	Linear block code (n,k)
	A set C ⊂ V_n with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space V_n.
	Members of C are called codewords.
	The all-zero codeword is a codeword.
	Any linear combination of codewords is a codeword.
Linear block codes – cont’d
[Diagram: the encoder maps the message space V_k onto the code C ⊂ V_n; a basis {V_1, ..., V_k} of C defines the mapping.]
Linear block codes –cont’d
	A matrix G is constructed by taking as its rows the vectors of a basis of C, {V_1, V_2, ..., V_k}, where V_i = (v_i1, v_i2, ..., v_in).
Linear block codes – cont’d
	Encoding in an (n,k) block code: U = mG, i.e.,
	(u_1, ..., u_n) = m_1·V_1 + m_2·V_2 + ... + m_k·V_k
	The rows of G are linearly independent.
Linear block codes – cont’d
	Example: Block code (6,3), with the basis {V_1, V_2, V_3} of C as the rows of
	G = [1 1 0 1 0 0; 0 1 1 0 1 0; 1 0 1 0 0 1]
	Each message vector m maps to the codeword U = mG; e.g., m = (1 1 1) gives U = (0 0 0 1 1 1).
Linear block codes – cont’d
	Systematic block code (n,k)
	For a systematic code, the first (or last) k elements in the codeword are information bits:
	G = [P | I_k], so that U = (p_1, ..., p_{n−k}, m_1, ..., m_k), where the p_i are parity bits and the m_i are message bits.
Linear block codes – cont’d
	For any linear (n,k) code we can find an (n−k)×n matrix H such that its rows are orthogonal to the rows of G: GH^T = 0.
	H is called the parity check matrix and its rows are linearly independent.
	For systematic linear block codes: H = [I_{n−k} | P^T]
Systematic (Hamming) code (7,4)
[Slide shows the G = [P I_4] and H = [I_3 P^T] matrices of the systematic (7,4) Hamming code.]
Systematic block code (6,3) in Matlab
n=6; %number of bits in codeword
k=3; %number of information bits
Ik=eye(k); %identity matrix of dimension k x k
P=[1 1 0; 0 1 1; 1 0 1];
G=[P Ik]; 
%% Codeword for specific message
m=[1 1 1];
U=mod(m*G, 2)
%% To find all codewords
messages=de2bi(0:(2^k - 1), k)
U=mod(messages*G, 2)
Linear block codes – cont’d
	Syndrome testing:
	Received vector: r = (r_1, ..., r_n) = U + e, where e = (e_1, ..., e_n) is the error pattern.
	S = rH^T = eH^T
	S is the syndrome of r, corresponding to the error pattern e.
[Diagram: Data source → Format → Channel encoding → Modulation → channel → Demodulation/Detection → Channel decoding → Format → Data sink]
Decoding systematic block code (6,3)
n=6; %number of bits in codeword
k=3; %number of information bits
Ik=eye(k); %identity matrix of dimension k x k
P=[1 1 0; 0 1 1; 1 0 1];
G=[P Ik]; 
%% Decode received codeword
Ink=eye(n-k); %identity matrix (n-k) x (n-k)
H=[Ink transpose(P)];
r=[0 0 1 1 1 0];
S=mod(r*transpose(H),2)
% output: S = [1 0 0]
Linear block codes – cont’d
	Example 6.4 in Sklar’s textbook: syndrome look-up table for the (6,3) code

	Error pattern	Syndrome
	000000		000
	000001		101
	000010		011
	000100		110
	001000		001
	010000		010
	100000		100
	010001		111

	U = (101110) is transmitted and r = (001110) is received.
	The syndrome of r is computed: S = rH^T = (100).
	The error pattern corresponding to this syndrome is ê = (100000).
	The corrected vector is then Û = r + ê = (001110) + (100000) = (101110), from which m̂ is estimated.
Linear block codes – cont’d
	Example: Standard array for the (6,3) code
	[Table: the top row lists the 8 codewords, starting with 000000; the leftmost column lists the coset leaders (000000), (000001), (000010), (000100), (001000), (010000), (100000), (010001); each entry of a row (coset) is its coset leader ⊕ the codeword heading its column.]
Linear block codes – cont’d
	Standard array
	The first row lists the 2^k codewords, starting with the all-zero codeword.
	For row i, i = 2, 3, ..., 2^{n−k}, find a vector in V_n of minimum weight that is not already listed in the array.
	Call this pattern e_i and form row i as the corresponding coset e_i + C.
Linear block codes – cont’d
	Standard array and syndrome table decoding
	1. Calculate S = rH^T.
	2. Find the coset leader ê = e_i corresponding to S.
	3. Calculate Û = r + ê and the corresponding m̂.
	Note that Û = r + ê = (U + e) + ê = U + (e + ê).
	If ê = e, the error is corrected.
	If ê ≠ e, an undetectable decoding error occurs.
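The steps above can be sketched in code for the (6,3) example code; Python rather than the lecture's Matlab, and the eighth coset leader (010001) follows the standard-array construction:

```python
# Syndrome-table decoding for the (6,3) code: H = [I3 | P^T] with P = [110; 011; 101].
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(v):
    return tuple(sum(vi * hi for vi, hi in zip(v, h)) % 2 for h in H)

# Coset leaders: the all-zero pattern, the six single-bit errors, and one
# weight-2 pattern to cover the eighth syndrome.
leaders = [tuple(0 for _ in range(6))]
leaders += [tuple(1 if j == i else 0 for j in range(6)) for i in range(6)]
leaders += [(0, 1, 0, 0, 0, 1)]
table = {syndrome(e): e for e in leaders}   # step: syndrome -> coset leader

r = (0, 0, 1, 1, 1, 0)                      # received word (one bit in error)
e_hat = table[syndrome(r)]                  # estimated error pattern
u_hat = tuple((a + b) % 2 for a, b in zip(r, e_hat))
print(u_hat)  # (1, 0, 1, 1, 1, 0) -- the corrected codeword
```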
Hamming codes
	Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
	Hamming codes are expressed as a function of a single integer m ≥ 2:
	Code length: n = 2^m − 1
	Number of information bits: k = 2^m − m − 1
	Number of parity bits: n − k = m
	Error correction capability: t = 1
	The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.
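The column description of H translates directly into code. A Python sketch for m = 3, giving the (7,4) code; taking column j as the binary expansion of j is one of several valid column orderings (a systematic code would permute the columns):

```python
# Parity-check matrix of a Hamming code: its columns are all non-zero binary
# m-tuples. Here m = 3, so H is 3 x 7 and describes the (7,4) code.
m = 3
n = 2**m - 1
# Column j (j = 1..n) holds the m-bit binary expansion of j, MSB in the top row.
H = [[(j >> (m - 1 - i)) & 1 for j in range(1, n + 1)] for i in range(m)]
for row in H:
    print(row)
# [0, 0, 0, 1, 1, 1, 1]
# [0, 1, 1, 0, 0, 1, 1]
# [1, 0, 1, 0, 1, 0, 1]
```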
Hamming codes
	Example: Systematic Hamming code (7,4)‏
Cyclic block codes
	Cyclic codes are a subclass of linear block codes.
	Encoding and syndrome calculation are easily performed using feedback shift-registers.
	Hence, relatively long block codes can be implemented with a reasonable complexity.
	BCH and Reed-Solomon codes are cyclic codes. 
Cyclic block codes
	A linear (n,k) code is called a cyclic code if all cyclic shifts of a codeword are also codewords.
	Example: U = (u_0, u_1, u_2, ..., u_{n−1}); after i cyclic shifts,
	U^(i) = (u_{n−i}, u_{n−i+1}, ..., u_{n−1}, u_0, u_1, ..., u_{n−i−1})
Cyclic block codes
	The algebraic structure of cyclic codes suggests expressing codewords in polynomial form:
	U(X) = u_0 + u_1 X + u_2 X² + ... + u_{n−1} X^{n−1}  (degree n − 1)
	Relationship between a codeword and its cyclic shifts:
	X U(X) modulo (X^n + 1) = U^(1)(X)
	By extension: X^i U(X) modulo (X^n + 1) = U^(i)(X)
Cyclic block codes
	Basic properties of cyclic codes:
	Let C be a binary (n,k) linear cyclic code.
	Within the set of code polynomials in C there is a unique monic polynomial g(X) = g_0 + g_1 X + ... + g_r X^r of minimal degree r < n, called the generator polynomial.
	Every code polynomial in C can be expressed uniquely as U(X) = m(X) g(X).
	The generator polynomial g(X) is a factor of X^n + 1.
Cyclic block codes
	The orthogonality of G and H in polynomial form is expressed as g(X) h(X) = X^n + 1. This means h(X) is also a factor of X^n + 1.
	Row i, i = 1, ..., k, of the generator matrix is formed by the coefficients of the (i − 1)th cyclic shift of the generator polynomial.
Cyclic block codes
	Systematic encoding algorithm for an (n,k) cyclic code:
	1. Multiply the message polynomial m(X) by X^{n−k}.
	2. Divide the result of step 1 by the generator polynomial g(X). Let p(X) be the remainder.
	3. Add p(X) to X^{n−k} m(X) to form the codeword U(X) = p(X) + X^{n−k} m(X).
Cyclic block codes
	Example: For the systematic (7,4) cyclic code with generator polynomial g(X) = 1 + X + X³:
	Find the codeword for the message m = (1 0 1 1), i.e., m(X) = 1 + X² + X³.
	X^{n−k} m(X) = X³ m(X) = X³ + X⁵ + X⁶; dividing by g(X) leaves the remainder p(X) = 1.
	Codeword: U(X) = 1 + X³ + X⁵ + X⁶, i.e., U = (1 0 0 1 0 1 1).
Cyclic block codes
	Find the generator and parity check matrices, G and H, respectively.
	Since g(X) = 1 + X + X³, i.e., (g_0, g_1, g_2, g_3) = (1 1 0 1), the rows of G are cyclic shifts of (1 1 0 1 0 0 0):
	G = [1 1 0 1 0 0 0; 0 1 1 0 1 0 0; 0 0 1 1 0 1 0; 0 0 0 1 1 0 1]
	This G is not in systematic form. We do the following: row(3) → row(3) + row(1) and row(4) → row(4) + row(1) + row(2), giving G = [P I_4] with P = [1 1 0; 0 1 1; 1 1 1; 1 0 1], and hence H = [I_3 P^T].
Cyclic block codes
	Syndrome decoding for cyclic codes:
	The received word in polynomial form is r(X) = U(X) + e(X), where e(X) is the error pattern.
	The syndrome S(X) is the remainder obtained by dividing the received polynomial by the generator polynomial: r(X) = q(X) g(X) + S(X).
	With the syndrome and the standard array, the error is estimated.
	In cyclic codes, the size of the standard array is considerably reduced.
Example of block codes
[Figure: P_B versus E_b/N_0 [dB] performance curves for coded and uncoded QPSK and 8PSK.]
Key formulas (reconstructed from the slides’ equation objects):
	Coding gain: G [dB] = (Eb/N0)uncoded [dB] − (Eb/N0)coded [dB]
	Code rate: R_c = k/n; redundant bits: n − k; energy per coded bit: E_c = R_c E_b
	Hamming distance and weight: d(U,V) = w(U ⊕ V); minimum distance: d_min = min_{i≠j} d(U_i,U_j) = min_i w(U_i)
	Detection and correction capability: e = d_min − 1; t = ⌊(d_min − 1)/2⌋
	Block error probability: P_M ≤ Σ_{j=t+1}^{n} C(n,j) p^j (1−p)^{n−j}; decoded bit error probability: P_B ≈ (1/n) Σ_{j=t+1}^{n} j C(n,j) p^j (1−p)^{n−j}
	Encoding: U = mG with G = [P I_k]; parity check: GH^T = 0 with H = [I_{n−k} P^T]; syndrome: S = rH^T = eH^T, where r = U + e
	Hamming codes (m ≥ 2): n = 2^m − 1, k = 2^m − 1 − m, n − k = m, t = 1
	Cyclic codes: X^i U(X) modulo (X^n + 1) = U^(i)(X); U(X) = m(X) g(X); g(X) h(X) = X^n + 1; systematic encoding: U(X) = p(X) + X^{n−k} m(X), with p(X) the remainder of X^{n−k} m(X) / g(X); syndrome: S(X) = remainder of r(X) / g(X)