Error Control Coding By Shu Lin
In information theory and coding theory with applications in computer science and telecommunication, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.
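As a minimal illustration of error detection (a toy sketch, not a scheme discussed in the text), a single even-parity bit appended to a data word detects any odd number of bit errors, though it cannot locate or correct them:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True if the word's overall parity is still even."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
assert parity_ok(word)         # uncorrupted word passes the check
word[2] ^= 1                   # a single flipped bit...
assert not parity_ok(word)     # ...is detected, but cannot be located
```

Note that two bit flips cancel out under parity, which is exactly why stronger codes with more redundancy are needed for correction.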
Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models where errors occur randomly and with a certain probability, and dynamic models where errors occur primarily in bursts. Consequently, error-detecting and correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors.
If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding.
Automatic repeat request (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to indicate that it has correctly received a data frame.
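The ACK/timeout loop can be sketched as a stop-and-wait sender and receiver (hypothetical helper names; a CRC-32 checksum stands in for the error-detection code, and a deliberately flaky channel stands in for channel noise):

```python
import zlib

def make_frame(seq, payload):
    """Frame = sequence byte + payload + CRC-32 over both."""
    body = bytes([seq]) + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def receive(frame):
    """Return (seq, payload) if the checksum verifies, else None (no ACK)."""
    body, crc = frame[:-4], frame[-4:]
    if zlib.crc32(body) != int.from_bytes(crc, "big"):
        return None
    return body[0], body[1:]

def send(payload, channel, max_tries=5):
    """Retransmit until an ACK (a non-None receive result) comes back."""
    for attempt in range(max_tries):
        frame = channel(make_frame(0, payload))   # channel may corrupt bits
        result = receive(frame)
        if result is not None:                    # ACK received
            return result[1], attempt + 1
    raise TimeoutError("no ACK after retries")

tries = {"n": 0}
def flaky(frame):
    """Corrupt one payload byte on the first transmission only."""
    tries["n"] += 1
    if tries["n"] == 1:
        return frame[:1] + bytes([frame[1] ^ 1]) + frame[2:]
    return frame

data, attempts = send(b"hello", flaky)
assert data == b"hello" and attempts == 2   # first try corrupted, second accepted
```

Real ARQ protocols add sequence-number windows and explicit NAKs, but the detect-then-retransmit loop is the same.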
Shannon's theorem is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols.
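Shannon's condition (code rate below capacity) is easy to check numerically for the binary symmetric channel, whose capacity is C = 1 − H2(p); the crossover probabilities below are arbitrary illustrative values:

```python
from math import log2

def h2(p):
    """Binary entropy function H2(p)."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per channel use) of a BSC with crossover probability p."""
    return 1.0 - h2(p)

# A rate-1/2 code can in principle operate reliably at p = 0.05...
assert 0.5 < bsc_capacity(0.05)
# ...but not at p = 0.2, where capacity has fallen below 1/2.
assert 0.5 > bsc_capacity(0.2)
```

At p = 0.5 the capacity is zero: the output is independent of the input, and no code of positive rate can communicate reliably.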
The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes which are both optimal and have efficient encoding and decoding algorithms.
Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy. Interleaving spreads the effect of a single cosmic ray, which can upset multiple physically neighboring bits, across multiple words by assigning neighboring bits to different words. As long as a single-event upset (SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.[22]
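The write-rows/read-columns idea behind interleaving can be sketched as follows (toy parameters, all-zero code words for simplicity — not the layout of any particular memory controller):

```python
def interleave(words):
    """Write code words as rows, transmit/store column by column."""
    return [bit for col in zip(*words) for bit in col]

def deinterleave(stream, n_words):
    """Recover the original words: word i takes every n_words-th bit."""
    return [stream[i::n_words] for i in range(n_words)]

words = [[0] * 8 for _ in range(4)]   # four 8-bit all-zero code words
tx = interleave(words)
for i in range(8, 12):                # a burst upsets 4 physically adjacent bits
    tx[i] ^= 1
rx = deinterleave(tx, 4)
# The burst lands as exactly one error per word, so a single-error-correcting
# code applied per word can repair all of it.
assert all(sum(w) == 1 for w in rx)
```

A depth-n interleaver turns any burst of length up to n into isolated single errors, at the cost of latency and buffering.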
From Problem 3.17, the first inequality guarantees the existence of a systematic linear code with minimum distance dmin.

Chapter 4

4.1 A parity-check matrix for the (15, 11) Hamming code is

H =
1 0 0 0 1 0 0 1 1 0 1 0 1 1 1
0 1 0 0 1 1 0 0 0 1 1 1 0 1 1
0 0 1 0 0 1 1 0 1 0 1 1 1 0 1
0 0 0 1 0 0 1 1 0 1 0 1 1 1 1

Let r = (r0, r1, ..., r14) be the received vector. The syndrome of r is (s0, s1, s2, s3) with

s0 = r0 + r4 + r7 + r8 + r10 + r12 + r13 + r14,
s1 = r1 + r4 + r5 + r9 + r10 + r11 + r13 + r14,
s2 = r2 + r5 + r6 + r8 + r10 + r11 + r12 + r14,
s3 = r3 + r6 + r7 + r9 + r11 + r12 + r13 + r14.

Set up the decoding table as Table 4.1. From the decoding table, each error digit is formed by ANDing the syndrome bits, complemented (written s') where the corresponding column of H has a 0:

e0 = s0·s1'·s2'·s3',  e1 = s0'·s1·s2'·s3',  e2 = s0'·s1'·s2·s3',
e3 = s0'·s1'·s2'·s3,  e4 = s0·s1·s2'·s3',  e5 = s0'·s1·s2·s3',
...,  e13 = s0·s1·s2'·s3,  e14 = s0·s1·s2·s3.

Table 4.1: Decoding table (syndrome (s0, s1, s2, s3) → error digit corrected)

1000 → e0    0100 → e1    0010 → e2    0001 → e3    1100 → e4
0110 → e5    0011 → e6    1001 → e7    1010 → e8    0101 → e9
1110 → e10   0111 → e11   1011 → e12   1101 → e13   1111 → e14

[Figure: decoder circuit — buffer register for r0, ..., r14; adders forming the syndrome bits s0, s1, s2, s3; AND gates forming e0, ..., e14; output adders e_j + r_j producing the decoded bits.]

4.3 From (4.3), the probability of an undetected error for a Hamming code is

Pu(E) = 2^-m {1 + (2^m − 1)(1 − 2p)^(2^(m−1))} − (1 − p)^(2^m − 1)
      = 2^-m + (1 − 2^-m)(1 − 2p)^(2^(m−1)) − (1 − p)^(2^m − 1).   (1)

Note that

(1 − p)^2 ≥ 1 − 2p,   (2)

and

1 − 2^-m ≥ 0.   (3)

Using (2) and (3) in (1), we obtain the following inequality:

Pu(E) ≤ 2^-m + (1 − 2^-m)(1 − p)^(2^m) − (1 − p)^(2^m − 1)
      = 2^-m + (1 − p)^(2^m − 1)[(1 − 2^-m)(1 − p) − 1] ≤ 2^-m.
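As a cross-check (an illustrative sketch, not the decoder circuit from the text), single-error correction for the (15, 11) Hamming code can be implemented directly from the rows of the parity-check matrix H given above:

```python
# Tap positions of the four syndrome bits, read off the rows of H.
H_ROWS = [
    {0, 4, 7, 8, 10, 12, 13, 14},   # s0
    {1, 4, 5, 9, 10, 11, 13, 14},   # s1
    {2, 5, 6, 8, 10, 11, 12, 14},   # s2
    {3, 6, 7, 9, 11, 12, 13, 14},   # s3
]
# Column j of H, as a 4-tuple; for a single error at position j the
# syndrome equals exactly this column.
COLUMNS = [tuple(int(j in row) for row in H_ROWS) for j in range(15)]

def decode(r):
    """Correct at most one bit error in the length-15 received vector r."""
    s = tuple(sum(r[j] for j in row) % 2 for row in H_ROWS)   # syndrome
    if s != (0, 0, 0, 0):
        r[COLUMNS.index(s)] ^= 1     # flip the bit whose column matches s
    return r

r = [0] * 15                          # the all-zero codeword
r[10] ^= 1                            # inject a single error
assert decode(r) == [0] * 15          # the error is located and corrected
```

Because the 15 columns of H are exactly the 15 distinct nonzero 4-tuples, every single-error syndrome points at a unique position.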
The probability of a decoding error is P(E) = 1 − P(C).

5.29 (a) Consider two single-error patterns, e1(X) = X^i and e2(X) = X^j, where j > i. Suppose that these two error patterns are in the same coset. Then X^i + X^j must be divisible by g(X) = (X^3 + 1)p(X). This implies that X^(j−i) + 1 must be divisible by p(X). This is impossible since j − i < n and n is the smallest positive integer such that p(X) divides X^n + 1. Therefore no two single-error patterns can be in the same coset. Consequently, all single-error patterns can be used as coset leaders.

Now consider a single-error pattern e1(X) = X^i and a double-adjacent-error pattern e2(X) = X^j + X^(j+1), where j > i. Suppose that e1(X) and e2(X) are in the same coset. Then X^i + X^j + X^(j+1) must be divisible by g(X) = (X^3 + 1)p(X). This is not possible, since g(X) has X + 1 as a factor but X^i + X^j + X^(j+1) does not have X + 1 as a factor. Hence no single-error pattern and double-adjacent-error pattern can be in the same coset.

Consider two double-adjacent-error patterns, X^i + X^(i+1) and X^j + X^(j+1), where j > i. Suppose that these two error patterns are in the same coset. Then X^i + X^(i+1) + X^j + X^(j+1) must be divisible by (X^3 + 1)p(X). Note that

X^i + X^(i+1) + X^j + X^(j+1) = X^i (X + 1)(X^(j−i) + 1).

For X^i (X + 1)(X^(j−i) + 1) to be divisible by p(X), X^(j−i) + 1 must be divisible by p(X). This is again not possible since j − i < n. Hence no two double-adjacent-error patterns can be in the same coset.

Consider a single-error pattern X^i and a triple-adjacent-error pattern X^j + X^(j+1) + X^(j+2). If these two error patterns are in the same coset, then X^i + X^j + X^(j+1) + X^(j+2) must be divisible by (X^3 + 1)p(X). But X^i + X^j + X^(j+1) + X^(j+2) = X^i + X^j (1 + X + X^2) is not divisible by X^3 + 1 = (X + 1)(X^2 + X + 1). Therefore, no single-error pattern and triple-adjacent-error pattern can be in the same coset.

Now we consider a double-adjacent-error pattern X^i + X^(i+1) and a triple-adjacent-error pattern X^j + X^(j+1) + X^(j+2). Suppose that these two error patterns are in the same coset. Then

X^i + X^(i+1) + X^j + X^(j+1) + X^(j+2) = X^i (X + 1) + X^j (X^2 + X + 1)

must be divisible by (X^3 + 1)p(X). This is not possible, since X^i + X^(i+1) + X^j + X^(j+1) + X^(j+2) does not have X + 1 as a factor but X^3 + 1 has X + 1 as a factor. Hence a double-adjacent-error pattern and a triple-adjacent-error pattern cannot be in the same coset.

Consider two triple-adjacent-error patterns, X^i + X^(i+1) + X^(i+2) and X^j + X^(j+1) + X^(j+2). If they are in the same coset, then their sum

X^i (X^2 + X + 1)(1 + X^(j−i))

must be divisible by (X^3 + 1)p(X), hence by p(X). Note that the degree of p(X) is 3 or greater. Hence p(X) and X^2 + X + 1 are relatively prime. As a result, p(X) must divide X^(j−i) + 1. Again this is not possible. Hence no two triple-adjacent-error patterns can be in the same coset.

Summarizing the above results, we see that all the single-, double-adjacent-, and triple-adjacent-error patterns can be used as coset leaders.

Chapter 6

6.1 (a) The elements β, β^2 and β^4 have the same minimal polynomial ψ1(X). From Table 2.9 we find that

ψ1(X) = 1 + X^3 + X^4.

The minimal polynomial of β^3 = α^21 = α^6 is

ψ3(X) = 1 + X + X^2 + X^3 + X^4.

Thus

g'(X) = LCM(ψ1(X), ψ3(X)) = (1 + X^3 + X^4)(1 + X + X^2 + X^3 + X^4) = 1 + X + X^2 + X^4 + X^8.

(b) In terms of powers of β,

H = [ 1  β    β^2  β^3  ...  β^14
      1  β^3  β^6  β^9  ...  β^42 ].

Expanding each power of β as a binary 4-tuple column gives

H =
1 1 1 0 1 0 1 1 0 0 1 0 0 0 1
0 1 0 0 0 1 1 1 1 0 1 0 1 1 0
0 0 0 1 1 1 1 0 1 0 1 1 0 0 1
0 1 1 1 1 0 1 0 1 1 0 0 1 0 0
1 0 1 0 0 1 0 1 0 0 1 0 1 0 0
0 0 1 0 1 0 0 1 0 1 0 0 1 0 1
0 1 1 0 0 0 1 1 0 0 0 1 1 0 0
0 1 1 1 1 0 1 1 1 1 0 1 1 1 1

(c) The reciprocal of g(X) in Example 6.1 is

X^8 g(X^−1) = X^8 (1 + X^−4 + X^−6 + X^−7 + X^−8) = X^8 + X^4 + X^2 + X + 1 = g'(X).

6.2 The table for GF(2^5) with p(X) = 1 + X^2 + X^5 is given in Table P.6.2(a). The minimal polynomials of the elements of GF(2^5) are given in Table P.6.2(b).
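The product (1 + X^3 + X^4)(1 + X + X^2 + X^3 + X^4) computed above can be checked with carry-less multiplication over GF(2), encoding each polynomial as an integer whose bit i holds the coefficient of X^i (a quick sketch; the helper name is mine):

```python
def pmul(a, b):
    """Multiply two GF(2)[X] polynomials packed into integer bit masks."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # addition over GF(2) is XOR
        a <<= 1                # shift = multiply by X
        b >>= 1
    return result

psi1 = 0b11001                 # 1 + X^3 + X^4
psi3 = 0b11111                 # 1 + X + X^2 + X^3 + X^4
assert pmul(psi1, psi3) == 0b100010111   # 1 + X + X^2 + X^4 + X^8
```

The same routine verifies small identities such as (1 + X)^2 = 1 + X^2, since cross terms cancel in characteristic 2.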
The generator polynomials of all the binary BCH codes of length 31 are given in Table P.6.2(c).

Table P.6.2(a) Galois field GF(2^5) with p(α) = 1 + α^2 + α^5 = 0

0                                (0 0 0 0 0)
1                                (1 0 0 0 0)
α                                (0 1 0 0 0)
α^2                              (0 0 1 0 0)
α^3                              (0 0 0 1 0)
α^4                              (0 0 0 0 1)
α^5  = 1 + α^2                   (1 0 1 0 0)
α^6  = α + α^3                   (0 1 0 1 0)
α^7  = α^2 + α^4                 (0 0 1 0 1)
α^8  = 1 + α^2 + α^3             (1 0 1 1 0)
α^9  = α + α^3 + α^4             (0 1 0 1 1)
α^10 = 1 + α^4                   (1 0 0 0 1)
α^11 = 1 + α + α^2               (1 1 1 0 0)
α^12 = α + α^2 + α^3             (0 1 1 1 0)
α^13 = α^2 + α^3 + α^4           (0 0 1 1 1)
α^14 = 1 + α^2 + α^3 + α^4       (1 0 1 1 1)
α^15 = 1 + α + α^2 + α^3 + α^4   (1 1 1 1 1)
α^16 = 1 + α + α^3 + α^4         (1 1 0 1 1)
α^17 = 1 + α + α^4               (1 1 0 0 1)
α^18 = 1 + α                     (1 1 0 0 0)
α^19 = α + α^2                   (0 1 1 0 0)
α^20 = α^2 + α^3                 (0 0 1 1 0)
α^21 = α^3 + α^4                 (0 0 0 1 1)
α^22 = 1 + α^2 + α^4             (1 0 1 0 1)
α^23 = 1 + α + α^2 + α^3         (1 1 1 1 0)
α^24 = α + α^2 + α^3 + α^4       (0 1 1 1 1)
α^25 = 1 + α^3 + α^4             (1 0 0 1 1)
α^26 = 1 + α + α^2 + α^4         (1 1 1 0 1)
α^27 = 1 + α + α^3               (1 1 0 1 0)
α^28 = α + α^2 + α^4             (0 1 1 0 1)
α^29 = 1 + α^3                   (1 0 0 1 0)
α^30 = α + α^4                   (0 1 0 0 1)

Table P.6.2(b) Minimal polynomials of the elements of GF(2^5)

Conjugate roots                      ψ_i(X)
1                                    1 + X
α, α^2, α^4, α^8, α^16               1 + X^2 + X^5
α^3, α^6, α^12, α^24, α^17           1 + X^2 + X^3 + X^4 + X^5
α^5, α^10, α^20, α^9, α^18           1 + X + X^2 + X^4 + X^5
α^7, α^14, α^28, α^25, α^19          1 + X + X^2 + X^3 + X^5
α^11, α^22, α^13, α^26, α^21         1 + X + X^3 + X^4 + X^5
α^15, α^30, α^29, α^27, α^23         1 + X^3 + X^5

Table P.6.2(c) Binary BCH codes of length 31

n    k    t    g(X)
31   26   1    g1(X) = 1 + X^2 + X^5
31   21   2    g2(X) = ψ1(X)ψ3(X)
31   16   3    g3(X) = ψ1(X)ψ3(X)ψ5(X)
31   11   5    g4(X) = ψ1(X)ψ3(X)ψ5(X)ψ7(X)
31   6    7    g5(X) = ψ1(X)ψ3(X)ψ5(X)ψ7(X)ψ11(X)

6.3 (a) Use the table for GF(2^5) constructed in Problem 6.2. The syndrome components of r1(X) = X^7 + X^30 are:

S1 = r1(α)   = α^7 + α^30  = α^19,
S2 = r1(α^2) = α^14 + α^29 = α^7,
S3 = r1(α^3) = α^21 + α^28 = α^12,
S4 = r1(α^4) = α^28 + α^27 = α^14.

The iterative procedure for finding the error-location polynomial is shown in Table P.6.3(a).

Table P.6.3(a)

μ      σ^(μ)(X)                 d_μ     l_μ    2μ − l_μ
−1/2   1                        1       0      −1
0      1                        α^19    0      0
1      1 + α^19 X               α^25    1      1    (take ρ = −1/2)
2      1 + α^19 X + α^6 X^2     —       2      2    (take ρ = 0)

Hence σ(X) = 1 + α^19 X + α^6 X^2. Substituting the nonzero elements of GF(2^5) into σ(X), we find that σ(X) has α and α^24 as roots. Hence the error-location numbers are α^−1 = α^30 and α^−24 = α^7. As a result, the error polynomial is e(X) = X^7 + X^30. The decoder decodes r1(X) into r1(X) + e(X) = 0.

(b) Now we consider the decoding of r2(X) = 1 + X^17 + X^28.
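Table P.6.2(a) can be regenerated programmatically: represent each field element as a 5-bit integer whose bit i holds the coefficient of α^i, and multiply by α by shifting left and reducing α^5 to 1 + α^2 (an illustrative sketch):

```python
def gf32_powers():
    """Return [alpha^0, alpha^1, ..., alpha^30] for GF(2^5), p(X) = 1 + X^2 + X^5."""
    powers, x = [], 1
    for _ in range(31):
        powers.append(x)
        x <<= 1                            # multiply by alpha
        if x & 0b100000:                   # degree-5 term appeared:
            x = (x ^ 0b100000) ^ 0b00101   # replace alpha^5 by 1 + alpha^2
    return powers

p = gf32_powers()
assert len(set(p)) == 31       # alpha is primitive: 31 distinct nonzero powers
assert p[10] == 0b10001        # alpha^10 = 1 + alpha^4, matching the table
assert p[19] == 0b00110        # alpha^19 = alpha + alpha^2
assert p[29] == 0b01001        # alpha^29 = 1 + alpha^3
```

The same recurrence underlies a linear-feedback shift register with taps at the nonzero coefficients of p(X).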
The syndrome components of r2(X) are:

S1 = r2(α)   = α^2,
S2 = S1^2    = α^4,
S4 = S2^2    = α^8,
S3 = r2(α^3) = α^21.

The error-location polynomial σ(X) is found by filling Table P.6.3(b):

Table P.6.3(b)

μ      σ^(μ)(X)                 d_μ     l_μ    2μ − l_μ
−1/2   1                        1       0      −1
0      1                        α^2     0      0
1      1 + α^2 X                α^30    1      1    (take ρ = −1/2)
2      1 + α^2 X + α^28 X^2     —       2      2    (take ρ = 0)

The estimated error-location polynomial is

σ(X) = 1 + α^2 X + α^28 X^2.

This polynomial does not have roots in GF(2^5); hence r2(X) cannot be decoded and must contain more than two errors.

6.4 Let n = ℓ(2t + 1). Then

X^n + 1 = (X^ℓ + 1)(X^(2tℓ) + X^((2t−1)ℓ) + ... + X^ℓ + 1).

The roots of X^ℓ + 1 are 1, α^(2t+1), α^(2(2t+1)), ..., α^((ℓ−1)(2t+1)). Hence α, α^2, ..., α^(2t) are roots of the polynomial

u(X) = 1 + X^ℓ + X^(2ℓ) + ... + X^((2t−1)ℓ) + X^(2tℓ).

This implies that u(X) is a code polynomial of weight 2t + 1. Thus the code has minimum distance exactly 2t + 1.

6.5 Consider the Galois field GF(2^(2m)). Note that 2^(2m) − 1 = (2^m − 1)(2^m + 1). Let α be a primitive element in GF(2^(2m)). Then β = α^(2^m − 1) is an element of order 2^m + 1. The elements 1, β, β^2, ..., β^(2^m) are all the roots of X^(2^m + 1) + 1. Let ψ_i(X) be the minimal polynomial of β^i. Then a t-error-correcting non-primitive BCH code of length n = 2^m + 1 is generated by

g(X) = LCM{ψ1(X), ψ2(X), ..., ψ_2t(X)}.

6.10 Use Tables 6.2 and 6.3. The minimal polynomial for β^2 = α^6 and β^4 = α^12 is

ψ2(X) = 1 + X + X^2 + X^4 + X^6.

The minimal polynomial for β^3 = α^9 is

ψ3(X) = 1 + X^2 + X^3.

The minimal polynomial for β^5 = α^15 is

ψ5(X) = 1 + X^2 + X^4 + X^5 + X^6.

Hence

g(X) = ψ2(X)ψ3(X)ψ5(X).

The orders of β^2, β^3 and β^5 are 21, 7 and 21, respectively. Thus the length is

n = LCM(21, 7, 21) = 21,

and the code is a double-error-correcting (21, 6) BCH code.

6.11 (a) Let u(X) be a code polynomial and u*(X) = X^(n−1) u(X^−1) be the reciprocal of u(X). A cyclic code is said to be reversible if, whenever u(X) is a code polynomial, u*(X) is also a code polynomial. Consider

u*(α^i) = α^((n−1)i) u(α^−i).

Since u(α^i) = 0 for −t ≤ i ≤ t, we see that u*(X) has α^−t, ..., α^−1, 1, α, ..., α^t as roots and is a multiple of the generator polynomial g(X). Therefore u*(X) is a code polynomial.

(b) If t is odd, t + 1 is even. Hence α^(t+1) is the conjugate of α^((t+1)/2) and α^−(t+1) is the conjugate of α^−((t+1)/2). Thus α^(t+1) and α^−(t+1) are also roots of the generator polynomial.
It follows from the BCH bound that the code has minimum distance 2t + 4 (since the generator polynomial has 2t + 3 consecutive powers of α as roots).

Chapter 7

7.2 The generator polynomial of the double-error-correcting RS code over GF(2^5) is

g(X) = (X + α)(X + α^2)(X + α^3)(X + α^4)
     = α^10 + α^29 X + α^19 X^2 + α^24 X^3 + X^4.

The generator polynomial of the triple-error-correcting RS code over GF(2^5) is

g(X) = (X + α)(X + α^2)(X + α^3)(X + α^4)(X + α^5)(X + α^6)
     = α^21 + α^24 X + α^16 X^2 + α^24 X^3 + α^9 X^4 + α^10 X^5 + X^6.

7.4 The syndrome components of the received polynomial are:

S1 = r(α)   = α^7 + α^2 + α     = α^13,
S2 = r(α^2) = α^10 + α^10 + α^14 = α^14,
S3 = r(α^3) = α^13 + α^3 + α^12  = α^9,
S4 = r(α^4) = α + α^11 + α^10    = α^7,
S5 = r(α^5) = α^4 + α^4 + α^8    = α^8,
S6 = r(α^6) = α^7 + α^12 + α^6   = α^3.

The iterative procedure for finding the error-location polynomial is shown in Table P.7.4. The error-location polynomial is

σ(X) = 1 + α^9 X^3.

The roots of this polynomial are α^2, α^7, and α^12. Hence the error-location numbers are α^3, α^8, and α^13.

Table P.7.4

μ     σ^(μ)(X)                  d_μ     l_μ    μ − l_μ
−1    1                         1       0      −1
0     1                         α^13    0      0
1     1 + α^13 X                α^10    1      0    (take ρ = −1)
2     1 + α X                   α^7     1      1    (take ρ = 0)
3     1 + α^13 X + α^10 X^2     α^9     2      1    (take ρ = 1)
4     1 + α^14 X + α^12 X^2     α^8     2      2    (take ρ = 2)
5     1 + α^9 X^3               0       3      2    (take ρ = 3)
6     1 + α^9 X^3

From the syndrome components of the received polynomial and the coefficients of the error-location polynomial, we find the error-value evaluator,

Z0(X) = S1 + (S2 + σ1 S1)X + (S3 + σ1 S2 + σ2 S1)X^2
      = α^13 + (α^14 + 0·α^13)X + (α^9 + 0·α^14 + 0·α^13)X^2
      = α^13 + α^14 X + α^9 X^2.

The error values at the positions X^3, X^8, and X^13 are:

e3 = Z0(α^−3) / [(1 + α^8 α^−3)(1 + α^13 α^−3)], ...
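The RS generator polynomials for GF(2^5) worked out above can be verified numerically with log/antilog tables built from p(X) = 1 + X^2 + X^5 (an illustrative sketch; the helper names are mine):

```python
def build_tables():
    """Log/antilog tables for GF(2^5) with alpha^5 = 1 + alpha^2."""
    exp, log = [0] * 62, [0] * 32
    x = 1
    for i in range(31):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0b100000:
            x = (x ^ 0b100000) ^ 0b00101
    for i in range(31, 62):          # wrap so exponent sums need no reduction
        exp[i] = exp[i - 31]
    return exp, log

EXP, LOG = build_tables()

def gmul(a, b):
    """Multiply two GF(2^5) elements via logarithms."""
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    """Multiply polynomials over GF(2^5); coefficients listed low degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gmul(a, b)   # coefficient addition is XOR
    return out

def rs_generator(n_roots):
    """g(X) = (X + alpha)(X + alpha^2)...(X + alpha^n_roots)."""
    g = [1]
    for i in range(1, n_roots + 1):
        g = poly_mul(g, [EXP[i], 1])
    return g

# Double-error-correcting code: alpha^10 + alpha^29 X + alpha^19 X^2 + alpha^24 X^3 + X^4
assert rs_generator(4) == [EXP[10], EXP[29], EXP[19], EXP[24], 1]
```

Running the same check with six roots reproduces the triple-error-correcting generator coefficient for coefficient.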