Compression and Robust Transmission of Images Over Wireless Channels

Abstract

In this dissertation we study compression and robust transmission of images over wireless channels. Because of the impairments associated with wireless channels, image communication over them requires error-resilient coding schemes that offer good compression and low complexity. We propose an analysis-by-synthesis coding technique called Variable Block-size two-dimensional Code Excited Linear Predictive (VB 2D-CELP) coding that implements block-adaptive prediction and variable block-size coding. The method can be used for still-picture coding or for the periodic intra-frame coding required in time-varying image coding. The scheme demonstrates its merits over the DCT-based JPEG standard by reducing the block effects of the DCT method while having low decoder complexity and offering provisions for error resilience. Another important problem studied in this dissertation is the transmission of images over wireless channels, in particular over CDMA Rayleigh fading channels. We develop a robust coding scheme and propose error-resilient tools that are implemented in the source coding scheme to mitigate the effect of uncorrected channel errors and to limit error propagation. Source error detection and concealment techniques are implemented based either on the constraints of the source coding scheme or on the separation of the response of each image block into its zero-input and zero-state responses. Based on an investigation of the error sensitivity of the bit-stream of coded images, we propose and investigate strategies that combine error-resilient source coding with channel error control to provide robust transmission. A priori knowledge of the bit sensitivity of the different types of information in the compressed image data enables efficient unequal error protection. For the channel error control, we investigate a type-I hybrid ARQ protocol using concatenated Reed-Solomon/convolutional coding.
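As an illustration of the type-I hybrid ARQ principle mentioned above, the retransmission logic can be sketched as follows. This is a minimal sketch, not the protocol analyzed in the dissertation: the concatenated Reed-Solomon/convolutional decoding stage is abstracted into a per-attempt success flag, and the function name and parameters are illustrative placeholders.

```python
def type1_hybrid_arq(decode_results, max_retx):
    """Deliver one packet under a (possibly truncated) type-I hybrid ARQ
    protocol.  decode_results[i] is True when the concatenated
    Reed-Solomon/convolutional decoder accepts the packet on attempt i.
    Returns the number of transmissions used, or None when the protocol
    is truncated after max_retx retransmissions and the packet is lost.
    """
    for attempt in range(max_retx + 1):
        if attempt < len(decode_results) and decode_results[attempt]:
            return attempt + 1  # ACK: packet accepted on this attempt
        # NAK: decoding failed, request a retransmission (if allowed)
    return None

# One retransmission is enough here: two transmissions in total.
print(type1_hybrid_arq([False, True], max_retx=3))   # prints 2
# Truncated protocol: the packet is declared lost after one retransmission.
print(type1_hybrid_arq([False, False], max_retx=1))  # prints None
```

In a type-I scheme every attempt carries the same fully FEC-coded packet, so the receiver either accepts a decoded packet or discards it and asks for a fresh copy; truncating the number of retransmissions bounds the delay at the cost of residual packet loss.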
We study the system performance for two extreme channel conditions: the perfectly interleaved channel and a quasi-static, highly correlated channel. For applications with different quality-of-service requirements, we study the system performance in terms of reliability and transmission delay, and examine the effect of outer interleaving and of the maximum number of retransmissions on the system performance using a quasi-analytical method for the case of the channel with non-independent errors. Finally, a coding control technique that dynamically adapts the source coder rate and the channel error control to the channel condition is proposed. Based on an estimate of the channel condition, the rate control adapts the source coder rate so that the compression ratio is increased to provide stronger channel protection when the channel is severe, and the source rate is increased to provide better performance when the conditions are favorable.
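The adaptive control just described can be sketched as a simple mode-selection rule driven by a channel estimate. The mode table, SNR thresholds, and function name below are hypothetical placeholders used only to illustrate the trade-off, not the control law developed in the dissertation.

```python
def select_coding_mode(channel_snr_db, modes, snr_thresholds):
    """Pick a (source bit rate, channel code rate) pair from an estimate
    of the channel condition: a severe channel gets a lower source rate
    (higher compression ratio) so that stronger channel protection fits
    in the same bandwidth; a favorable channel gets a higher source rate
    for better reconstruction quality.

    modes: list ordered from most protected to least protected.
    snr_thresholds: ascending SNR breakpoints separating the modes,
    with len(modes) == len(snr_thresholds) + 1.
    """
    for i, threshold in enumerate(snr_thresholds):
        if channel_snr_db < threshold:
            return modes[i]
    return modes[-1]

# Hypothetical mode table: (source rate in bpp, overall channel code rate).
modes = [(0.25, 1/3), (0.40, 1/2), (0.55, 2/3)]
severe = select_coding_mode(2.0, modes, [3.0, 6.0])     # most protection
favorable = select_coding_mode(8.0, modes, [3.0, 6.0])  # highest source rate
```

Here `severe` selects the 0.25 bpp mode with the strongest channel code, while `favorable` selects the 0.55 bpp mode; the total transmitted volume stays roughly constant while the split between source and channel bits follows the channel state.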

Sommaire

This study deals with the coding and robust transmission of images over mobile radio channels. These channels can be strongly noisy, which calls for robust coding techniques that offer both a high compression ratio and reduced complexity. We propose a coding method based on analysis-by-synthesis processing, namely Variable Block-size two-dimensional Code Excited Linear Predictive (VB 2D-CELP) coding. The method can be used for low-rate coding of still images as well as of frames of image sequences. The study allows us to conclude that, at a given compression ratio, VB 2D-CELP coding provides images of better quality than the DCT. Moreover, the VB 2D-CELP decoder is less complex than the DCT decoder adopted in the JPEG standard and is robust to transmission errors. We also consider the transmission of coded images over a CDMA link characterized by Rayleigh fading. We develop a variant of the source coder so that it can detect and then conceal the effect of transmission errors that were not corrected or detected by the channel coding. These techniques are based on the decoding constraints or on the separation of the response in each image block into zero-input and zero-state responses. For robust transmission, we propose, on the basis of the sensitivity of the coded image data to transmission errors, error-control strategies that adapt the channel coding to the different types of source information to be transmitted. For the channel coding, we study a type-I hybrid scheme that uses the concatenation of a Reed-Solomon (RS) code with a convolutional code.

We evaluate the system performance in two cases: a channel with independent Rayleigh fading for systems without delay constraints, and a quasi-static channel in which the fading varies slowly over the duration of a data packet. In the case of the channel with non-independent Rayleigh fading, a quasi-analytical method is developed to evaluate the performance for applications with different quality-of-service constraints. This method allows us to evaluate the performance as a function of the effect of interleaving on the RS symbols and of the truncation of retransmissions, and thus to examine the reliability and the transmission delay of the communication. Since automatic retransmission on errors can introduce undesirable delays, it is preferable to adapt the volume of data to be transmitted to the state of the channel. We propose a rate-control method that adapts the source rate to the channel state. Under unfavorable channel conditions, the volume of data to be transmitted with retransmission is reduced so that the error-sensitive bits can be better protected. When conditions are favorable, the source rate is increased to allow a better-quality reconstruction at the receiver.

Acknowledgments

First and foremost I wish to express my gratitude to my supervisor, Professor Eric Dubois, for his guidance and continuous support throughout my research. With his insights and suggestions, Dr Dubois made this research both possible and interesting. My special thanks go to my family, most of all my parents and my husband, for their love, support, encouragement and understanding. Special thanks to all the sources of funding for this research: I gratefully acknowledge the support of my supervisor, the NSERC and GDC. I would also like to express appreciation to Professor Susumu Yoshida, who made my research visit to his laboratory at Kyoto University both possible and fruitful. I am also thankful to Dr Charles Despins of Microcell Labs for constructive remarks on the subject of channel coding. Special thanks to all the professional and technical members at the Institut National de la Recherche Scientifique (INRS)-Télécommunications, who gave me the opportunity to have access to their laboratory facilities. Thanks to Professor Amar Mitiche, Albert, Slah, Oku, other past and current members of the visual communications group at INRS, and many others at INRS, McGill University, Nortel, Kyoto University and NTT. Finally, and most of all, I would like to thank God, who is my greatest support and inspiration.

Contents

1 Introduction
  1.1 The future of wireless multimedia communications
    1.1.1 The future of wireless communications
    1.1.2 Wireless multimedia communications
  1.2 Image coding and wireless transmission
    1.2.1 Requirements for image wireless communications
    1.2.2 Source and channel coding dilemma
    1.2.3 Advantages of error resilience
  1.3 Research aims
    1.3.1 General objective
    1.3.2 Research strategy
  1.4 Structure and contributions of the dissertation

2 Wireless Image Communications
  2.1 Image coding techniques
    2.1.1 Digital communication systems and compression
    2.1.2 Conventional techniques
    2.1.3 Block-based image coding
    2.1.4 Advanced techniques
  2.2 Wireless image communication systems
    2.2.1 Performance evaluation: is comparison possible?
    2.2.2 Effects of channel errors
    2.2.3 Preventing channel errors
    2.2.4 Approaches for image transmission over wireless channels
  2.3 Summary

3 Analysis-by-Synthesis Coding System
  3.1 Introduction
  3.2 An overview of JPEG
    3.2.1 Problems with JPEG
  3.3 Analysis-by-synthesis coding
  3.4 2D-CELP coding system
    3.4.1 Definitions and notation
    3.4.2 2D-CELP decoder
    3.4.3 2D-CELP encoder
    3.4.4 Residual codebook design
  3.5 Block-adaptive prediction
    3.5.1 Predictor design algorithm
    3.5.2 Design issues
    3.5.3 Open loop predictor design results
  3.6 Variable block-size 2D-CELP coding
    3.6.1 Variable block-size coding
    3.6.2 Notation
    3.6.3 Variable block-size coding concept
    3.6.4 Threshold of block subdivision
    3.6.5 Quad-tree structure encoding
    3.6.6 Codeword multiplexing
  3.7 Image transmission over a noiseless channel
  3.8 Summary

4 The Wireless Transmission Environment and Channel Error Control
  4.1 Introduction
  4.2 DS-CDMA transmission environment
    4.2.1 Direct-sequence CDMA
    4.2.2 Transmission impairments
    4.2.3 Transmission system requirements
  4.3 Error control techniques for the wireless channel
    4.3.1 Forward error correction in fading channels
    4.3.2 Automatic Repeat reQuest schemes
    4.3.3 Hybrid ARQ
  4.4 Transmission requirements and choice of error control protocol
    4.4.1 Error control for QoS requirements
    4.4.2 Choice of error control protocol
    4.4.3 Type-I RS/CC hybrid ARQ error control
    4.4.4 Delay-limited coding
  4.5 The system and its model
    4.5.1 Uplink transceiver description
    4.5.2 Channel model
    4.5.3 Simulation parameters
  4.6 Summary

5 Protocol Performance Analysis in CDMA Rayleigh Fading Channels
  5.1 Introduction
  5.2 Type-I RS/CC hybrid ARQ error control
    5.2.1 Principle of the transmission protocol
    5.2.2 Interleaving and ARQ truncation
  5.3 Performance evaluation criteria
    5.3.1 Reliability
    5.3.2 Throughput
    5.3.3 Transmission delay and queuing delay
  5.4 Performance of FEC scheme over memoryless channel
    5.4.1 BER performance
    5.4.2 Concatenated coding scheme performance
  5.5 Performance of the hybrid ARQ protocol on a memoryless channel
    5.5.1 Throughput efficiency
    5.5.2 Protocol error probability
    5.5.3 Average transmission delay analysis
    5.5.4 Results and discussion
    5.5.5 Summary
  5.6 Protocol performance evaluation in the presence of non-independent errors
    5.6.1 Markovian analysis
    5.6.2 RS decoder performance
    5.6.3 Reliability, throughput and transmission delay
    5.6.4 Results and discussion
  5.7 Summary

6 Transmission of VB 2D-CELP Coded Images over Noisy Channels
  6.1 Error sensitivity analysis
    6.1.1 Error propagation due to loss of vital information
    6.1.2 Error propagation due to incorrect predictions
    6.1.3 Error propagation due to loss of codeword synchronization
    6.1.4 Error propagation due to loss of coefficient synchronization
    6.1.5 Effect of channel error on variable block-size coding
  6.2 Resilience to channel errors
    6.2.1 Robust predictive coding
    6.2.2 Data frame structure
    6.2.3 Decoder error detection
    6.2.4 Decoder error concealment
    6.2.5 Backward and forward decoding
  6.3 Results of transmission of coded images
    6.3.1 The limiting case of the memoryless channel
    6.3.2 A quasi-static highly correlated channel
  6.4 Summary

7 Source and Channel Coding Interdependency
  7.1 Introduction
  7.2 Enhanced VB 2D-CELP coding scheme
    7.2.1 Coding scheme description
    7.2.2 Separation of ZIR and ZSR
    7.2.3 Effects of channel errors
    7.2.4 Coded data structure
    7.2.5 Consequences on codebook indexing
  7.3 A bit-stream structure for improved error-resilience
    7.3.1 Error sensitivity analysis
    7.3.2 Bit-stream of the VB 2D-CELP compressed image
  7.4 Coding performance improvement
    7.4.1 Results: a comparative study
    7.4.2 Summary
  7.5 Coding with dynamic rate control
    7.5.1 Delay-limited adaptive coding
    7.5.2 Coding with variable compression
    7.5.3 Adaptive control scheme
    7.5.4 Results and conclusions
  7.6 Summary

8 Conclusions and Further Research
  8.1 Summary and contributions of the dissertation
    8.1.1 Compression coding
    8.1.2 Error control strategy
    8.1.3 Error resilient coding
    8.1.4 Combined source/channel coding strategies
    8.1.5 Summary of contributions
  8.2 Topics for future research
    8.2.1 Coding for image compression
    8.2.2 Channel error control
    8.2.3 Transmission environment
    8.2.4 Error resilience
    8.2.5 Image transmission performance evaluation
  8.3 Concluding remarks

Bibliography

List of Figures

1.1  Conventional coding strategy.
2.1  Block-based image coding.
3.1  Analysis-by-synthesis procedure.
3.2  2D-CELP decoder.
3.3  2D-CELP encoder.
3.4  Geometry of two-dimensional predictor support.
3.5  Block shapes that ensure causal computability for P2 = 1, 2, 3.
3.6  Orientation of initial predictors.
3.7  Oriented predictors.
3.8  Variable Block-size 2D-CELP coding flowchart.
3.9  Original image "boat".
3.10 Enlarged window of "boat".
3.11 Enlarged window of 2D-CELP coded "boat" with K = 1, block size 4 x 4: bit rate=0.4 bpp, PSNR=31.39 dB.
3.12 Enlarged window of 2D-CELP coded "boat" with K = 5, block size 4 x 4: bit rate=0.39 bpp, PSNR=30.89 dB.
3.13 Variable block-size segmentation for image "boat": high-activity region (block size 2 x 2) represented by 0 gray level and low-activity region (block size 4 x 4) represented by light shade.
3.14 Variable block-size segmentation for image "lena": high-activity region (block size 2 x 2) represented by 0 gray level and low-activity region (block size 4 x 4) represented by light shade.


3.15 Variable block-size distribution for image "boat": large blocks are those in light shade; the three regions from brighter to darker represent 8 x 8, 4 x 4 and 2 x 2 sizes.
3.16 Variable block-size distribution for image "lena": large blocks are those in light shade; the three regions from brighter to darker represent 8 x 8, 4 x 4 and 2 x 2 sizes.
3.17 PSNR performance for image "boat" coded with 2D-CELP using a fixed block size of 4 x 4, and with VB 2D-CELP using variable block-size coding.
3.18 Performance of VB 2D-CELP coding for the image "boat" versus JPEG.
3.19 Performance of the VB 2D-CELP coding system for image "lena" versus JPEG.
3.20 Image "boat" coded with VB 2D-CELP using two block sizes 4 x 4 and 2 x 2: bit rate=0.545 bpp, PSNR=33.58 dB.
3.21 Image "boat" coded with VB 2D-CELP using three block sizes 8 x 8, 4 x 4 and 2 x 2: bit rate=0.519 bpp, PSNR=34.45 dB.
3.22 JPEG coded "boat": bit rate=0.543 bpp, PSNR=33.40 dB.
3.23 JPEG coded "boat": bit rate=0.51 bpp, PSNR=33.0 dB.
3.24 Enlarged window of "boat" coded with VB 2D-CELP using two block sizes 4 x 4 and 2 x 2: bit rate=0.545 bpp, PSNR=33.58 dB.
3.25 Enlarged window of "boat" coded with VB 2D-CELP using three block sizes 8 x 8, 4 x 4 and 2 x 2: bit rate=0.519 bpp, PSNR=34.45 dB.
3.26 Enlarged window of JPEG coded "boat": bit rate=0.543 bpp, PSNR=33.40 dB.
3.27 Enlarged window of JPEG coded "boat": bit rate=0.51 bpp, PSNR=33.0 dB.
3.28 Image "lena" coded with VB 2D-CELP using two block sizes 4 x 4 and 2 x 2: bit rate=0.517 bpp, PSNR=34.89 dB.
3.29 JPEG coded "lena": bit rate=0.51 bpp, PSNR=34.74 dB.
3.30 Image "lena" coded with VB 2D-CELP using three block sizes 8 x 8, 4 x 4 and 2 x 2: bit rate=0.459 bpp, PSNR=35.19 dB.
3.31 JPEG coded "lena": bit rate=0.462 bpp, PSNR=34.26 dB.
4.1  Uplink transmitter block diagram.
4.2  Uplink receiver block diagram.

List of Figures 5.1 BER performance of the Rayleigh fading channel with infinite interleaving and one or twc-branch diversity: comparison between simulations and analytical bounds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Concatenated coding scheme probability of total error Pt: comparison between simulation results and upper analytical bound. . . . . . . . . . . . . 5.3 Concatenated scheme probability of total error Pt as function of RS interleaving depth I; I = 1 corresponds to no interleaving. . . . . . . . . . . . 5.4 Effect of RS outer interleaving depth I on the Protocol error probability of the untruncated protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Protocol error probability of the truncated protocol as function of maximum number of retransmissions L and outer interleaving depth I. . . . . . . . . 5.6 Untruncated protocol throughput performance: comparison between simulation results and upper theoretical bound. . . . . . . . . . . . . . . . . . 5.7 Truncated protocol throughput performance as function of maximum number of retransmissions L and outer interleaving depth I. . . . . . . . . . . 5.8 Average normalized delay of the untruncated protocol: comparison between simulation results and upper theoretical bound. . . . . . . . . . . . . . . . 5.9 Average normalized delay of the untruncated protocol as function of RS interleaving depth I. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.10 Average normalized delay of the truncated protocol as function of maximum number of retransmissions L and outer interleaving depth I. . . . . . . . . 5.11 Average transmission delay of the truncated protocol as function of maximum number of retransmissions L and RS interleaving depth I. . . . . . . 5.12 Average transmission delay of the untruncated protocol as function of RS interleaving depth I. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
5.13 Average transmission delay of the truncated protocol as function of reliability represented for different values of RS interleaving depth I. . . . . . . . . . 5.14 Concatenated scheme performance: probability of total error Pt as function of RS interleaving depth I. . . . . . . . . . . . . . . . . . . . . . . . . . . 5.15 Protocol error probability of the untruncated protocol as function of RS interleaving depth I. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.16 Untruncated protocol throughput performance as function of RS interleaving depth I. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

xiii

List of Figures

xiv

5.17 Untruncated protocol delay performance as function of RS interleaving depth

I.

.........................................

5.18 Throughput of the untruncated protocol: comparison between simulation and analytical results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.19 Protocol error probability of the untruncated protocol: comparison between simulation and analytical results. . . . . . . . . . . . . . . . . . . . . . . . 5.20 Average transmission delay of the untruncated protocol: comparison between simulation and analytical results. . . . . . . . . . . . . . . . . . . . 5.21 Throughput of the truncated protocol: comparison between simulation and analytical results for the no-interleaving case. . . . . . . . . . . . . . . . . 5.22 Reliability of the truncated protocol: comparison between simulation and analytical results for the no-interleaving case. . . . . . . . . . . . . . . . . 5.23 Average transmission delay of the truncated protocol: comparison between simulation and analytical results for the no-interleaving case. . . . . . . . 5.24 Protocol error probability of the truncated protocol: comparison between simulation and analytical results for the ideal interleaving case. . . . . . . 5.25 The system throughput for the different allowed maximum number of retransmission attempts L. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.26 Effect of the outer interleaving on the system throughput for the different allowed maximum number of retransmissions L. . . . . . . . . . . . . . . . 5.27 Protocol error probability of the truncated protocol for the different allowed maximum number of retransmissions L. . . . . . . . . . . . . . . . . . . . 5.28 Truncated protocol delay performance for the different allowed maximum number of retransmissions L, and two degrees of interleaving depth I. . . 5.29 Average transmission delay as function of the probability of total error for the different allowed maximum number of retransmissions L and no interleaving. 
5.30 Average transmission delay as function of reliability for the different allowed maximum number of retransmissions and no interleaving. . . . . . . . . . 6.1 6.2 6.3 6.4

Predictive coder feedback loop. . . . . . . . . . . . . . . . . . . . . Impulse response of synthesis filter corresponding to predictor H?. Impulse response of synthesis filter corresponding to predictor H!). Impulse response of synthesis filter corresponding to predictor

~2

.... .... .... ' .. . . .

List of Figures Impulse response of synthesis filter corresponding to predictor H!' . . . . . Impulse response of synthesis filter corresponding to predictor H?) . . . . . Framing of the encoded data. . . . . . . . . . . . . . . . . . . . . . . . . . Synchronization and error concealment . . . . . . . . . . . . . . . . . . . . An image example of the error detection algorithm . . . . . . . . . . . . . The flowchart of error detection and concealment algorithm . . . . . . . . . Overview of the simulation blocks. . . . . . . . . . . . . . . . . . . . . . . Original image "lena" . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reconstructed "lena" over a noiseless channel: source bit rate=0.567 bpp, PSNR=34.88 dB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.14 Best reconstructed image a t Eb/No = 3 dB with L = 0, using backward decoding and no concealment: Max-PSNR=9.41 dB. . . . . . . . . . . . . 6.15 Best reconstructed image at Eb/No = 3 dB with L = 0, using backward decoding and GOB concealment: Max-PSNR=26.99 dB. . . . . . . . . . . 6.16 Best reconstructed image a t Eb/No = 3 dB with L = 0, with backward decoding and Line concealment: Max-PSNR=27.15 dB. . . . . . . . . . . 6.17 Best reconstructed image at Eb/No = 3 dB with L = 0, with no backward decoding and using Line concealment: Max-PSNR=20.47 dB. . . . . . . . 6.18 Worst reconstructed image at Eb/No = 3 dB with L = 1, with no backward decoding and no concealment: Min-PSNR=10.90 dB. . . . . . . . . . . . . 6.19 Worst reconstructed image at Eb/No = 3 dB with L = 1, using backward decoding and no concealment: Min-PSNR=13.40 dB. . . . . . . . . . . . . 6.20 Worst reconstructed image at Eb/No = 3 dB with L = 1, using backward decoding and GOB concealment: Min-PSNR=26.99 dB. . . . . . . . . . . 6.21 Worst reconstructed image at Eb/No = 3 dB with L = 1, using backward decoding and Line concealment: Min-PSNR=27.13 dB. . . . . . . . . . . . 
6.22 Mean PSNR performance as function of Eb/N0 for "lena": comparison between Line concealment and no concealment using BD for L = 0, 1 and 3. 6.23 Mean PSNR performance as function of Eb/No for "lena": comparison between BD and forward decoding only, using Line concealment for L=O, 1 a n d 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6.5 6.6 6.7 6.8 6.9 6.10 6.11 6.12 6.13

xv

List of Figures

6.24 Image "lena": Mean PSNR performance as a function of FER: comparison between BD and forward decoding only, using Line concealment for L = 0, 1 and 3.
6.25 Mean PSNR performance as a function of Eb/No for "lena": comparison between Line concealment and no concealment using backward decoding (BD) for L = 0, 1 and 3 in the case of a correlated channel.
6.26 Image "lena": Mean PSNR performance as a function of FER for a correlated channel, using backward decoding and Line concealment for L = 0, 1 and 3.
6.27 Effect of RS interleaving depth I on the Mean PSNR performance for "lena".
6.28 Effect of RS interleaving depth I on the Minimum PSNR performance for "lena".
6.29 Image "lena": Mean PSNR performance as a function of FER for a correlated channel, using backward decoding and Line concealment: influence of L and I.
6.30 Best decoded image using GOB concealment at Eb/No=3.8 dB and with no retransmission: Max-PSNR=20.12 dB.
6.31 Best decoded image using Line concealment at Eb/No=3.8 dB and with no retransmission: Max-PSNR=20.29 dB.
6.32 Worst decoded image using Line concealment at Eb/No=3.8 dB and with a maximum of one retransmission: Min-PSNR=15.43 dB.
6.33 Worst decoded image using Line concealment at Eb/No=3.8 dB and with a maximum of two retransmissions: Min-PSNR=16.53 dB.
7.1 Source encoder/decoder components.
7.2 Encoder block diagram based on separation of the ZIR and ZSR.
7.3 Decoder block diagram.
7.4 Organization of residual codebooks for five predictors.
7.5 Organization of codebooks into a global codebook.
7.6 Codebook C histogram for VB 2D-CELP coded "lena" with two block sizes.
7.7 Codebook C histogram for VB 2D-CELP coded "boat" with two block sizes.
7.8 Predictors histogram for VB 2D-CELP coded "lena" with two block sizes 4 x 4 and 2 x 2.


7.9 Predictors histogram for VB 2D-CELP coded "boat" with two block sizes 4 x 4 and 2 x 2.

7.10 Predictor distribution for "lena": the five regions from white to black represent the five predictors; the high-activity region (block size 2 x 2) is represented by gray level 0.
7.11 Predictor distribution for "lena": the five regions from white to black represent the five predictors; the low-activity region (block size 4 x 4) is represented by gray level 0.
7.12 Predictor distribution for "boat": the five regions from white to black represent the five predictors; the high-activity region (block size 2 x 2) is represented by gray level 0.
7.13 Predictor distribution for "boat": the five regions from white to black represent the five predictors; the low-activity region (block size 4 x 4) is represented by gray level 0.
7.14-7.22 Spatial distribution of each of the predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

7.23 Spatial distribution of a predictor used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.
7.24 Residual codebook organization so that only N code-vector indices are addressed: the code-vector index entropy codebook is of size N.
7.25 Mean PSNR performance comparison for the ZIR scheme and the GOB scheme in the case of no retransmission (L = 0).
7.26 Effect of the variation of the retransmission number L on the PSNR performance of the ZIR scheme and the GOB scheme.
7.27 Minimum PSNR obtained over the 25 decoded images using the ZIR scheme and different maximum numbers of retransmissions L.
7.28 Minimum PSNR obtained over the 25 decoded images using the GOB scheme and different maximum numbers of retransmissions L.
7.29 ZIR scheme PSNR variation at Eb/No=3.8 dB.
7.30 GOB scheme PSNR variation at Eb/No=3.8 dB.
7.31 Best decoded image using the GOB scheme at Eb/No=3.8 dB and with no retransmission: Max-PSNR=19.96 dB.
7.32 Worst decoded image using the ZIR scheme at Eb/No=3.8 dB when type-II data is sent with no retransmission: Min-PSNR=16.93 dB.
7.33 Worst decoded image using the ZIR scheme at Eb/No=3.8 dB when type-II data is sent with a maximum of one retransmission: Min-PSNR=23.07 dB.
7.34 Worst decoded image using the ZIR scheme at Eb/No=3.8 dB when type-II data is sent with a maximum of two retransmissions: Min-PSNR=29.01 dB.
7.35 Mean PSNR performance as a function of data FER: comparison between the ZIR scheme and the GOB scheme.
7.36 Comparison between Minimum and Maximum PSNR for the ZIR scheme, represented as a function of data FER for L = 0.
7.37 Average image transmission delay normalized by the minimum delay obtained using the GOB scheme.
7.38 Image transmission delay at Eb/No=3.8 dB normalized by the maximum average image transmission delay: comparison between the GOB scheme and the ZIR scheme.
7.39 Simplified transmitter block diagram.

7.40 Average PSNR computed over 40 decoded images for the different cases of study considered.
7.41 Average image transmission delay normalized by the minimum delay obtained using the GOB scheme, minimum volume of image coded data and a maximum of 3 retransmissions.
7.42 Enlarged window of reconstructed "lena" at a channel SNR of 3.8 dB and L = 3: GOB scheme in the absence of concealment, source bit rate=0.577 bpp, PSNR=13.977 dB.
7.43 Enlarged window of reconstructed "lena" at a channel SNR of 3.8 dB and L = 3: GOB scheme, source bit rate=0.577 bpp, PSNR=28.925 dB.
7.44 Enlarged window of reconstructed "lena" at a channel SNR of 3.8 dB and L = 3: ZIR scheme in the absence of concealment, source bit rate=0.573 bpp, PSNR=15.543 dB.
7.45 Enlarged window of reconstructed "lena" at a channel SNR of 3.8 dB and L = 3: ZIR scheme, source bit rate=0.573 bpp, PSNR=32.669 dB.



List of Tables

3.1 Influence of initial conditions on final predictors.
3.2 MSPE for different sets of predictors.
4.1 Examples of QoS requirements.

6.1 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, no concealment, L = 0.
6.2 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, GOB concealment, L = 0.
6.3 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, Line concealment, L = 0.
6.4 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, no concealment, L = 0.
6.5 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, GOB concealment, L = 0.
6.6 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, Line concealment, L = 0.
6.7 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, no concealment, L = 1.
6.8 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, GOB concealment, L = 1.
6.9 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, Line concealment, L = 1.
6.10 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, no concealment, L = 1.


6.11 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, GOB concealment, L = 1.
6.12 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, Line concealment, L = 1.

List of Acronyms and Abbreviations

ACK       Positive acknowledgment
AR        Autoregressive filter
ARQ       Automatic repeat request
bpp       bits per pixel
bps       bits per second
BER       Bit error rate
CDMA      Code division multiple access
CELP      Code excited linear predictive quantization
2D-CELP   Two-dimensional code excited linear prediction
CRC       Cyclic redundancy check
DCT       Discrete cosine transform
DPCM      Differential pulse code modulation
DS        Direct sequence
EOB       End of block
FDMA      Frequency division multiple access
FEC       Forward error correction
FLC       Fixed length code
FPLMTS    Future public land mobile telecommunication systems
GBN       Go-Back-N ARQ
GE        Gilbert-Elliott
GOB       Group of blocks
JPEG      Joint photographic experts group specification for still image coding
kbps      kilobits per second
LBG       Linde-Buzo-Gray VQ design algorithm
MAI       Multiple access interference
Mbps      Megabits per second
MDS       Maximum distance separable
MMSPE     Minimum mean squared prediction error
MPEG      Specification for video coding suitable for TV and video quality
MSE       Mean squared error
MSPE      Mean squared prediction error
MSRE      Mean squared reconstruction error
NAK       Negative acknowledgment
PC        Predictive coding
PCN       Personal communication networks
PCS       Personal communication systems
PSNR      Peak signal to noise ratio
PVQ       Predictive vector quantization
QPSK      Quadrature phase-shift keying modulation
RS        Reed-Solomon coding
RS/CC     Reed-Solomon/convolutional concatenated coding
SAW       Stop-and-wait ARQ
SNR       Signal to noise ratio
SR        Selective-repeat ARQ
TC        Transform coding
TDMA      Time division multiple access
UMTS      Universal mobile telephone systems
VB        Variable block-size
VLC       Variable length code
VLSI      Very large scale integration
VQ        Vector quantization
WLAN      Wireless local area networks
ZIR       Zero-input response
ZSR       Zero-state response

Chapter 1

Introduction

1.1 The future of wireless multimedia communications

It is anticipated that the future will witness the next revolution in telecommunications technology, reaching the ultimate goal of ubiquitous connectivity at any time, anywhere, with anyone, and with any medium. In recent years, the communications sector has been one of the few constantly growing sectors in industry, and a wide variety of new services have been created. Two of the areas that have experienced massive growth are multimedia communications and wireless communications. Based on this growth, there is an extensive demand for systems that combine these two areas to provide wireless multimedia communications. Because of the complex technology and the potential market, research challenges abound in all the required technologies, including terminals, communications transceivers, source coding schemes and networks. Powerful wireless multimedia systems are being discussed, planned and built, and it is immediately apparent that this large field will open space for as yet unknown applications. This work is primarily concerned with image coding and transmission over noisy channels, a small subset of wireless multimedia communications.

1.1.1 The future of wireless communications

In the last few years, there has been extensive discussion of future Personal Communication Systems (PCS) [1], [2], [3]. The existing (2nd generation) mobile communications systems mainly support voice services and are limited to relatively low-speed services, such as



facsimile, and data at bit rates of several kbps. With the ever-increasing demand for PCS and Wireless Local Area Networks (WLAN), the possibilities of transmitting a wide range of data over wireless channels are increasing. Future (3rd generation) systems will not be limited to speech services: a wide range of service types is expected, including full-featured voice services, circuit and packet data, high-resolution images and eventually video. ITU-R requirements imply that 3rd generation systems must be capable of transmitting up to 2 Mbps. In addition, 3rd generation services must be of the same quality as those of fixed networks [3].

There have been many proposals for systems supporting PCS, known as Personal Communication Networks (PCN), Future Public Land Mobile Telecommunication Systems (FPLMTS) and Universal Mobile Telephone Systems (UMTS). One of the most widely discussed topics has been the access area, and the leading candidate for 3rd generation wireless communication systems is Direct-Sequence Code Division Multiple Access (DS-CDMA) [4], [5]. CDMA has been shown to be a promising technique for personal and mobile communications. To accommodate the rapidly increasing demand for PCS and the users of portable computers under a limited spectrum, CDMA has been shown to be a practical and promising alternative multiple access method to both Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA) [6], [7]. CDMA's main attractive features and advantages include high system capacity, soft hand-off, multi-path mitigation, interference suppression and low-power transmission. Its principle is that all system users have access to the entire bandwidth simultaneously. Because the system is interference limited, the number of users that can share the same spectrum while still maintaining acceptable Quality of Service (QoS) is determined by the interference generated by the remaining users.
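This principle can be illustrated with a few lines of baseband simulation: each user spreads BPSK data with its own pseudo-random chip sequence, the signals add on the channel, and a receiver despreads by correlating with the desired user's code. All parameters below (number of users, spreading factor, random codes) are illustrative only, not those of the systems discussed in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_bits, spread = 4, 8, 64            # hypothetical parameters

codes = rng.choice([-1, 1], size=(n_users, spread))  # one chip sequence per user
bits = rng.choice([-1, 1], size=(n_users, n_bits))   # BPSK data, +/-1

# Each user spreads its bits over the full bandwidth; the signals add on the channel.
channel = sum(np.kron(bits[u], codes[u]) for u in range(n_users))

# Receiver for user 0: correlate each chip block with user 0's code.
rx = channel.reshape(n_bits, spread) @ codes[0] / spread
recovered = np.sign(rx)                        # matches bits[0] when MAI is small enough
print(recovered.tolist())
```

With random codes, the other users appear as multiple access interference (MAI) of zero mean after despreading, which is why recovery succeeds when the spreading factor is large relative to the number of users.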
CDMA has been proposed and developed in cellular systems, mobile satellite networks and Personal Communication Networks (PCN) [8]. In particular, PCN based on digital wireless technologies are expected to play a significant role in next-generation telecommunication systems.

1.1.2 Wireless multimedia communications

The rapid growth of voice traffic over cellular networks and the continuing rapid growth of the population of wireless users have led to a greater demand for all types of bandwidth-intensive personal communications services, including wireless video, wireless image transmission, and multimedia in general. With recent investigations, the provision of multimedia applications is becoming one of the main interests in CDMA personal communication systems [9]. To transmit multimedia traffic on a CDMA system, the wireless access has to be as similar as possible to wired access and satisfy different QoS requirements for a wide range of services characterized by different bit rates and statistical behavior of the traffic sources [10], [11]. This demand for wireless multimedia communications, the effective use of bandwidth resources and the need to understand all the communication issues involved have been the driving forces behind the extensive academic and industrial research in this area.

In recent years, there has been increased research activity in the field of wireless video in general and wireless image transmission in particular. Examples of image services are the transmission of image bulletins from a wireless environment and remote monitoring. These services would make it possible to quickly ascertain the conditions at a remote site whenever necessary. The image bulletin service would capitalize on the mobility of portable terminals, while the monitoring service would send images to a control center from fixed cameras, utilizing the freedom of equipment installation allowed by wireless communications. Moreover, the potential of mobile multimedia communications for use in disaster prevention systems has drawn considerable interest in recent years. In transmitting information from mobile terminals to a control center, it would be valuable to be able to transmit both images and voice simultaneously. By using multiple communication equipment, this would enable the center to send instructions to a mobile terminal during the transmission of image data. A multimedia support service would facilitate transmission of information from fixed equipment at the center to mobile terminals.
This service would enable mobile terminals in the field to download information from the center as needed. At the same time, field users could obtain assistance through voiced instructions from center users while both look at the same screen-displayed information. On the other hand, a multimedia information distribution service would deliver multimedia information, such as voice and image data stored in the center's database, to mobile terminals.

For multimedia transmission to be feasible in a wireless environment, different error control schemes must be considered for each service. For example, voice communication requires real-time qualities, though some degree of noise is tolerable. Data transmission, on the other hand, must be error-free. Image transmission requirements depend on the service


that specifies whether it is delay-constrained, or whether only high quality is of importance. The application of the proper error control to each class is hence essential over the wireless channel, which is known to be bursty in nature [3].

1.2 Image coding and wireless transmission

1.2.1 Requirements for image wireless communications


With the fast-growing business of wireless access to multimedia information, visual communication over wireless channels has become an important service in multimedia communications. Spectrum is always at a premium and, as demand takes off, there is a need for high signal compression for image transmission over wireless channels. However, due to the limitations of power and complexity, and the high transmission error rates, maintaining acceptable visual quality of low-bit-rate image signals sent through noisy channels necessitates codecs that not only provide good compression, but also offer robustness, fidelity and low complexity. As Figure 1.1 shows, most conventional approaches to image and video communications adopt a modular design with two stages: compression coding (or source coding), and channel coding.

Fig. 1.1 Conventional coding strategy.

Two goals are generally considered in the design of image compression techniques:

- Maximizing compression, or equivalently minimizing the bit rate.


- Maximizing the quality (acceptability and/or intelligibility) of the reconstructed data.

The primary role of the compression coder is to pack the maximum information into the smallest signal. The main problem with this approach is that, the higher the compression, the wider the range of decoded data that even a single bit in error may corrupt. Therefore, source coding error robustness and resilience are important issues for image transmission over wireless channels. In addition to this, complexity should also be taken into consideration.

In order to use these compression schemes over wireless channels it is necessary to have some form of channel coding. This is done using a variety of error correction, error detection, repeat transmission (ARQ) and hybrid techniques. Powerful channel error control is particularly important when reliable transmission is the aim. The main problem with channel coding is that it adds redundancy back to the compressed signal. For good-quality channels with a low error rate, this redundancy is not significant. However, for lower-quality channels the necessary redundancy becomes much larger. Appropriate channel coding should hence be employed for the purpose of meeting the transmission requirements without excessively reducing the effectiveness of the compression.

1.2.2 Source and channel coding dilemma

A cellular mobile radio channel presents a dilemma in the selection of source coding and channel coding schemes. In general, wireless channels suffer from severe impairments that cause transmission errors. Every wireless channel has a bandwidth constraint which limits the transmitted data rate. Consequently, in order to increase the communication system capacity it is essential to maximize the information transfer by removing the source redundancy using a source coding algorithm. However, the use of efficient source coders always leads to extra sensitivity to transmission errors.
Therefore, the decision on what to compress in wireless communications must be made judiciously, with the objective of providing a balance between the source and channel coding schemes employed in order to meet the targeted performance requirements.
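Reconstruction quality in this work is reported as Peak Signal-to-Noise Ratio (PSNR), defined for 8-bit images as 10 log10(255^2/MSE) dB. A minimal sketch of the metric, using a synthetic 16 x 16 test array rather than any of the dissertation's images:

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images (higher is better)."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.full((16, 16), 128, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 138                     # a single 10-level pixel error
print(round(psnr(img, noisy), 2))     # -> 52.21
```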


1.2.3 Advantages of error resilience

Error resilience offers the user several advantages. First, in good channel conditions, an improved quality can be expected compared to a scheme with channel coding, as the redundant information that would be added by channel coding is saved. Although a channel-coded scheme may offer improved quality at intermediate channel conditions, the error-resilient scheme may offer better quality in adverse conditions where the channel-coding scheme fails. When error resilience and channel coding are intelligently combined it is possible to achieve high system performance, especially if reliability is of major concern.

1.3 Research Aims

1.3.1 General objective

The primary objective of this work is to provide reliable transmission of low-bit-rate coded images over personal communications networks employing CDMA access. Therefore, our interest is in high-compression coding with robust design and reliable transmission over noisy channels. Generally, a high data-compression ratio and high error resilience are conflicting goals: the higher the compression rate, the wider the range of decoded data influenced by a channel error. Therefore, error robustness and resilience are important issues for an image coding scheme to be used for transmission over wireless networks. Moreover, when setting stringent constraints on reliability and maximum delay in an environment characterized by bursts of errors, not only is robust source coding required, but also powerful channel error control. The best scenarios arise when the source and channel coders interact in order to meet the transmission requirements without decreasing the system capacity. The domains of this research are error-resilient source coding and channel coding in general, and robust low-bit-rate image transmission over wireless channels in particular. Therefore, the focus of this work is twofold:

- Development of an error-resilient low-bit-rate image source coding scheme.
- Study of a source/channel coding system capable of providing reliable image transmission over wireless channels.


1.3.2 Research strategy

The incorporation of low-bit-rate image coders into emerging wireless communication systems presents problems which previous communication systems have never encountered. One of the most important of these is the degradation in image quality caused by the corruption of the transmitted image information by channel errors. Thus, on the one hand, images are transmitted in a form very sensitive to errors, while on the other hand, the channel is likely to corrupt the transmitted data. To meet the quality requirements of these systems, efficient techniques have to be employed to control the impact of channel errors on the received images. Source and channel error control techniques are hence a must in order to provide robust transmission of images over wireless channels.

A standard technique in spatial coding is the block Discrete Cosine Transform (DCT) with Huffman and run-length coding. In order to improve on the performance of spatial coding, we propose a promising analysis-by-synthesis coding technique called two-dimensional code-excited linear predictive (2D-CELP) coding, with block-adaptive prediction and variable block-size (VB) quantization. Our encoder can be used for encoding still images as well as sequences of images. The proposed technique is shown to have the potential of reducing the block effects of the DCT method while having low decoder complexity [12]. Our VB 2D-CELP coding scheme is shown to yield better image reconstruction quality and a higher compression ratio than conventional CELP methods [12]. When compared with the JPEG standard, the VB 2D-CELP scheme yields better performance in terms of image quality and low complexity. This is also true when compared with most current image coding techniques at low bit rates.
Moreover, the variable block-size coding that is implemented not only reduces the bit rate but also opens the possibility of dynamic interaction between the source and the channel for the purpose of increasing the performance of the wireless transmission system. Wireless radio channels suffer from burst errors, in which a large number of consecutive bits are lost or corrupted by the fading channel. Typically, the bit error rate (BER) of a wireless channel is several orders of magnitude higher than that of a fixed channel. Accordingly, the channel-fading effect is an obstacle, and it is desirable to design a robust image coding technique that provides resilience to the errors encountered on the wireless fading channel. It is in this vein that we propose a VB 2D-CELP coding system that implements an error-resilient coding design.
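As a toy illustration of the closed-loop predictive principle underlying such coders, the sketch below uses one fixed three-coefficient predictor and a uniform scalar quantizer; the actual VB 2D-CELP scheme instead selects among block-adaptive predictors and vector-quantizes the residual, so the coefficients and step size here are purely illustrative:

```python
import numpy as np

def encode_2d_dpcm(img, coeffs=(0.5, 0.25, 0.25), step=8):
    """Closed-loop 2D linear prediction with a uniform residual quantizer.

    The predictor uses the *reconstructed* west, north and north-west
    neighbours, so encoder and decoder stay synchronized.
    """
    h, w = img.shape
    a, b, c = coeffs
    recon = np.zeros((h, w))
    indices = np.zeros((h, w), dtype=int)  # what would be transmitted
    for i in range(h):
        for j in range(w):
            west = recon[i, j - 1] if j > 0 else 128.0
            north = recon[i - 1, j] if i > 0 else 128.0
            nwest = recon[i - 1, j - 1] if i > 0 and j > 0 else 128.0
            pred = a * west + b * north + c * nwest
            q = int(round((img[i, j] - pred) / step))  # quantized residual index
            indices[i, j] = q
            recon[i, j] = pred + q * step              # decoder's reconstruction
    return indices, recon

img = np.add.outer(np.arange(8), np.arange(8)) * 8.0 + 100.0  # smooth ramp
idx, rec = encode_2d_dpcm(img)
print(float(np.max(np.abs(img - rec))))  # bounded by half the step size (4.0)
```

Because prediction is computed from reconstructed rather than original pixels, quantization error cannot accumulate along the scan, which is exactly why a transmission error in the residual stream, by contrast, does propagate at the decoder.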


Using a robust source coder with no channel error control does not provide reliable communication in the wireless environment. Providing reliable transmission over fading channels characterized by bursts of errors requires the use of powerful channel error control. Therefore, special attention is devoted in this work to channel coding. Error correction schemes such as Forward Error Correction (FEC) have been reported to be ineffective [13], [14]. Some researchers have employed Automatic Repeat reQuest (ARQ) procedures in order to repeat the sequence of bits that are lost when fading errors are detected. However, excessive use of ARQ decreases the channel throughput and causes increased transmission delay. In order to take advantage of both techniques, hybrid ARQ techniques are considered here. The main point of interest in hybrid ARQ protocols is to achieve maximum channel error control performance with minimum redundancy and delay. One possible approach to achieving powerful channel error control is to use concatenated codes [15], [16], [17]. In particular, Reed-Solomon outer and convolutional inner concatenated (RS/CC) coding is known to be capable of providing high error-correction capability, especially when combined with an ARQ protocol. For this reason, the RS/CC concatenated coding method is considered in this work. In particular, a type-I hybrid ARQ protocol using RS/CC concatenated coding is considered, with a retransmission mechanism that can be used to achieve different transmission requirements.

In real-time services such as video and some image transmission applications, an acceptable transmission delay has to be guaranteed. In this work, special attention is devoted to this requirement. The transmission delay depends mainly on the channel error control used. Therefore, the type-I hybrid ARQ protocol is used with limited ARQ retransmissions in conjunction with limited interleaving.
As high image reproduction quality is of concern, the error control protocol is used in addition to error-resilient source decoding. For the purpose of investigating the system performance, two channel conditions are considered: a memoryless channel where errors are uncorrelated and a highly correlated quasi-static channel. Under low-delay requirements, the error-control protocol performance is studied as a function of maximum number of allowable retransmissions and interleaving. The proposed analysis is based on Markov modeling to derive performance metrics, taking into consideration the emphasized parameters. The metrics that are used to evaluate the performance of the hybrid protocol are the throughput, protocol error probability and average transmission delay.
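For intuition about truncated retransmission, the two basic metrics of such a protocol over a memoryless channel follow in closed form: with frame error rate p and at most L retransmissions (L + 1 attempts), the residual frame error probability is p^(L+1) and the mean number of attempts is (1 - p^(L+1)) / (1 - p). The sketch below holds only under the memoryless assumption; the correlated channel requires the Markov analysis developed in this work:

```python
def truncated_arq(p_frame, L):
    """Residual error rate and mean number of attempts for truncated ARQ
    over a memoryless channel (frame error rate p_frame, at most L
    retransmissions). Simplified model; numbers below are illustrative."""
    attempts = L + 1
    residual = p_frame ** attempts                       # all attempts fail
    mean_tx = (1 - p_frame ** attempts) / (1 - p_frame)  # E[min(Geom, L+1)]
    return residual, mean_tx

for L in (0, 1, 3):
    res, tx = truncated_arq(0.1, L)
    print(L, res, round(tx, 4))
```

The trade-off studied here is visible already in this toy model: each extra allowed retransmission multiplies the residual error rate by p while adding at most p^L to the mean delay.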


Prevention of error propagation is essential to increasing error resilience. After identifying the effect of channel errors on the source decoder performance, error-resilient tools are proposed to provide robustness to transmission errors. Errors left uncorrected by the channel decoder are localized and corrected by the source decoder. Within the encoded image bit-stream, sensitivity to channel errors varies according to the importance of the bits to be transmitted. If the image information to be transmitted can be classified according to error sensitivity, reasonable error control with minimum redundancy can be implemented. This can be accomplished by giving more protection to the most sensitive bits rather than using uniform protection across all bits. Providing this unequal error protection is proposed in this work as part of a joint source/channel coding scheme that operates for the purpose of providing low-delay image transmission.
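The redundancy saving can be seen with a back-of-the-envelope calculation: the overall code rate of an unequal-error-protection scheme is the information volume divided by the total coded volume, with each sensitivity class channel-coded at its own rate. The class sizes and code rates below are purely hypothetical, chosen only to illustrate the computation:

```python
def uep_overall_rate(class_bits, code_rates):
    """Effective overall code rate when each sensitivity class of the
    bit-stream gets its own channel code rate (hypothetical values)."""
    total_info = sum(class_bits)
    total_coded = sum(b / r for b, r in zip(class_bits, code_rates))
    return total_info / total_coded

# e.g. headers/side information protected strongly, residual indices lightly
bits = [2_000, 8_000, 40_000]       # information bits per class (hypothetical)
rates = [1 / 2, 2 / 3, 4 / 5]       # channel code rate per class (hypothetical)
print(round(uep_overall_rate(bits, rates), 3))  # -> 0.758
```

Protecting everything at the strongest rate 1/2 would give an overall rate of 0.5; concentrating the redundancy on the small sensitive classes keeps the overall rate noticeably higher for the same protection of the critical bits.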

1.4 Structure and contributions of the dissertation

This research addresses three issues: the proposal of a new analysis-by-synthesis coding scheme, performance evaluation of a truncated type-I hybrid ARQ protocol using concatenated coding in DS-CDMA Rayleigh fading channels, and robust transmission of images over the wireless channel using error-resilient source coding. The contributions of this research can be summarized as follows:

1. Development of an efficient analysis-by-synthesis image coding scheme based on block-adaptive prediction and variable code-vector size.

2. Transmission of VB 2D-CELP coded images over CDMA Rayleigh fading channels. To meet high reliability requirements, we investigate type-I hybrid ARQ error control in a concatenated Reed-Solomon/convolutional coding (RS/CC) scheme.

3. In an effort to develop a more efficient coding system, we propose error-resilient tools that are implemented in the source coding system to mitigate the effect of uncorrected channel errors and to limit error propagation.

4. For the purpose of providing low-delay transmission, a truncated RS/CC type-I hybrid protocol is used and its performance is analyzed and evaluated.


5. Finally, for the purpose of providing low-delay transmission, a coding control technique that dynamically adapts the source coder rate and the channel error control to the channel condition is proposed, based on the error sensitivity of the coded image bit-stream.

An outline of the remainder of this dissertation is as follows:

Chapter 2 provides a review of fundamental image compression coding principles with an emphasis on transmission over wireless channels. This chapter reviews the main approaches adopted to provide wireless image communication.

Chapter 3 describes our proposed analysis-by-synthesis coding scheme. Design issues are detailed and results regarding coding performance are provided.

Chapter 4 considers the wireless transmission environment under consideration. It argues the need for channel error control, reviews the conventional approaches to it, and presents our choice of error-control protocol based on image transmission requirements.

Chapter 5 studies the performance evaluation of the hybrid protocol for the case of a memoryless channel as well as a highly correlated quasi-static channel. The study considers the special case of delay-limited applications. It investigates a modified version of the control protocol and approaches to reduce transmission delay. In particular, performance analysis of the truncated protocol is performed and results are given for two CDMA Rayleigh fading channels.

Chapter 6 investigates transmission of VB 2D-CELP coded images over the noisy channels considered. It describes aspects relating to the design of a robust coding scheme. The performance of error-resilient coding is evaluated and image transmission results are compared under different channel conditions and QoS requirements.

Chapter 7 covers the design of a combined source/channel coding scheme for robust low-delay transmission through coding with dynamic rate control and unequal error protection.

Chapter 8 presents a summary of this work and some proposals and ideas for future research.

Chapter 2
Wireless Image Communications

In order to design error-resilient image coding systems, it is first necessary to understand some of the basic and most common techniques used for compression coding, as well as current approaches for transmitting images over noisy channels. It is in this context that this chapter presents a brief review of image coding techniques in general, and of coding for transmission over noisy channels in particular.

2.1 Image coding techniques

2.1.1 Digital communication systems and compression

In recent years, motivated by the advantages of digital technology for reliable transmission and efficient storage, and by the decreasing cost of available Very Large Scale Integration (VLSI) technology, digital transmission of images and video has become increasingly popular. Efficiency of transmission and storage makes it very important to compress the digitized signals, that is, to reduce the amount of information needed to reproduce the signal at the decoder. While, arguably, transmission and storage capacity will become cheaper in the future (through improvements in wide-band communication technology), the increase in demand for digital communication will still make it necessary to compress the signals to be transmitted. This is particularly important given that wireless communication and multimedia are the leading trends in future technology, and that capacity and efficient bandwidth utilization are of major concern. Two important classes of compression techniques are lossy and lossless coding. Due to


the compression limitation of lossless compression, we have to resort to lossy coding. In lossy compression, the decoded signal is an approximation to the original signal fed into the encoder. While lossy compression would obviously be unacceptable for data communication, signal transmission is different in that judiciously administered "losses" to the input signal may be acceptable. In this context, acceptable losses are those that a user would not be able to detect (in the case of low-compression systems) or those that the user would not object to, even if they are perceptible (for lower quality, higher compression systems). As will be seen later, the source coding technique used in this work combines lossy and lossless coding for the purpose of increasing the system efficiency in terms of compression gain.

The interest in compression for both images and video has motivated international standardization efforts, such as JPEG for image compression [18], MPEG-1 [19], MPEG-2 [20], and H.261 and H.263 for video compression over a wide range of applications. An upcoming standard, MPEG-4, addresses the new demands that arise in a world in which more and more audiovisual material is exchanged in digital form. These needs go much further than achieving even more compression and even lower bit rates. The new MPEG-4 standard aims not only to achieve efficient storage and transmission, but also to satisfy other needs of future image communication users [21]. To reach this goal, MPEG-4 will be fundamentally different from its predecessors and will utilize new techniques to obtain improved compression as well as added functionalities such as error resilience [22] and scalability [23].

2.1.2 Conventional techniques

Current algorithms for image coding generally employ one or more of the following techniques: Transform Coding (TC), Predictive Coding (PC), Sub-band and Wavelet coding, and Vector Quantization (VQ).
Among the most widely used image coding techniques is transform coding using the DCT. The block-DCT, currently one of the most common transforms, has been adopted by most image and video compression standards, including the most popular image coding technique, the block-DCT based JPEG standard for still image compression. The main problem with the DCT is that at low bit rates (high compression) it suffers from block artifacts as a result of quantization errors in coding the coefficients.
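The blocking mechanism described above can be sketched numerically. The snippet below is an illustrative toy, not JPEG itself: the 8x8 orthonormal DCT-II matrix and the uniform quantization step are assumptions made for the demo. Coarse quantization of block-DCT coefficients visibly increases the reconstruction error, which concentrates at block boundaries.

```python
import numpy as np

def dct2_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def block_dct_roundtrip(img, step):
    # Quantize 8x8 block-DCT coefficients with a uniform step, then
    # reconstruct; large steps produce the block artifacts the text describes.
    C = dct2_matrix(8)
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], 8):
        for c in range(0, img.shape[1], 8):
            blk = img[r:r+8, c:c+8].astype(float)
            coef = C @ blk @ C.T                 # forward 2D DCT
            coef = np.round(coef / step) * step  # uniform quantization
            out[r:r+8, c:c+8] = C.T @ coef @ C   # inverse 2D DCT
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(16, 16))
    fine = block_dct_roundtrip(img, step=1)
    coarse = block_dct_roundtrip(img, step=64)
    print(np.abs(fine - img).mean(), np.abs(coarse - img).mean())
```

Because the transform is orthonormal, the reconstruction error equals the coefficient quantization error, so growing the step trades fidelity for compression exactly as described.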


The Discrete Wavelet Transform (DWT) has recently emerged as a powerful technique for image compression because of its flexibility in representing images and its ability to adapt to the characteristics of the human visual system [24], [25]. The advantage of the DWT over the DCT lies in the fact that the DWT projects high-detail image components onto shorter basis functions with higher resolution, while low-detail components are projected onto larger basis functions, which correspond to narrower subbands, establishing a trade-off between time and frequency resolution. In addition, wavelet transform coding provides superior image quality at low bit rates, since it is free from both blocking effects and mosquito noise.

As Vector Quantization is known to be capable of achieving near rate-distortion performance, it has been widely applied to image coding. Extensive research in improving its performance and exploiting new architectures has been carried out and is still underway to take advantage of VQ and other methods [26], [27]. Among these techniques is CELP [28], one of the most successful speech coding schemes. In image coding, 2D-CELP has been shown to be a promising method [29], [30]. Recently, a region-adaptive CELP image coder for still images has also yielded good performance on synthesized images [31].

2.1.3 Block-based image coding

Whether combined with transform coding, predictive coding, Vector Quantization or a combination of these, block-based image coding techniques have been extensively used in many applications. This is motivated by the fact that block-based coding allows efficient, adaptive, and parallel processing of picture information.
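Returning to the wavelet transform discussed above, a minimal one-level 2D Haar decomposition illustrates the subband split. This is a sketch with the simplest possible wavelet, not the filters any particular codec uses: the LL band carries the low-detail approximation and LH/HL/HH carry the details.

```python
import numpy as np

def haar2d_level(img):
    # One level of the 2D Haar wavelet transform: the image is split into
    # a low-detail approximation (LL) and three detail subbands (LH, HL, HH).
    a = img.astype(float)
    # Transform along rows (pairwise sums and differences).
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # Transform along columns.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

if __name__ == "__main__":
    img = np.arange(64).reshape(8, 8)
    ll, lh, hl, hh = haar2d_level(img)
    # The transform is orthonormal, so energy is conserved across subbands.
    print(np.allclose((img.astype(float) ** 2).sum(),
                      sum((s ** 2).sum() for s in (ll, lh, hl, hh))))
```

On smooth images most of the energy lands in LL, which is what makes coarse quantization of the detail subbands cheap in rate and free of block artifacts.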
In a block-based coding system (Figure 2.1), image information is first structured into many small blocks, which are processed through transform or predictive coding, quantized, and coded into a sequence of bit-streams that contain the compressed image data as well as synchronization overhead and other side information. Since variable-length coding is usually used in block-based compression schemes, the encoded bit-stream is vulnerable to transmission impairments.

2.1.4 Advanced techniques

The biggest limitation of conventional block-based methods is the blocking effect, which results in unacceptable image quality at very low bit rates. Therefore, in order to further

Fig. 2.1 Block-based image coding. [Figure: the image is split into spatial blocks, passed through block coding and multiplex coding to produce variable-length coded data for the channel; the receiver performs multiplex decoding and block decoding.]

compress the image signal, additional measures have to be taken. Among them are the promising 2D content-based methods. The basic idea of 2D content-based methods is to analyze the content of the image signal and compress it according to its particular properties. In current research, the content being considered includes edges of objects, areas with high energy, background and foreground, special-interest areas such as faces, or combinations of these. According to the type of content used, these 2D content-based methods are sometimes called object-based, region-based or segmentation-based.

Conventional coding methods assume that coding and decoding processes are carried out automatically in real time. This means that a user cannot interact with the coding process. In contrast, if we assume a non-real-time communication channel, a new coding paradigm which enables interactive operations on the coding process can be considered. This is a great change of view in the field of image coding. There are several important consequences of user interaction with the coding process. One is that it allows a user to help the coding hardware with difficult processes, such as feature extraction, for which current image processing algorithms do not have sufficient performance. More importantly, the interactive process allows users to control the contents of coding and incorporate their own creativity, individuality and intention into the coding process. In the basic framework of interactive image coding, a user interactively adjusts the values of coding parameters and edits the image before transmitting or storing it.


2.2 Wireless image communication systems

2.2.1 Performance evaluation: is comparison possible?

Image transmission services should be supported with a high wireless transmission bit rate within a wider allowable bandwidth. However, due to the limited spectrum, only a limited number of radio communication channels can be shared by mobile users. As a result, image data should be compressed before transmission in order to use the radio channel efficiently. An image compression algorithm intended for wireless applications must not only provide good compression, but also offer robustness and fidelity as well as low complexity. Most existing image compression algorithms are concerned with bit-rate reduction while improving the quality of the coded image. Many coding schemes have been studied and developed, in which Peak Signal-to-Noise Ratio (PSNR) and compression ratio are used for performance evaluation, and comparative studies of these coding schemes have been published from this viewpoint. The systems proposed assume that compressed codes are transmitted over noiseless channels and that the codes sent are correctly received. On the other hand, the requirements (e.g., in terms of quality or allowable delay) of the different services involving image transmission over wireless channels are not the same. These requirements, when combined with the particular characteristics of the image data, imply for each case a target bit rate, which is the minimum bit rate able to provide an overall acceptable quality with the present state-of-the-art image coding techniques. Hence, it is not easy to say whether or not a coding algorithm developed for a particular application and designed for a noiseless channel would be suitable for use in different applications under different channel conditions, e.g., in the wireless environment. Investigations have to be carried out in order to analyze the error sensitivity of a particular coding method.
Moreover, the same channel conditions have to be considered in order for a comparison of the performance of different coding schemes to be possible.

2.2.2 Effects of channel errors

The effect of channel errors on the coded image bit-stream varies according to the compression method used. Before discussing coding methods that are resilient to channel errors, or approaches to provide reliable or acceptable image communication over noisy channels,


an understanding of the error sensitivity of the different types of compression methods is necessary.

Transform coding

Data compression in transform coding occurs after the transform coefficients are quantized. Some of the coefficients, especially those of high spatial frequency, often have very small magnitudes and, consequently, are quantized very coarsely or omitted completely. Compression results from encoding the block of coefficients with fewer bits than the original image block, at the loss of some image fidelity through quantization. Before transmission through the channel, the coefficients may be entropy encoded for additional compression. Because of the block nature of the transform, distortions due to quantization and channel errors tend to be distributed throughout the block. If the block size is relatively small, these distortions are confined to a small portion of the image and may not be noticeable. However, these transforms tend to make edges "blocky", and since the quantization tends to weight low over high spatial frequencies, edges and areas of texture may be blurred.

Predictive coding

Predictive coding allows significant compression for highly correlated areas of the picture, where good predictions can be made. Two-dimensional predictors can offer high compression by utilizing correlations in more than one direction. However, channel errors can propagate seriously in the feedback loop; more sophisticated predictors exist which allow slightly better compression and significantly better performance over noisy channels. Channel errors are not confined to the portion of the image in which they occur: the predictive process in the decoder tends to smear the error to other parts of the image.

Vector Quantization

Vector Quantization is a method of efficiently quantizing and coding vectors that have significant correlation between components. For each vector to be coded, the best code-vector from a predefined codebook is chosen and its index is coded. The codebook is usually designed using a training algorithm performed on a training set of images.
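The smearing of a channel error by the decoder's prediction loop, noted above for predictive coding, can be demonstrated with a toy first-order DPCM decoder. The predictor coefficient, the residual signal, and the injected error magnitude are all illustrative assumptions.

```python
import numpy as np

def dpcm_decode(residuals, a=0.95):
    # First-order predictive decoder: x_hat[n] = a * x_hat[n-1] + e[n].
    x = np.zeros(len(residuals))
    prev = 0.0
    for n, e in enumerate(residuals):
        prev = a * prev + e
        x[n] = prev
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    e = rng.normal(size=200)
    clean = dpcm_decode(e)
    e_bad = e.copy()
    e_bad[50] += 10.0            # a single corrupted residual sample
    noisy = dpcm_decode(e_bad)
    err = noisy - clean
    # The error appears at n=50 and then decays only geometrically (as a^k),
    # instead of staying confined to the sample where it occurred.
    print(err[49], err[50], err[60])
```

With a predictor coefficient near one (the regime that gives good compression on correlated data), the decay is slow, which is exactly why channel errors propagate so visibly in predictive decoders.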


One of the problems with VQ is that as the dimensionality of the vector increases, the complexity of both the codebook design and the encoding procedure increases. This is why VQ schemes of interest typically rely on relatively small vector dimensions. Given the complexity of generating a codebook, typical schemes maintain a fixed one, though recent work has explored different strategies for updating the codebook while encoding the source. VQ has the advantage that it can be used to provide fixed-length codewords, which are well suited for error-resilience purposes. However, most good VQ schemes use some form of entropy coding in addition to VQ. Therefore, the main problem that VQ exhibits in the presence of channel errors is the extreme sensitivity of variable-length codewords to channel errors, as will be discussed in detail later.

2.2.3 Preventing channel errors

When transmitting images encoded by algorithms designed for noiseless channels over a wireless channel, the reconstructed image quality can be substantially degraded by channel errors. This is due to the fact that the output of the encoder contains less redundant information than the original images and is more sensitive to channel errors. Thus, it is important to combat the degradations introduced in the image by channel errors. This is usually done by one or more of the following three methods:

- applying error-detection and/or error-correction methods;
- modifying the source coder so that it is more robust to channel errors;
- concealing the effect of errors from the received, corrupted data.

The first method may require a large overhead, since it adds redundancy to the data; hence, a proper choice of the channel error control strategy has to be made. The second method requires modifications to a source encoder designed for a noiseless channel to make it more error-resilient. The third method can be used even if the first two methods are utilized for combating errors, as it does not increase the total bit rate.

2.2.4 Approaches for image transmission over wireless channels

Using one or more of the above-mentioned methods, many coding schemes have been proposed for image transmission over mobile channels in specific wireless transmission environments, often with a predefined multiple-access technique [32], [33], [34], [35], [36], [37], [38], [13], [39], [40], [41]. Many of the schemes investigated deal with spatial coding. In fact, spatial coding is known to be capable of providing robustness to transmission errors. It has even been proposed not to use inter-frame coding of image sequences, but rather intra (spatial) coding, to prevent error propagation. As the standard technique in spatial coding is the block-DCT with Huffman and run-length coding, many authors have investigated the deployment of the JPEG codec for image transmission over wireless channels [36], [42], [43], [44], [37]. Others have employed subband coding schemes [45], [38], [40], [46], discrete-wavelet-transform based systems [47], [33], [48], [49], fractal-based systems [50], or other modified discrete-cosine-transform systems [39], [51], [52]. A common feature of most previously proposed systems is that the source codec is not developed specifically for transmission over hostile fading channels. Generally, a coding technique that yields good reconstructed image quality with a high compression ratio over noiseless channels is selected, and the source codec parameters are chosen to match the wireless channel error control technique. Despite the high performance that source coders achieve on noiseless channels, the wireless channel fading effect is unavoidable. Therefore, it is desirable to design robust source coding techniques that provide resilience to channel errors and satisfy the transmission requirements of the wireless environment. Such source coding should be designed with the wireless channel impairments and the image data characteristics taken into consideration; once designed, an appropriate channel error control technique that matches the bit-stream to be transmitted can be implemented. It is in this vein that we consider both source error control and channel error control.
We propose an unequal-error-protected, error-resilient analysis-by-synthesis coding scheme for use over wireless networks with DS-CDMA multiple access. It is a well-known fact that, in general, the more efficient the source coding is, the more sensitive it will be to channel noise, unless some corrective measures are taken [53], [54], [55], [56], [57], [58]. Many of the popular image coding techniques, especially when combined with variable-length codes to yield high compression ratios, have been shown to be very sensitive to random channel errors [59], [60], [61], [56]. This error sensitivity is even more severe in wireless channels [14], [45], [37], [39].


This has been a motivation for research in developing error-resilient source coding schemes for still images as well as video sequences. Error-resilient source coding techniques mitigate the effects of channel errors by restricting their spatial and temporal extent. In one technique, the receiver alerts the channel encoder that some of the previously sent information was received in error; the source encoder then tries to minimize the effect of these errors by intelligently coding the subsequent information [62]. This method can cause excessive delay at both the encoder and the decoder, and also results in complex source coding and decoding algorithms. Another approach is the design of a source coder which is inherently robust to channel errors. To obtain the best image quality, many state-of-the-art coding systems utilize variable-rate coding, such as entropy coding and run-length coding in VQ-based schemes. In fact, variable-length coding provides better compression than fixed-rate coding. However, VLCs are highly susceptible to bit errors, often resulting in catastrophic errors [61], and therefore require complex re-synchronization control. Another way to provide error resilience is to use fixed-rate VQ. With VQ, there is also the question of how to assign the indices so that if an error occurs, the signal maps to a vector which is close to the code-vector that it should have been mapped into. Pyramid VQ in conjunction with subband coding has been proposed in [63], [32], with an error-resilient Predictive Vector Quantization (PVQ) codeword indexing scheme, to form a robust video codec that works even under poor channel conditions. An alternative way to provide robust source codecs is to use layering. By using layered coding, the source coder can anticipate congestion due to retransmission requirements under severe channel conditions. Layering allows the classification of the source information to be transmitted into several classes of different importance.
Thus, it can achieve improved robustness by allowing low-priority information to be discarded [14], and by using unequal error protection for the different layers [45], [33], [37], [64]. Layered coding tries to distribute the errors among the less important data while protecting the more significant information. Thus, it can be effective, and should be used if no extra delay and complexity are introduced at the source coder. Therefore, not all state-of-the-art source coding algorithms will be efficient for use with layering. The coding algorithm has to meet a number of performance criteria, among which are good rate-distortion performance and low complexity.
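The catastrophic sensitivity of variable-length codes discussed above is easy to reproduce with a toy prefix code (the code table and message here are invented purely for the demonstration): a single flipped bit changes how every subsequent codeword is parsed.

```python
# Toy variable-length (prefix) code. A single flipped bit can shift the
# parsing of all following codewords -- the "catastrophic" behaviour
# described in the text. Code table and message are made up for the demo.
CODE = {"a": "0", "b": "10", "c": "11"}
DECODE = {v: k for k, v in CODE.items()}

def encode(symbols):
    return "".join(CODE[s] for s in symbols)

def decode(bits):
    # Greedy prefix decoding: emit a symbol whenever the buffer matches.
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return out

if __name__ == "__main__":
    msg = list("abcabcab")
    bits = encode(msg)
    # Flip the fourth transmitted bit.
    flipped = bits[:3] + ("1" if bits[3] == "0" else "0") + bits[4:]
    print(decode(bits))
    print(decode(flipped))
```

This loss of codeword alignment is why VLC-based schemes need periodic re-synchronization markers, and why the fixed-length indices of plain VQ are attractive for error resilience.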


2.3 Summary

This chapter presented a brief review of some of the most common image coding techniques, with an emphasis on existing techniques for coding of images over noisy channels. A basic review of the effect of channel errors on the basic compression techniques that can be used for image transmission over noisy channels has been provided. Approaches to avoid and mitigate the effects of channel errors on coding performance have also been summarized.

Chapter 3
Analysis-by-Synthesis Coding System

3.1 Introduction

Spatial coding techniques are required both for still picture transmission and for periodic intra-frame coding in time-varying image transmission. The two main classes of techniques are predictive coding and transform coding. Transform coding methods are prone to block effects and "mosquito noise" in blocks containing high-contrast edges. This is particularly true for the DCT-based JPEG standard.

3.2 An overview of JPEG

The JPEG (Joint Photographic Experts Group) standard was developed under the auspices of ISO (ISO 10918-1 JPEG Draft International Standard) and CCITT (CCITT Recommendation T.81), and supports both lossy and lossless compression. The lossy methods are based on the Discrete Cosine Transform. The standard specifies four modes of operation: sequential, lossless, progressive, and hierarchical encoding [65], [18]. The progressive and hierarchical modes allow for decompression of a partially received signal. Even though this standard was mainly developed for still images, it is also used for video transmission by providing intra-frame compression only (often referred to as motion JPEG). Even though intra-frame compression provides much lower compression ratios for video than combined intra- and inter-frame compression, it has many other advantages, particularly as far as


error resilience is concerned, because of the independence between frames. Therefore, the deployment of the JPEG standard has been extensively studied for communications over wireless channels. The JPEG baseline system uses a combination of transform and variable-length coding on a block-based structure. It uses the DCT, zig-zag scanning, run-length coding and Huffman coding to produce variable-length data for each block. Further compression is achieved by applying 1D-DPCM to the DC coefficients of the transform. The image dimensions, quantizer and Huffman code tables are transmitted first as header information. The variable-length data blocks are then transmitted sequentially, with an occasional synchronization codeword (known as a restart interval in JPEG).

3.2.1 Problems with JPEG

At low bit rates or high compression, DCT-based JPEG coded images suffer from block-like artifacts as a result of quantization errors in coding the coefficients. Because the DCT basis functions have support on disjoint rectangular blocks, quantization errors cause sharp discontinuities between adjacent blocks (block edges) in the reconstructed signal. On the other hand, the JPEG standard for image communication assumes a robust transmission channel where errors are minimized, and a design strategy in which power consumption is not an issue. It has not been designed for the channel impairments that occur over a wireless transmission link. For transmission over noisy channels, JPEG has some serious drawbacks, and providing reliable transmission necessitates sufficient error correction to correct virtually all channel errors. It has been shown that tools to combat the effect of channel errors are necessary in order to provide acceptable transmitted image quality [14]. As channel error-correction coding can result in excessive bandwidth loss, the deployment of JPEG for wireless image communication calls for advances in error-resilient coding. It is true that many alterations to the coding scheme have been proposed to overcome these problems, but our motivation is mainly in developing a coding technique that yields better performance than JPEG and provides provisions for error resilience through its inherent coding characteristics.


3.3 Analysis-by-Synthesis coding

Conventional DPCM predictive coding is not very efficient at low rates due to quantization error feedback. This can be overcome by combining Vector Quantization with predictive coding. One approach is Predictive Vector Quantization (PVQ). Originally proposed for speech coding [66], it was immediately applied to image coding when 2D-PVQ was introduced [67]. In PVQ, the Vector Quantization is embedded in the predictive feedback loop. The image to be coded is processed by blocks: the prediction performed on a block is based on the previously reconstructed ones, and the corresponding prediction-error block is applied to a vector quantizer. Emphasizing different aspects of the PVQ coder, namely codebook design, robust codebooks, computational complexity and entropy constraints, many methods and designs have been proposed [68], [69], [70]. Although PVQ has been extensively studied for image coding, it has several fundamental problems:

1. The efficiency of vector prediction decreases as the distance between the predicted samples and the vector used to form the prediction increases. Thus, a high prediction error is obtained for those samples in the predicted vector which are located far away from the input vector to the predictor.

2. Adaptation of the vector predictor according to the block statistics is difficult. Consequently, a fixed vector predictor or a few ad hoc predictors are used.

3. Residual codebook optimization for different predictors is possible only by designing a separate codebook for each predictor, and rapidly becomes a formidable task as the number of predictors is increased.

In the VQ scheme, the prediction residual vector is mapped to a best-matching vector in a given codebook. The noise introduced by quantization is amplified by the inverse filtering at the decoder, and the quality of the output tends to drop dramatically. A way of controlling the amount of noise introduced is to use an analysis-by-synthesis approach to perform coding of the prediction residual image efficiently using VQ. As illustrated in Figure 3.1, the analysis-by-synthesis approach consists of processing each code-vector in a given codebook through the synthesis filter to produce a reconstructed image vector. This reconstructed vector is then compared to the corresponding vector in


the original image. The vector minimizing the distortion measure is chosen, and its address in the codebook is transmitted to the decoder.

Fig. 3.1 Analysis-by-synthesis procedure. [Figure: each code-vector is passed through the synthesis filter; the reconstruction is compared with the original image via the distortion measure, and codeword selection outputs the symbol address of the best vector.]
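The search of Figure 3.1 can be sketched in a few lines. The example below is a simplification, not the dissertation's coder: it uses a hypothetical first-order 1D synthesis filter and a random codebook in place of the actual 2D filters and trained codebooks. Every code-vector is run through the synthesis filter, and the one minimizing the distortion to the original block wins.

```python
import numpy as np

def synthesize(residual, h, init):
    # Run a residual vector through a first-order synthesis (inverse
    # prediction-error) filter: x_hat[n] = h * x_hat[n-1] + e[n].
    x = np.empty(len(residual))
    prev = init
    for n, e in enumerate(residual):
        prev = h * prev + e
        x[n] = prev
    return x

def analysis_by_synthesis(target, codebook, h, init=0.0):
    # Try every code-vector, synthesize it, and keep the index whose
    # reconstruction is closest to the target block (squared error).
    best_idx, best_err, best_rec = -1, np.inf, None
    for i, cv in enumerate(codebook):
        rec = synthesize(cv, h, init)
        err = float(((rec - target) ** 2).sum())
        if err < best_err:
            best_idx, best_err, best_rec = i, err, rec
    return best_idx, best_rec

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    codebook = rng.normal(size=(16, 4))           # 16 residual code-vectors
    target = synthesize(codebook[7], 0.9, 0.0)    # block generated by entry 7
    idx, rec = analysis_by_synthesis(target, codebook, h=0.9)
    print(idx)   # recovers index 7: zero distortion for the generating entry
```

The key point the sketch captures is that the distortion is measured in the reconstructed domain, after the synthesis filter, rather than on the raw residual, which is what keeps the quantization noise under control.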

In this work, we are interested in the analysis-by-synthesis method proposed in [29], which extends the successful CELP approach for speech coding [28]. The analysis-by-synthesis method was also presented in [71] as a way of controlling the amount of noise introduced by the VQ. Two-Dimensional Code Excited Linear Prediction (2D-CELP) coding is an analysis-by-synthesis method that combines the advantages of predictive coding with VQ. Because of this combination of PC with VQ, 2D-CELP belongs to the class of recursive VQ. 2D-CELP also belongs to the class of adaptive VQ, due to its block-adaptive prediction feature. A predominant approach in predictive coding of images has been to develop predictor structures that are fixed and therefore poorly responsive to image non-stationarities. In order to overcome these limitations, it is desirable to consider adaptive predictors, which can help to reduce the transmission rate or to achieve better picture quality. Whether used with DPCM, CELP coding or PVQ, the underlying adaptation techniques vary from simple switching of predictors [72], [73] to the use of special algorithms to adapt the predictor coefficients [74], [75], [76]. The synthesis filter in 2D-CELP is the inverse prediction-error filter, which uses an adaptive switched predictor. In the coder proposed in [30], an adaptation of the scalar predictor to each block is proposed, and the prediction coefficients are quantized before being transmitted to the decoder. In [29], 2D-CELP was reported to give good performance in comparison with DCT coding at low rates, although the prediction algorithm did not


give good prediction on edges of certain orientations. Recently, in an attempt to improve the performance of the promising CELP coding for still images, a two-dimensional linear prediction model has been proposed in [31]. This low-bit-rate adaptive CELP image coder has also been reported to yield good performance in comparison to a JPEG DCT algorithm. In this work, we propose a new approach to improve the 2D-CELP coding of still images by designing a finite set of adaptive predictors using a clustering method. The adaptive predictors are then applied in a 2D-CELP system with a codebook using a fixed code-vector size. However, when a fixed block size is used for the two-dimensional prediction, we do not exploit the fact that larger blocks could be used for the low-detail regions of the image and smaller ones for the high-detail parts. To do so, a Variable Block-size (VB) 2D-CELP coding method is proposed. The choice of the appropriate size for a block being processed is made after analyzing the mean-squared prediction error inside it. As will be seen later, our simulations show the advantage of varying the block size in comparison with the fixed block size of the original 2D-CELP.
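The block-size decision just described can be sketched as a quadtree split driven by the mean-squared prediction error. The threshold, minimum block size, and error map below are illustrative assumptions, not the parameters used in VB 2D-CELP.

```python
import numpy as np

def split_blocks(err_img, top, left, size, thresh, min_size=4, out=None):
    # Quadtree block-size selection: keep a block if its mean-squared
    # prediction error is below the threshold, otherwise split it into
    # four quadrants (down to a minimum block size).
    if out is None:
        out = []
    blk = err_img[top:top+size, left:left+size]
    if size <= min_size or (blk ** 2).mean() <= thresh:
        out.append((top, left, size))
    else:
        half = size // 2
        for dt in (0, half):
            for dl in (0, half):
                split_blocks(err_img, top + dt, left + dl, half,
                             thresh, min_size, out)
    return out

if __name__ == "__main__":
    err = np.zeros((16, 16))
    err[8:, 8:] = 5.0                       # one high-detail quadrant
    blocks = split_blocks(err, 0, 0, 16, thresh=1.0)
    print(blocks)
```

On this toy error map the three smooth quadrants stay as large 8x8 blocks while the high-detail quadrant is subdivided into 4x4 blocks, mirroring the large-block/small-block allocation the text motivates.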

3.4 2D-CELP coding system

To describe the coding system under study, we start by introducing some useful notation. In order to simplify the explanation, we will first describe the structure of the decoder and all of the operations involved. The encoder will then be introduced as a bank of decoders running in parallel.

3.4.1 Definitions and notation

We assume that the input image $x(m,n)$ is of size $M_1 \times M_2$, defined on a rectangular lattice, $0 \le m \le M_1 - 1$, $0 \le n \le M_2 - 1$. The upper left pixel in the image has coordinates $(0,0)$. The image is divided into disjoint blocks $B_i$ consisting of $L$ pixels each. The coordinates of the pixels in block $B_i$ are given by:

$$B_i = \{(o_{i1} + b_1,\; o_{i2} + b_2) : (b_1, b_2) \in \mathcal{B}\}$$

where $(o_{i1}, o_{i2})$ is the origin of the $i$th block and $\mathcal{B}$ is a set of size $L = 2^b \times 2^b$ that determines the geometry of the block. We denote the $i$th block of pixel intensities $x_i = \{x(m,n) : (m,n) \in B_i\}$.
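As a concrete reading of this notation, assuming $\mathcal{B}$ is the square index set $\{0, \ldots, 2^b - 1\}^2$ (the simplest geometry consistent with $L = 2^b \times 2^b$), the pixel coordinates of a block can be enumerated from its origin:

```python
def block_coords(origin, b=2):
    # Pixel coordinates of a 2^b x 2^b block B_i with origin (o_i1, o_i2),
    # i.e. {(o_i1 + b1, o_i2 + b2) : 0 <= b1, b2 < 2^b}. The square index
    # set for the block geometry is an assumption made for this sketch.
    o1, o2 = origin
    n = 1 << b
    return [(o1 + b1, o2 + b2) for b1 in range(n) for b2 in range(n)]

if __name__ == "__main__":
    # A 2x2 block (b = 1) whose origin sits at row 4, column 8.
    print(block_coords((4, 8), b=1))
```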


The coding scheme uses a finite set of $K$ linear predictors $H_k$ with a two-dimensional (2D) region of support $F$. We use block-adaptive prediction, where a predictor is selected at the encoder for a block of image samples, and the information required to specify the predictor is sent as side information. The $k$th predictor is given by:

$$\tilde{x}(m,n) = \sum_{(p,q) \in F} h_k(p,q)\, \hat{x}(m-p,\, n-q)$$

where $F$ is the region of support of the 2D predictor impulse response $h_k(p,q)$.

3.4.2 2D-CELP decoder

Fig. 3.2 2D-CELP decoder. [Figure: block diagram of the decoder.]

The 2D-CELP decoder is identical to that of a standard DPCM system with adaptive prediction, except that a single codeword is used to represent a block of residual prediction-error values. A block diagram of the decoder is shown in Figure 3.2. Associated with each image block $B_i$ is a transmitted binary codeword $u_i$, from which we derive both the index $k_i$ of the predictor and a vector $e_i = \{e_i(b_1, b_2) : (b_1, b_2) \in \mathcal{B}\}$ of residuals, chosen from the codebook, to drive the decoder. The vector $e_i$ is selected from the sub-codebook $C_{k_i}$ corresponding to the predictor with index $k_i$. The reconstructed image block, denoted by $\hat{x}_i$, is given by:

$$\hat{x}(m,n) = \sum_{(p,q) \in F} h_{k_i}(p,q)\, \hat{x}(m-p,\, n-q) + e_i(m - o_{i1},\, n - o_{i2}), \quad (m,n) \in B_i \qquad (3.3)$$

Note from the previous equation that the reconstruction is the same as in DPCM. This implies that the total coding error is equal to the quantization error.


The predictor support F and the block geometry B must be chosen so that \hat{x}(m, n) is causally computable from previously reconstructed samples in the same block and in previously reconstructed blocks. Examples will be given later. Since, at no extra transmission cost, we can use a different residual codebook for each predictor H_k, the residual codebook is partitioned into K sections, each corresponding to one predictor. These sub-codebooks are sequentially organized into a global codebook of size N = K N_s, where N_s is the number of code-vectors in each sub-codebook C_k, k = 1, ..., K. A binary codeword u_i identifies an index v_i (v_i \in \{1, 2, ..., N\}) from which we get the prediction filter index k_i,

k_i = \lceil v_i / N_s \rceil    (3.4)

and also an index in sub-codebook C_{k_i}, v_i − (k_i − 1) N_s, that specifies the vector of residuals selected for the block being decoded, where \lceil \cdot \rceil denotes rounding up to the nearest integer.
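The index demultiplexing of Eq. (3.4) can be sketched as follows; the ceiling convention, the 1-based indices, and the function name are our assumptions, not taken from the thesis:

```python
def demux_codeword_index(v, Ns):
    # Recover the predictor index k_i (Eq. 3.4, assumed ceiling) and the
    # residual index v_i - (k_i - 1) * N_s inside sub-codebook C_k from the
    # global 1-based index v in 1..K*Ns.
    k = (v + Ns - 1) // Ns        # ceil(v / Ns)
    j = v - (k - 1) * Ns          # position within C_k, in 1..Ns
    return k, j
```

For example, with N_s = 4 code-vectors per sub-codebook, global index 5 maps to the first code-vector of the second sub-codebook.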

3.4.3 2D-CELP encoder

Fig. 3.3 2D-CELP encoder.

The block diagram of the encoder is shown in Figure 3.3. The role of the encoder is to select the predictor index k_i and residual vector e_i to minimize the coding error d(x_i, \hat{x}_i), where d is an appropriate distance measure. We have simplified this by using a two-step


procedure. The predictor is first selected based on the input signal alone, to minimize the total squared prediction error for the block. Define the total squared residual for block B_i using predictor H as

r_i(H) = \sum_{(m,n) \in B_i} \left[ x(m, n) - \sum_{(p,q) \in F} h(p, q)\, x(m - p, n - q) \right]^2    (3.5)

Then,

k_i = \arg\min_k r_i(H_k)    (3.6)

The residual vector e_i is then chosen to minimize the coding error using predictor k_i:

e_i = \arg\min_{e \in C_{k_i}} d(x_i, \hat{x}^{(k_i, e)})    (3.7)

where

\hat{x}^{(k,e)}(m, n) = \sum_{(p,q) \in F} h_k(p, q)\, \hat{x}^{(k,e)}(m - p, n - q) + e(m - o_{i1}, n - o_{i2}), \quad (m, n) \in B_i    (3.8)

Note that if (m − p, n − q) ∉ B_i, then \hat{x}^{(k,e)}(m − p, n − q) is assumed to belong to a previously reconstructed block, and is independent of (k, e).

3.4.4 Residual codebook design

The design of the residual codebook for the 2D-CELP system is based on a successive clustering method proposed in [77]. The input data to the algorithm is a training set of images. Proper choice of this set is important, since the representative code-vectors are directly obtained from it. As previously mentioned, in 2D-CELP with fixed code-vector size 2^b × 2^b and K predictors, the codebook consists of K sub-codebooks of equal or different sizes: C = \cup_{k=1}^{K} C_k, where C_k = \{e_{kj} ; j = 1, ..., L_k\} and L_k is the size of the sub-codebook corresponding to predictor class k. The algorithm consists of updating a code-vector in the codebook every time a block of the training sequence is coded. Suppose that block x_i is being processed, with best predictor H_k, such that the code-vector in the current codebook resulting in the minimum distortion is e_{kj}. Then, if e is the actual prediction error, the code-vector is updated using:

e_{kj} \leftarrow \frac{N_{kj}\, e_{kj} + e}{N_{kj} + 1}    (3.9)

N_{kj} \leftarrow N_{kj} + 1    (3.10)

where N_{kj} is the number of training sequence vectors that have been coded using the code-vector e_{kj} (more precisely, with the (k, j) code-vector, since e_{kj} is constantly changing). The final codebook is the result after the entire training sequence has been processed. So far, the most popular method for determination of the codebook has been the one proposed by Linde, Buzo and Gray (LBG) [78]. However, this method suffers from several drawbacks, namely its dependence on the initial code-vectors and its rate of convergence. As has been shown in [77], the method of successive clustering described above converges at least two times faster than the LBG method in determining the levels for a quantizer without memory. The codebook for a predictive vector quantizer contains representative vectors for the prediction errors with a quantizer in the feedback loop. This sequence of errors depends on the code-vectors of the codebook and is therefore not available for clustering. In most of the previous work in predictive vector quantization [79], [28], difficulties have been reported in designing a codebook by clustering vectors of prediction errors, due to the presence of the feedback loop. In [66], it has been reported that a modified version of the LBG method succeeded in determining a codebook for their system. In our application, we found that the method of successive clustering gave the best results, and it was adopted for the codebook design.
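The successive-clustering step can be sketched as a running mean; the exact form of the code-vector update in Eq. (3.9) is a reconstruction under that assumption, and the names are illustrative:

```python
import numpy as np

def update_codevector(e_kj, N_kj, e):
    # Move the selected code-vector toward the actual prediction-error
    # vector e, as the running mean of all vectors assigned to it so far
    # (assumed form of Eq. 3.9), then increment its count (Eq. 3.10).
    e_new = (N_kj * e_kj + e) / (N_kj + 1)
    return e_new, N_kj + 1
```

After each training block is coded, only the winning code-vector moves, which is why the final codebook depends on the order in which the training sequence is processed.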

3.5 Block-adaptive prediction

Block-adaptive prediction is proposed in this work due to its advantages in terms of error resilience compared to pel-based continuously-adaptive prediction. It is known that in predictive coding, errors can propagate seriously unless the predictor is carefully designed. This will be discussed in more detail later.

3.5.1 Predictor design algorithm

We propose to choose the set of K predictors based on analysis of a training set. Assume that a number of images are concatenated to form a new large image. Let B_1, B_2, ..., B_N


be the blocks in this training image that will be used to design the predictors. Not all blocks need to be used; in particular, blocks near the edges can be ignored. An iterative clustering method is used to design the predictors, starting from an initial set of predictors. For a given set of predictors, each block is assigned to the class corresponding to the predictor giving the lowest total squared prediction error for that block. Then, for the given classification, a set of predictors that minimizes the total squared prediction error for all blocks within each class is found. This is continued iteratively until convergence is achieved. The algorithm is as follows. Initialize the K predictors H_1^{(0)}, ..., H_K^{(0)} in some arbitrary fashion and set l = 0; then:

1. Set l = l + 1.

2. Assign each block to the class of its best predictor, c_i^{(l)} = \arg\min_k r_i(H_k^{(l-1)}), and compute the total error E^{(l)} = \sum_i r_i(H^{(l-1)}_{c_i^{(l)}}), where r_i(H) is defined in Eq. (3.5).

3. For k = 1, ..., K,

H_k^{(l)} = \arg\min_H \sum_{i : c_i^{(l)} = k} r_i(H)

4. If some stopping criterion based on E^{(l)} and E^{(l-1)} is not satisfied, go to 1; else stop.

The minimization in Step 3 is a least squares problem. Let the predictor order (the number of elements in F) be P, and arrange the elements h(p, q) into a P × 1 column vector h in some arbitrary order. Define the P × 1 column vector X(m, n) containing the values x(m − p, n − q) in the order implied by the ordering of h. Then,

\hat{x}(m, n) = h^T X(m, n)


Define

J_k(h) = \sum_{i : c_i = k} \; \sum_{(m,n) \in B_i} \left[ x(m, n) - h^T X(m, n) \right]^2

By the orthogonality condition of least squares, the vector h minimizing J_k(h) is given by the solution of the normal equations:

R_k h = r_k

where

R_k = \sum_{i : c_i = k} \; \sum_{(m,n) \in B_i} X(m, n)\, X(m, n)^T

and

r_k = \sum_{i : c_i = k} \; \sum_{(m,n) \in B_i} x(m, n)\, X(m, n)
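The per-class least-squares fit of Step 3 can be sketched with NumPy; the data layout and function name are ours, not the thesis's:

```python
import numpy as np

def design_predictor(blocks):
    # blocks: list of (X, x) pairs for one predictor class, where X stacks
    # the support vectors X(m, n)^T row by row (shape L x P) and x holds
    # the corresponding target samples x(m, n) (shape L).
    A = np.vstack([X for X, _ in blocks])       # all support vectors
    y = np.concatenate([x for _, x in blocks])  # all target samples
    h, *_ = np.linalg.lstsq(A, y, rcond=None)   # solves the normal equations
    return h
```

On noise-free data generated by a fixed linear predictor, this fit recovers that predictor exactly, mirroring the open-loop AR experiment reported in Section 3.5.3.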

3.5.2 Design issues

Prediction mask

As previously stated, the prediction mask must be chosen so that \hat{x}(m, n) is causally computable from previously reconstructed samples. Assuming that computation proceeds according to line-by-line scanning, a suitable prediction mask is shown in Figure 3.4. The mask is specified by the integers P_1, P_2, and Q.

The predictor order is P = P_1 + Q × (P_1 + P_2). We want to choose the prediction mask so that good prediction is obtained on a wide variety of image structures.
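One plausible enumeration of the support, consistent with the stated order P = P_1 + Q × (P_1 + P_2), can be sketched as follows; the exact shape in Figure 3.4 is not recoverable here, so this layout is an assumption:

```python
def prediction_mask(P1, P2, Q):
    # Offsets (dm, dn): x(m, n) is predicted from x(m - dm, n - dn).
    # P1 samples to the left on the current line, plus a run of P1 + P2
    # samples on each of the Q previous lines (assumed layout).
    offsets = [(p, 0) for p in range(1, P1 + 1)]               # current line
    for q in range(1, Q + 1):                                  # previous lines
        offsets += [(p, q) for p in range(-(P2 - 1), P1 + 1)]
    return offsets
```

With P_1 = 1, P_2 = 2 and Q = 1 (the order-4 predictor used later in Section 3.5.3), the mask contains the four required elements.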

Block geometry

The condition that \hat{x}(m, n) be causally computable from previously reconstructed blocks imposes a relation between the block geometry B and the prediction mask. For example,

Fig. 3.4 Geometry of two-dimensional predictor support.

if P2 ≥ 1, rectangular blocks cannot be used because, to be predicted, pixels on the right-hand boundary of the block would require samples from blocks as yet uncoded. To avoid this problem, a non-rectangular block geometry can be used. Figure 3.5 shows suitable 16-element block shapes for P2 = 1, 2, 3 that ensure causal computability.

Fig. 3.5 Block shapes that ensure causal computability for P2 = 1, 2, 3.

Block size

The choice of block size 2^b × 2^b results from a compromise. Since the index of the predictor must be transmitted for each block, a large block size will minimize this overhead. On the other hand, the reason for using adaptive prediction is to respond to changes in local structure, so the block size must be small enough to achieve this. The block size used is chosen experimentally, based on the coding performance.


Number of predictors

The number of predictors also affects the overhead required to transmit the prediction index. More predictors are required to adapt to different types of local structure. The number of predictors should also be based on overall coding performance.

Initial predictors

The choice of initial predictors H_1^{(0)}, ..., H_K^{(0)} is important, since a poor choice may result in convergence to a local optimum. We have chosen the initial predictors based on the assumption that each block contains either no specific directionality or one dominant orientation. Thus each of the initial predictors favors one of the orientations shown in Figure 3.6. For example, for the case of five predictors, the initial predictors we have used are z_1^{-1}, z_1^{-1} z_2^{-1}, z_2^{-1}, z_1 z_2^{-1}, and \frac{1}{4}(z_1^{-1} + z_1^{-1} z_2^{-1} + z_2^{-1} + z_1 z_2^{-1}), corresponding to predicted values \hat{x}(m, n) = D, A, B, C, and (A + B + C + D)/4.

Fig. 3.6 Orientation of initial predictors.

3.5.3 Open loop predictor design results

In order to test the algorithm under controlled conditions, experiments were first conducted in an open-loop scheme on a synthetic image modeled as the output of an autoregressive (AR) filter excited with white noise. Different sets of predictors with randomly chosen coefficients were used to initialize the design algorithm. At convergence, the same value of Mean Squared Prediction Error (MSPE) was obtained for each starting point, and the optimum predictors were close to the AR filter coefficients of the test image. Computer simulations of the algorithm described above were also carried out on real images. Five images of size 256 × 256 were used in the training set constructed for determining the adaptive predictors. The algorithm was tested for K = 7, a prediction order equal to four and a block size of 3 × 3, with three different starting points. In the first two


starting points, each predictor had arbitrarily selected coefficients, subject to the constraint that the sum of the coefficients was one. In the third set, each of the seven predictors was chosen to favor one of the seven orientations represented in Figure 3.7.

Fig. 3.7 Oriented predictors.

For measuring the performance of each set of predictors found at convergence of the algorithm, the MSPE was calculated, as well as the prediction gain in dB given by 10 log10(σ_x² / σ_e²), where σ_x² is the variance of the input image and σ_e² that of the prediction error. The results of the prediction of a 256 × 256 window of the image "boat" (Figure 3.9), which contains many lines and edges, with the predictors found at convergence of the algorithm are given in Table 3.1. The results show that the starting point has a small impact on the final solution obtained. In this test, the difference was at most 0.34 dB in prediction gain. In further tests we restrict our choice of the initial predictors to oriented ones such as those of set 3, as they yield the highest prediction gain and the lowest MSPE. Table 3.2 shows the MSPE resulting from applying the sets of adaptive predictors to a 512 × 512 window of the image "boat". In all cases the predictor order is 4 (P1 = 1, P2 = 2, Q = 1). The tests were done with K = 1, 5, 7 and 8 predictors, and with block sizes of 2 × 2 up to 6 × 6. As expected, the MSPE decreases as the number of predictors is increased, and increases as the block size is increased. The advantage of using more than one predictor is clear. However, the choice of the optimal number of predictors and block size must be made in the context of a coding system, based on the overall bit rate for residual and predictor selection information.

Table 3.1 Influence of initial conditions on final predictors (initial and final MSPE and prediction gain for the three starting sets).

Table 3.2 MSPE for different sets of predictors (K = 1, 5, 7 and 8, predictor order 4).

3.6 Variable Block-size 2D-CELP coding

3.6.1 Variable block-size coding

The 2D-CELP system described assumes that the image to be coded is partitioned into non-overlapping blocks of fixed size for both the prediction and the quantization. However, most natural images contain both high-detailed and low-detailed regions. The low-detailed regions are almost homogeneous, where the prediction gives residuals that are not only of low amplitude but also of a certain uniformity. The high-detailed segments are characterized by edges of different orientations and large transitions in the gray levels. The 2D-CELP decoder thus needs more information for the blocks belonging to these regions in order to reconstruct them well. This suggests the use of larger blocks for the vector quantizer in homogeneous areas and smaller blocks in more detailed areas. Based on this observation, we propose a 2D-CELP system with variable VQ block-size. The block size for predictor switching is equal to the VQ block size. The decision on subdivision of a quantization block is based on an analysis of the prediction error. During the coding process, starting with the largest block-size considered, an image


block can be coded at its original size or split into four congruent blocks. Each sub-block can be further subdivided into four sub-blocks. The process of subdivision continues until the smallest allowable block-size is attained. The overhead of the variable block-size coding method is that, first, a partition map for the image has to be transmitted. This can be done efficiently using the quad-tree structure. Second, different codebooks are needed for the different coding block sizes implemented. However, these codebooks are determined only once, when the coder is designed.

3.6.2 Notation

So far, the input blocks have been denoted by B_i. For simplicity and future convenience, we will refer to input blocks by the block of pixels indexed by its corresponding size. Given that the block size is 2^b × 2^b, the input block will be denoted by x_i^{(b)}, where i denotes the time index. Let S^{(b)} be the class of blocks coded with size 2^b × 2^b; S^{(b)} indicates the spatial positions of the blocks belonging to it. Therefore, when we want to refer to blocks belonging to class S^{(b)} we will write x_i^{(b)} ∈ S^{(b)}. The resulting coded quad-tree is denoted by S. It corresponds to the family of S^{(b)} for b = b_0, ..., b_max: S = \cup_{b=b_0}^{b_max} S^{(b)}. A codebook corresponding to block size 2^b × 2^b will be denoted by C^{(b)}, to which are associated an index set I^{(b)} and an entropy codebook U^{(b)} that contains the codewords, as will be detailed later. Similarly, as different sets of adaptive predictors are used for the different block sizes, prediction filters will be denoted using their corresponding block size. Therefore, for each block size 2^b × 2^b we have a set of K predictors denoted by \{H_k^{(b)}\}_{k=1}^{K}.

3.6.3 Variable block-size coding concept

Using b_max − b_0 + 1 different block sizes implies the design of more than one codebook. Each codebook C^{(b)} corresponds to a specified block size 2^b × 2^b and is partitioned into K sub-codebooks, where K is the number of predictors used for each block size: C^{(b)} = \cup_{k=1}^{K} C_k^{(b)}, b = b_0, ..., b_max. A flowchart of the algorithm is shown in Figure 3.8. The operation of the method is illustrated with a system using three block sizes. The image is initially partitioned into disjoint blocks x_i^{(b_max)} consisting of L^{(b_max)} = 2^{b_max} × 2^{b_max} pixels. These blocks are referred


to as base blocks. Once the predictor resulting in the minimum total squared residual is selected, as explained by Eq. (3.5) and Eq. (3.6), the quantity r_i(H_{k_i}^{(b_max)}) is compared to a threshold λ_1. If r_i(H_{k_i}^{(b_max)}) is less than λ_1, the block size is maintained at L^{(b_max)} and x_i^{(b_max)} is coded by means of 2D-CELP, where the sub-codebook C_{k_i}^{(b_max)} is selected to give the code-vector. If r_i(H_{k_i}^{(b_max)}) is greater than λ_1, x_i^{(b_max)} is subdivided into four congruent blocks \{x_{ij}^{(b_max-1)}\}_{j=1}^{4}, each of size L^{(b_max-1)} = 2^{b_max-1} × 2^{b_max-1}. For each block x_{ij}^{(b_max-1)}, using the predictor that results in the minimum prediction error, the total squared residual r_{ij}(H^{(b_max-1)}) is compared to a threshold λ_2. Then, each sub-block x_{ij}^{(b_max-1)} is either maintained at its size and coded by the 2D-CELP scheme using sub-codebook C_{k_j}^{(b_max-1)}, or subdivided in turn into four blocks \{x_{il}^{(b_0)}\}_{l=1}^{4} of size L^{(b_0)} = 2^{b_0} × 2^{b_0}, each coded using sub-codebook C_{k_l}^{(b_0)}. Here, k_l denotes the index of the predictor that results in the minimum prediction error for a given block x_{il}^{(b_0)}.
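The recursive subdivision just described can be sketched as follows; best_residual() is only a placeholder for the predictor search of Eqs. (3.5)-(3.6), and all names and the data representation are illustrative:

```python
import numpy as np

def best_residual(block):
    # Stand-in for the predictor search of Eqs. (3.5)-(3.6): here simply the
    # block's squared deviation from its mean, NOT the thesis's residual.
    return float(np.sum((block - block.mean()) ** 2))

def code_block(block, b, b_min, thresholds):
    # Keep the 2^b x 2^b block if its best residual is below the threshold
    # for that size; otherwise split it into four congruent sub-blocks and
    # recurse, down to the smallest allowed size 2^b_min x 2^b_min.
    if b == b_min or best_residual(block) < thresholds[b]:
        return [(b, block)]                     # leaf: coded at this size
    h = block.shape[0] // 2
    quads = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    return [leaf for q in quads
                 for leaf in code_block(q, b - 1, b_min, thresholds)]
```

A homogeneous base block stays at its original size, while a high-activity block is driven down to the smallest size, matching the behavior observed in the segmentation maps of Figures 3.13-3.16.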


Fig. 3.8 Variable Block-size 2D-CELP coding flowchart.

Note that in the coding flowchart represented in Figure 3.8, the subdivision steps inherently include a predictor selection procedure for the block size under consideration.


3.6.4 Threshold of block subdivision

A base block is represented with the fewest bits if it is not subdivided. Thus, a block should be subdivided only if it cannot be coded sufficiently accurately at the given size. This could be accomplished by coding the block at that size and basing the decision on whether or not to subdivide on the coding error. Because the coding is costly, we have instead based the decision on the prediction error. The threshold values used are based on an empirical determination of the relationship between the total squared prediction error and the coding error. Thresholds that maximize the PSNR value are selected.

3.6.5 Quad-tree structure encoding

The overhead of the VB 2D-CELP method compared to the originally proposed method is that a partition map for the image has to be transmitted. Information on how a base block x^{(b_max)} has been subdivided may be transmitted as side information. This can be done efficiently using the quad-tree structure. However, this may require a considerable part of the bit rate, especially at low bit rates. The bit rate needed for the transmission of the segmentation structure, in other words the quad-tree data structure, can be determined using different approaches, which we briefly describe in the following; a more detailed discussion will be presented in later chapters when dealing with error resilience. The simplest approach consists of binary encoding of the quad-tree using one bit per node: "0" indicating a leaf node and "1" indicating a parent node. In the case of two block sizes, a single bit can specify whether or not the base block has been subdivided. In the case of three sizes, with two levels of subdivision, a simple approach consists of using a fixed-length five-bit codeword to provide this side information, since there are seventeen possible ways to subdivide the base block.
As the segmentation can be very detailed, which at low bit rates would require a considerable part of the total bit rate, variable length coding can be used to minimize this side information.
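The one-bit-per-node encoding can be sketched as follows; the tree representation ('leaf' strings and four-element lists) is our own illustrative choice:

```python
def encode_quadtree(node):
    # Depth-first binary encoding of the partition map: '1' marks a
    # subdivided (parent) node, '0' marks a leaf.  A node is either the
    # string 'leaf' or a list of four child nodes.
    if node == 'leaf':
        return '0'
    return '1' + ''.join(encode_quadtree(child) for child in node)
```

With two levels of subdivision, each of the four children of a split base block is independently a leaf or a parent, giving 1 + 2^4 = 17 possible partitions of the base block, which is why a fixed-length five-bit codeword also suffices.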

3.6.6 Codeword multiplexing In the original 2D-CELP scheme, a single codeword is used for each block processed. As the block size is fixed, only the predictor index and the code-vector index are multiplexed in the codeword. In the variable block-size coding system, a single codeword is also used for each


block processed. In addition to the two information items mentioned above, the codeword also has to indicate the block size used. Therefore, for each block x_i^{(b)} being coded, the index of the code-vector selected will be multiplexed with the filter index k_i and the block size b to form the codeword u_i for the block. Variable block-size coding implemented in the VB 2D-CELP coding scheme offers the potential of a better allocation of the number of bits spent per unit area according to the local detail in the image. Using fixed-length codes, however, would not be efficient in terms of rate-distortion performance, nor in achieving the benefits of variable block-size coding. Therefore, Huffman coding is used in our image coding scheme. In each Huffman code three types of information are multiplexed: block size, prediction filter index, and code-vector index. Using b_max − b_0 + 1 different block sizes implies the design of b_max − b_0 + 1 codebooks. Each codebook C^{(b)} is partitioned into K sub-codebooks, where K is the number of predictors H_k^{(b)}. If the codebooks C^{(b)} are of the same size and are organized sequentially into a global codebook C, and if the numbers of code-vectors in the sub-codebooks are all equal to N_s, then the extraction of the information needed to drive the decoder can be described as follows. Given that v_i denotes the code-vector index extracted from codeword u_i corresponding to the image block being decoded, the index b pointing to the block size to be used is obtained first, from the position of v_i in the global codebook; the prediction filter index is then extracted within codebook C^{(b)}, in the same manner as in Eq. (3.4).

3.7 Image transmission over a noiseless channel

This section presents the results of applying the 2D-CELP and VB 2D-CELP algorithms to image coding, and compares them with the block-DCT based JPEG algorithm. In our approach, the training sequence used for codebook design is different from the one used for the determination of the adaptive predictors. The training sequence used for the


codebook design of the 2D-CELP system is composed of five images of size 512 × 512 each. The codebook is divided into K sub-codebooks of equal size, where K is the total number of predictors used. In order to generate results at different bit rates, codebooks of different sizes are generated. The block size for prediction 2^b × 2^b and the one for VQ can be different, but both remain constant during the coding. To evaluate the coding performance numerically, the Peak Signal to Noise Ratio (PSNR) between the original image x(m, n) and the coded image \hat{x}(m, n) is calculated, where

PSNR = 10 \log_{10} \frac{255^2}{\frac{1}{M_1 M_2} \sum_{m,n} \left( x(m, n) - \hat{x}(m, n) \right)^2}    (3.23)
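Eq. (3.23) as a small helper, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(x, x_hat):
    # PSNR of Eq. (3.23): 10 * log10(255^2 / MSE) for 8-bit images.
    mse = np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

(The helper assumes the two images differ somewhere; for identical images the MSE is zero and the ratio is undefined.)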

For analyzing the advantages of the block-adaptive prediction method proposed, the test image "boat" (Figure 3.9) is used, because it is rich in edges with different orientations. The advantage of designing more than one predictor can be seen in the enlarged windows of "boat" (Figure 3.10) shown in Figure 3.11 and Figure 3.12, corresponding to the image coded at about 0.4 bpp by means of the 2D-CELP scheme using respectively one and five predictors and b = 2, that is, a block size L = 4 × 4. Although the use of 5 predictors leads to a small decrease in PSNR, the subjective quality is better than in the case of one predictor. This decrease in PSNR is mainly due to the increase in bit rate caused by the predictor side information. The VB 2D-CELP has been tested with different combinations of block sizes, threshold values and codebook sizes. The results have shown that the larger blocks are also of importance, since they may contain single edges. Hence, the option of designing codebooks of equal size has been selected. Experimental results have shown that decreasing the threshold increases the number of smaller blocks used. Here a question can be asked: which is better, increasing the codebook sizes for a given threshold, or decreasing the threshold for a given codebook size? Using two possible sizes 4 × 4 and 2 × 2 and testing different values of the threshold λ, we found that it is preferable to increase the size of the codebooks while keeping λ constant, rather than decreasing λ while maintaining the same codebook sizes. To show the result of the subdivision process, we represent the segmented images using the different coding block sizes. Figure 3.13 shows the variable block-size segmentation for the image "boat" using two block sizes of 4 × 4 and 2 × 2. Figure 3.14 shows similar results for


the image "lena". In the figures, the high-activity region (block size of 2 × 2) is represented by the gray level 0 and the low-activity region (block size of 4 × 4) is represented by a light shade. It is observed that the detailed areas are effectively coded using the smallest block size, and that the largest size is used for areas of the images that can be considered uniform. Similar images are shown for the case where three block sizes are used, namely 8 × 8, 4 × 4 and 2 × 2. The segmented images are shown in Figure 3.15 for "boat" and in Figure 3.16 for "lena". For the image "boat", PSNR results as a function of the coding bit rate are given in Figure 3.17 for 2D-CELP with a fixed code-vector size as well as with a variable code-vector size. For 2D-CELP, a coding block size of 4 × 4 and a set of K = 5 adaptive predictors of order four have been used. For the purpose of comparison, the other curves are also generated with sets of five adaptive predictors of order four corresponding to the appropriate coding block size. Using variable block-size coding, two VB 2D-CELP curves are represented for two combinations of block sizes. One curve corresponds to a subdivision from 4 × 4 to 2 × 2, and the other also allows blocks of size 8 × 8. Figure 3.17 shows the improvement accomplished by the VB 2D-CELP when comparing the results with those obtained for coding "boat" by the original 2D-CELP scheme. In this figure, the coding bit rate is obtained based on Huffman variable-length coding. As previously mentioned, each codeword indicates all information for the block being coded regarding its selected code-vector, coding block-size and prediction filter. For future convenience, and as will be discussed later, the coding bit rate based on separation of the index bit rate and side information is also provided. In this method, different Huffman codes are used in coding the code-vector indices and the predictor indices.

The information about the block size is indicated by the code used for each predictor corresponding to a certain block size. For example, in the case of two coding block sizes, two sets of adaptive predictors, each containing 5 predictors, are used. This results in 10 predictors, for which a Huffman-based variable-length code can be generated. Based on the above-mentioned coding bit rate calculation, the PSNR performance results corresponding to the image "boat" are provided using the VB 2D-CELP coding scheme and the JPEG algorithm. Figure 3.18 shows the improvements of our proposed method over the standard JPEG method. This improvement is even clearer when comparing the quality of the coded images. We compare the coded images at approximately 0.54 bpp and 0.51 bpp for the different systems considered. Figure 3.20 shows the VB 2D-CELP


coded image at 0.545 bpp with a PSNR of 33.58 dB. This image is coded using two block sizes: 4 × 4 and 2 × 2. Allowing the 8 × 8 block size as well results in a bit rate of 0.51 bpp and a PSNR of 34.45 dB. The corresponding image is shown in Figure 3.21. The JPEG coded images at these two rates are given in Figure 3.22 and Figure 3.23. The enlarged window in Figure 3.24 corresponds to VB 2D-CELP with block sizes of 4 × 4 and 2 × 2. In Figure 3.25, we give the window corresponding to "boat" coded with VB 2D-CELP where a size of 8 × 8 is also allowed. The enlarged windows of the coded image (at the two rates for the previously mentioned cases) by means of the DCT-based JPEG algorithm are given in Figure 3.26 and Figure 3.27. The 8 bpp image "lena" of size 512 × 512 is also used in our experiments. Given that the best improvement was obtained by the VB 2D-CELP system, two comparisons with JPEG are presented in Figure 3.19. We also show the coded images at two coding rates for the VB 2D-CELP and JPEG systems. Figure 3.28 shows the coded image at 0.517 bpp with PSNR = 34.89 dB. This image is coded with two block sizes, 4 × 4 and 2 × 2. The JPEG coded image at 0.51 bpp has a PSNR of 34.74 dB and is shown in Figure 3.29. Using three coding block sizes in the VB 2D-CELP scheme results in a PSNR of 35.19 dB at 0.459 bpp. This image is represented in Figure 3.30, and the JPEG coded image at 0.462 bpp with PSNR = 34.26 dB is represented in Figure 3.31. Finally, it is important to mention that, regarding the computational complexity of the algorithms, the VB 2D-CELP needs less computation time than the 2D-CELP for a given bit rate. In fact, once the block size is selected, the corresponding sub-codebook from which the code-vector has to be chosen is smaller than the one in the fixed block-size 2D-CELP codebook.

3.8 Summary We have described a clustering approach for the design of block-adaptive predictors to be used in 2D-CELP coding schemes. The choice of block size, number of predictors and predictor order is made to balance prediction gain with the side information required to specify the predictor for each block. We have presented the results of an empirical optimization over a range of bit rates of about 0.45-0.85 bpp. Although fixed block size 2D-CELP is superior to ADPCM, and allows much lower rates, it remains inferior to DCT methods such as JPEG. We have introduced a variable block-size 2D-CELP algorithm that


improves performance over the fixed block size version by up to 3 dB in PSNR and exceeds JPEG by up to 1.5 dB, with no visible block artifacts and better rendition of certain oblique details. As VB 2D-CELP is shown to be a promising algorithm for still image compression, we consider the method as our basic source coding technique. In order to implement a robust error resilient 2D-CELP based source codec, further work remains on error sensitivity analysis in the wireless fading channel, and robust design in order to provide reliable VB 2D-CELP coded image transmission over wireless channels.


Fig. 3.9 Original image "boat".

Fig. 3.10 Enlarged window of "boat".

Fig. 3.11 Enlarged window of 2D-CELP coded "boat" with K = 1, block size = 4 × 4: bit rate = 0.4 bpp, PSNR = 31.39 dB.

Fig. 3.12 Enlarged window of 2D-CELP coded "boat" with K = 5, block size = 4 × 4: bit rate = 0.39 bpp, PSNR = 30.89 dB.


Fig. 3.13 Variable block-size segmentation for image "boat": high-activity region (block size 2 × 2) represented by the 0 gray level and low-activity region (block size 4 × 4) represented by a light shade.

Fig. 3.14 Variable block-size segmentation for image "lena": high-activity region (block size 2 × 2) represented by the 0 gray level and low-activity region (block size 4 × 4) represented by a light shade.

Fig. 3.15 Variable block-size distribution for image "boat": large blocks are those in light shade; the three regions from brighter to darker represent 8 × 8, 4 × 4 and 2 × 2 sizes.

Fig. 3.16 Variable block-size distribution for image "lena": large blocks are those in light shade; the three regions from brighter to darker represent 8 × 8, 4 × 4 and 2 × 2 sizes.


Fig. 3.17 PSNR performance for image "boat" coded with 2D-CELP using a fixed block size of 4 × 4, and with VB 2D-CELP using variable block-size coding.

Fig. 3.18 Performance of VB 2D-CELP coding for the image "boat" versus JPEG.

Fig. 3.19 Performance of the VB 2D-CELP coding system for the image "lena" versus JPEG.


Fig. 3.20 Image "boat" coded with VB 2D-CELP using two block sizes 4 x 4 and 2 x 2: bit rate=0.545 bpp, PSNR=33.58 dB.

Fig. 3.21 Image "boat" coded with VB 2D-CELP using three block sizes 8 x 8, 4 x 4, and 2 x 2: bit rate=0.519 bpp, PSNR=34.45 dB.

Fig. 3.22 JPEG coded "boat": bit rate=0.543 bpp, PSNR=33.40 dB.

Fig. 3.23 JPEG coded "boat": bit rate=0.51 bpp, PSNR=33.0 dB.


Fig. 3.24 Enlarged window of "boat" coded with VB 2D-CELP using two block sizes 4 x 4 and 2 x 2: bit rate=0.545 bpp, PSNR=33.58 dB.

Fig. 3.25 Enlarged window of "boat" coded with VB 2D-CELP using three block sizes 8 x 8, 4 x 4, and 2 x 2: bit rate=0.519 bpp, PSNR=34.45 dB.

Fig. 3.26 Enlarged window of JPEG coded "boat": bit rate=0.543 bpp, PSNR=33.40 dB.

Fig. 3.27 Enlarged window of JPEG coded "boat": bit rate=0.51 bpp, PSNR=33.0 dB.


Fig. 3.28 Image "lena" coded with VB 2D-CELP using two block sizes 4 x 4 and 2 x 2: bit rate=0.517 bpp, PSNR=34.89 dB.

Fig. 3.29 JPEG coded "lena": bit rate=0.51 bpp, PSNR=34.74 dB.

Fig. 3.30 Image "lena" coded with VB 2D-CELP using three block sizes 8 x 8, 4 x 4, and 2 x 2: bit rate=0.459 bpp, PSNR=35.19 dB.

Fig. 3.31 JPEG coded "lena": bit rate=0.462 bpp, PSNR=34.26 dB.

Chapter 4 The Wireless Transmission Environment and Channel Error Control

4.1 Introduction

A DS-CDMA system has been accepted as a digital cellular standard (IS-95) and is operating in North America [80]. Transmitting images with high reliability over a system patterned after the IS-95 standard has been one of the motivations of this research. In this chapter we first provide a brief description of CDMA system operation with an emphasis on the IS-95 communication system. We then provide an overview of error control techniques for wireless communications, and describe the error control protocol adopted in this work to provide different QoS requirements for image transmission over CDMA Rayleigh fading channels. This is followed by a description of the transmission system considered herein. In particular, details of the simulation setup of the transceiver and fading channel model are provided. The transmission system described and our choice of error control form the basis of the transmission environment and conditions used in later chapters.


4.2 DS-CDMA transmission environment

4.2.1 Direct-Sequence CDMA

The system operation for CDMA cellular systems, such as IS-95 [80], is based on Direct-Sequence (DS) spread-spectrum signaling, whereby each message bit is represented by a large number of coded bits called chips. Direct-Sequence spread spectrum [81], [7] begins with digital modulation of the signal using standard methods, e.g. Quadrature Phase-Shift Keying (QPSK) modulation. This modulated digital signal is further modulated by a spreading code, whose chip rate is much greater than the bit rate of the signal. As a result, the narrow-band digital signal is spread out to become a wide-band signal, with a bandwidth typically greater than 1 MHz. If the spreading code is wide-band and the message is narrow-band, the resulting signal bandwidth will be nearly the same as that of the spreading sequence. At the receiver, to retrieve the original signal, the received signal is modulated with the same spreading sequence. This signal is then demodulated. In DS-CDMA, different users are allocated different spreading codes, so that interfering signals still appear as wide-band interference after this processing. It is important to note that the message component of the signal is a narrow-band signal, while the interference component is a wide-band signal. This allows the CDMA system to operate correctly, by filtering out most of the wide-band interfering signals [82]. Bandwidth efficiency in fading multi-path channels has been the major concern in the use of DS-CDMA communications. Coding and diversity combining can improve the bandwidth efficiency. Two types of diversity can be used: external or implicit. External diversity can be achieved through the use of multiple antennas, for example. Implicit diversity makes use of the inherent diversity from multi-path reception and can be realized through the use of a RAKE receiver. Both types of diversity can be combined to further improve performance.

4.2.2 Transmission impairments

The major problem in wireless communications is the quality of the radio channel. In the mobile radio propagation channel, the following impairments are experienced: multi-path fading, path loss, and signal shadowing [83]. Multi-path fading, also known as short-term


fading, can be defined as the rapid fluctuations of the received signal strength as a result of multi-path propagation. Path loss is due to the relative rate of signal degradation with distance and is different in outdoor and indoor environments. Signal shadowing occurs in all environments when the signal path is blocked by buildings or furniture. In general, the quality of a CDMA channel is subject to drastic changes. First of all, the fading channel characteristic is constantly fluctuating due to multi-path interference and shadowing. Secondly, there is more than one mobile active at any time instant, resulting in interference between different users. This kind of impairment is called the Multiple Access Interference (MAI). At best, one can describe the behavior of such a channel using statistical models like the widely accepted additive Gaussian noise, Rayleigh fading channel. This channel model involves a random transmission gain expressed as a(t)e^{jφ(t)}, where a(t) is the corresponding amplitude having a probability density function such as Rayleigh, and φ(t) is the phase offset having a uniform distribution on [0, 2π], plus an additive noise with a Gaussian distribution which models the combined effects of thermal noise and MAI.

4.2.3 Transmission system requirements

In the CDMA cellular environment, the characteristics of the uplink (reverse link: mobile-to-base) and the downlink (forward link: base-to-mobile) are different [84]. In the downlink, there is one transmitter and many receivers, so the signals sent to mobile terminals can usually be multiplexed and a coherent reference (pilot signal) can be economically inserted into the multiplexed signal. Therefore, synchronous transmission can be implemented and coherent demodulation can be performed. As a result, the interference from other users can be canceled, at least for the same multi-path components, and hence we get a performance gain compared to asynchronous transmission. In the uplink, there are many transmitters and one receiver. Therefore, it is much easier to design a system operating in the asynchronous mode. Also, inserting in each individual user's signal a pilot whose power is greater than the data-modulated portion of the signal reduces efficiency to less than 50% [85]. Hence, without phase and amplitude estimation, non-coherent or differentially coherent reception is required. M-ary orthogonal modulation is proven to yield good non-coherent performance compared to DPSK modulation. Therefore, M-ary orthogonal modulations are often used in CDMA systems on the uplink [80], in order to reduce the degradations caused by multi-path fading and other


users' interference. When using DS-CDMA for multiple access, performance is affected by the near/far problem. The consequence is a dramatic decrease in the system capacity. Power control is essential in any DS-CDMA system in order to mitigate the near/far problem and minimize the power transmitted by each mobile [86], [7]. Furthermore, the nature of fading channels causes power variations that must be controlled if possible. Power control in DS-CDMA systems attempts to equalize the levels of the received power from all the mobiles within a cell. Perfect power control would keep them equal to a level that delivers the required performance (effective value of Eb/N0) at the output of the receiver. In the complete absence of power control, with all users transmitting at the same power levels irrespective of distance from the base station or fading in the mobile channel, far users will frequently find that the processing gain of the system is insufficient to adequately suppress the interference from other users, particularly those close to the base station, and therefore far-user received data quality will be poor. It is relatively easy to show that in the absence of power control, a CDMA system will support fewer users than an FDMA system [6]. The answer then is to control transmit power levels carefully to ensure the best possible data quality for all users simultaneously [87], [86]. Power control on the uplink attempts to adjust the transmitted power of each mobile station such that the nominal received power from each mobile station at the base station is the same [88]. To achieve this goal, a power control scheme is required to monitor and control the power transmitted by each mobile. As the path loss and slow fading can be assumed to be identical on both the reverse and forward links, open loop power control is used to estimate the path loss and slow fading from a CDMA pilot signal transmitted on the downlink.
That is, the open loop power control can cope with the very slow shadow-type fading, while the effect of the fast fading is compensated by a closed-loop power control scheme. In a fixed-step-size closed-loop power control system, the base station measures the received power from the mobile and compares it with a reference target power. A command bit is generated and sent to the mobile station. A logical '1' command bit signifies that the received power is less than the target power, while a logical '0' bit signifies that the power is greater than the target power. The mobile station adjusts its transmitted power according to the dictates of the command bit with a fixed increment/decrement of power. The closed-loop power control scheme used in IS-95 is a fixed-step power control with the transmitted power changing by ±1 dB in response to each base station command bit.
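The fixed-step closed-loop mechanism described above can be sketched in a few lines of Python. This is a toy model: the channel losses, target level and starting power are hypothetical values chosen for illustration; only the 1 dB step is taken from IS-95.

```python
def power_control_step(received_db, target_db, tx_db, step_db=1.0):
    """One closed-loop iteration: the base station compares received power
    with the target and issues a single command bit; the mobile then steps
    its transmit power up ('1') or down ('0') by a fixed step_db."""
    command = 1 if received_db < target_db else 0
    return tx_db + step_db if command == 1 else tx_db - step_db

# Toy example: a constant 5 dB channel loss, mobile starting at 0 dB.
tx_db, loss_db, target_db = 0.0, 5.0, 0.0
for _ in range(20):
    tx_db = power_control_step(tx_db - loss_db, target_db, tx_db)
# The transmit power ramps up to compensate the loss and then oscillates
# within one step of the value meeting the target, as in a real fixed-step loop.
```

Note that a fixed-step loop never settles exactly on the target; it dithers around it by one step, which is one reason the step size is kept small (1 dB in IS-95).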

4.3 Error control techniques for the wireless channel

In mobile radio communications in which the signal undergoes transmission impairment, the issue of error control is unavoidable. There are two basic categories of error control methods: Forward Error Correction (FEC) [89], and Automatic Repeat reQuest (ARQ) [90], [91]. The primary advantage of FEC is that it does not require a feedback channel. However, it has the drawback of overheads introduced by codes, which may lower the effective data throughput. ARQ does not impose large overheads, but it requires a feedback channel and must be able to tolerate propagation delays in both the forward and reverse directions. FEC can also be used in conjunction with ARQ. The combination of FEC and ARQ for error detection and correction is known as Hybrid ARQ. In the following sections, the principles of FEC, the basic ARQ schemes, as well as Hybrid FEC/ARQ are described. The main advantages and disadvantages of the different techniques, and their applicability for mobile communications, are also summarized, with an emphasis on QoS requirements.

4.3.1 Forward error correction in fading channels

With every transmission/modulation technique, there is an associated error probability, which is dependent on the transmitted signal energy per bit (Eb) and the noise encountered (N0) [92]. Increasing the signal energy to noise ratio per bit (Eb/N0) reduces the probability of error in transmission. However, practical considerations place a limit on Eb/N0. For a fixed Eb/N0, the only way to lower the probability of error is to use coding. The use of coding techniques introduces coding gain, which is defined as the reduction in the required signal power for a given error probability when coding is in use, compared to the signal power required for the same error probability without coding. The reduction in the required signal power is often exploited to reduce transmission power at the expense of reduced throughput, due to coding overheads. In an FEC error control system, an error correcting code (block or convolutional) is used for combating transmission errors due to the limitations of the channel, such as noise and fading. Parity-check bits are added to each transmitted message to form a codeword


(or a code sequence) based on the code used by the system. When the receiver detects the presence of errors in a received word, it attempts to locate and correct the errors. After the error correction has been performed, the decoded word is then delivered to the user. A decoding error is committed if the receiver either fails to detect the presence of errors or fails to determine the exact locations of the errors. In either case, an erroneous word is delivered to the user. Since no retransmission is required in an FEC system, no feedback channel is needed. The throughput of the system is constant, and is equal to the rate of the code used. When a received word is detected in error, it must be decoded, and the decoded word must be delivered to the user regardless of whether it is correct or not. Since the probability of decoding error is much greater than the probability of undetected error, it is harder to achieve high system reliability with FEC schemes. In order to attain reliability, a long powerful error correcting code must be used and a large collection of error patterns must be corrected. This makes decoding hard to implement and expensive. For these reasons ARQ schemes are often preferred over FEC. Of great importance to mobile communications due to their error correction capability, Reed-Solomon (RS) codes have been implemented in a variety of forms. RS codes are a subclass of non-binary BCH codes. Being good for bursty channels, RS codes have been adopted as popular FEC codes for many applications in mobile communications. On the other hand, convolutional encoding, originally developed for deep-space communications, has also found applications in mobile communications. More recently, convolutional encoding has been implemented in digital cellular systems for the protection of speech [80]. The performance of block and convolutional codes in the mobile radio channel is often enhanced by interleaving.
In typical communication systems, the interleaver is placed between the FEC encoder and the modulator. Most block and convolutional codes are designed to combat random independent errors, which occur in memoryless or Additive White Gaussian Noise (AWGN) channels. However, for channels with memory, such as the mobile channel, burst errors are observed, dependent on the signal transmission impairments (fading, Doppler, etc.). The technique of interleaving is intended to disperse the burst errors encountered when the received signal is in a fade. This effectively reduces the concentration of the errors that must be corrected by the channel code. The aim of the interleaver is to make the channel appear random or memoryless to the decoder.
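The dispersion effect can be seen in a minimal block-interleaver sketch (a hypothetical 3 x 4 interleaver; in practice the depth is chosen from the expected fade duration):

```python
import numpy as np

def interleave(bits, rows, cols):
    """Block interleaver: write row-wise into a rows x cols array, read column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.ravel()

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.ravel()

data = np.arange(12)                      # stand-in for 12 coded bits
tx = interleave(data, rows=3, cols=4)     # starts [0, 4, 8, 1, 5, 9, ...]
# A fade hitting the first 3 transmitted symbols corrupts tx positions 0-2,
# which de-interleaving maps back to bits 0, 4 and 8: the burst is dispersed
# into isolated errors that a random-error-correcting code can handle.
rx = deinterleave(tx, rows=3, cols=4)
assert np.array_equal(rx, data)
```

A burst of length up to `rows` is thus separated into single errors spaced `cols` positions apart, at the cost of a delay of one full `rows x cols` block at each end of the link.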


Convolutional codes have an advantage in correcting random errors, whereas RS codes are good at correcting both random and burst errors if interleaving is used, and have reliable error detection capability. In general, longer codes provide better detection and correction capability than shorter codes, hence a higher coding gain can be achieved; however, the penalty is increased complexity and processing delay. With convolutional codes, soft-decision decoding is also an effective way of increasing the coding gain. However, it is important to mention that the coding gains that can be obtained depend very much on the code rate, channel characteristics (which also depend on the environment, outdoor or indoor), decoding techniques, interleaving depth, etc.

4.3.2 Automatic Repeat reQuest schemes

ARQ schemes [91] are an alternative form of error control to FEC, and can be more reliable than FEC schemes where data applications in the mobile radio channel are concerned [89], but at the expense of greater delay. ARQ protocols employ an error detection code and a feedback channel so that the receiver can request retransmission of the erroneous packets, or it can use the feedback channel to acknowledge the correctly received packets. The most common error detection code used in ARQ schemes is the Cyclic Redundancy Check (CRC). The CRC performs a check on the integrity of the data packet received and signals to initiate an appropriate acknowledgment. FEC codes are also used at times in ARQ systems to perform the error detection instead of the CRC. FEC codes can also be used for error protection in ARQ systems, as will be seen later. There are three types of ARQ: the Stop-And-Wait (SAW) ARQ, the Go-Back-N (GBN) ARQ, and the Selective-Repeat (SR) ARQ.

Non-continuous Repeat reQuest

The non-continuous Repeat reQuest protocol is the Stop-And-Wait ARQ. In a SAW ARQ data transmission system, the transmitter sends a single frame to the receiver and waits for an acknowledgment. A positive acknowledgment (ACK) from the receiver signals that the frame has been successfully received (i.e., no errors detected), and the transmitter sends the next block. A negative acknowledgment (NAK) from the receiver indicates that


'A frame of data is referred to as a block or packet.


the frame has been detected in error, and the transmitter resends the same frame. Retransmissions continue until an ACK is received by the transmitter. The SAW ARQ protocol is simple; however, it is inefficient due to the idle time spent waiting for an ACK for each transmitted frame. The time that the protocol remains idle depends on the propagation delays, transmission times, and protocol processing delays.

Continuous Repeat reQuest

Continuous Repeat reQuest (RQ) systems, such as GBN and SR, send information frames continuously before receiving any acknowledgments. These systems are more efficient than SAW, but there must be a limit on the number of frames transmitted or the buffers required will overflow. Therefore, some form of regulation must be introduced. A window is often used to limit the maximum number of frames transmitted. This window (also known as the sliding window) is essentially a buffer containing a list of frames waiting to be acknowledged (the retransmission list). In a GBN ARQ system, a series of frames is transmitted continuously. The transmitter does not wait for an ACK after sending a frame; as soon as it has completed sending one, it begins sending the next frame. The acknowledgment for a frame arrives after a round-trip delay, defined as the time interval between the transmission of a frame and the receipt of an acknowledgment for it. During this interval, N - 1 other frames have also been transmitted. When a NAK is received, the transmitter stops sending new frames. It backs up to the frame that is negatively acknowledged and resends that block and the N - 1 succeeding blocks. At the receiver, the N - 1 received frames following an erroneously received block are discarded regardless of whether or not they are error-free. The size of the buffer at the receiver is one frame, as it contains the sequence number of the next expected block to keep the order.
It will discard all blocks following an erroneous block until the block is correctly received [90]. Due to the continuous transmission and retransmission of data blocks, the GBN ARQ is more efficient than the SAW ARQ. Its throughput efficiency depends on the round-trip delay N. It performs effectively on channels where the data rate is not too high and the round-trip delay is small. However, it becomes inefficient for channels where the data rate is high and the round-trip delay is large. Its throughput efficiency drops rapidly as the channel error rate increases. The inefficiency of the GBN ARQ protocol is caused by the


retransmission of many error-free frames following an erroneous block. This inefficiency can be overcome by using a selective retransmission strategy. Selective-Repeat is another continuous ARQ scheme, where blocks are continuously transmitted. In an ideal SR ARQ system, the transmitter only resends those frames that are detected in error. This ARQ technique, unlike GBN, accepts blocks out of sequence but reorders the received blocks and delivers them in sequence to the higher layers. Timers are also used in the SR scheme, but during retransmissions, which can be initiated by the reception of a NAK or a time-out for a particular block, unlike GBN, only that block is retransmitted. The receiver stores out-of-sequence blocks in its buffer, so that when an in-sequence block arrives, it can be relayed to the higher layers. Hence, buffering is an essential requirement of the SR protocol, with buffers at both ends of the link, unlike GBN. SR ARQ maintains a high throughput over a wide range of bit error rates. However, to achieve this ideal throughput efficiency, extensive buffering (theoretically infinite) is required at the receiver, since ordinarily blocks must be delivered to the user in correct order. If a finite buffer is used at the receiver, buffer overflow may occur, which would reduce the throughput of the system. However, if sufficient buffer store is provided at the receiver and if buffer overflow is handled properly, the SR ARQ still significantly outperforms the two other types of ARQ in systems where the data transmission rate is high and the round-trip delay is large. In the radio channel, where error rates are high, the increase in retransmissions due to erroneous blocks results in a large number of duplicates, decreasing the efficiency of the protocol. Nevertheless, SR still yields the highest throughput of the three protocols outlined.

4.3.3 Hybrid ARQ

ARQ methods are indispensable in providing highly reliable communications in data transport systems. However, when channel conditions are poor, systems that use only ARQ suffer a degradation in throughput performance due to an increase in the frame error rate. Accordingly, in recent years, research has been successfully performed to merge FEC coding and ARQ into Hybrid ARQ systems to provide reliable communications with high throughput [93], [15], [94], [95]. In hybrid ARQ protocols, the purpose of FEC is to reduce the frequency of retransmissions by correcting error patterns that frequently occur, such as small error bursts. When a


large burst of errors occurs, it is then left to the ARQ mechanisms to pass the information across the channel. Hybrid ARQ systems are divided into two main classes, called type-I hybrid ARQ and type-II hybrid ARQ systems.

Type-I Hybrid ARQ

Type-I systems pack all coded bits (both information and redundant bits) into single packets for transmission to the receiver. The aim of the type-I scheme is to detect and correct errors using FEC. When a codeword is detected to be in error, the receiver attempts to correct the errors. However, if the error pattern is uncorrectable (such as a large burst), the receiver discards the received codeword and sends a request for a retransmission. The contents of the first and any repeat frames are identical. In type-I systems, the amount of parity-check bits is higher than that of pure ARQ schemes, as it is required to perform both error detection and correction, unlike conventional ARQ where only detection is performed. Hence, when the channel error rate is low, type-I systems will be inefficient, carrying unnecessary overheads. However, when the channel error rates are high, they have the advantage over pure ARQ systems through reduced retransmissions, as FEC is used.

Type-II Hybrid ARQ

Type-II systems pack information and redundant bits into separate packets (the information bit packet includes bits for error detection) and send only the information bit packet on the first transmission. If errors are detected at the receiver, then a repeat packet of redundant bits is transmitted. In this way, the contents of the first packet differ from the contents of all subsequently transmitted repeat packets, which minimizes overheads. At the receiver, information and redundant bit packets are combined to perform error correction decoding. If correction is not successful, a second retransmission is requested, which may be a repetition of the original codeword or another block of parity-check bits, depending on the algorithm adopted.
Type-II systems have the advantage that subsequent retransmissions allow the error correction capacity of the received code to adjust to varying channel conditions. Hence, type-II hybrid ARQ can be seen as an adaptive hybrid ARQ system, adapting to the channel characteristics. When the channel is good, the system


behaves like a pure ARQ system, and when the channel degrades, extra parity bits are included to cope with the change of BER. This makes type-II systems effective in improving throughput; in particular, selective-repeat type-II hybrid ARQ systems have been noted as providing especially high throughput [90], [91].
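The throughput comparison of the three basic ARQ protocols of Section 4.3.2 can be made concrete with the standard textbook expressions, assuming a frame error probability P, a round-trip delay of N frame times, and ideal (unlimited) receiver buffering for SR:

```python
def throughput_saw(p, n):
    """Stop-And-Wait: one frame accepted per round trip of n frame times."""
    return (1 - p) / n

def throughput_gbn(p, n):
    """Go-Back-N: every erroneous frame costs n frame retransmissions."""
    return (1 - p) / (1 - p + n * p)

def throughput_sr(p):
    """Selective-Repeat: only erroneous frames are resent (ideal buffering)."""
    return 1 - p

# With N = 10 and a 10% frame error rate, SR keeps 90% throughput while
# GBN drops below 50% and SAW below 10%.
for p in (0.01, 0.1, 0.3):
    print(p, throughput_saw(p, 10), throughput_gbn(p, 10), throughput_sr(p))
```

The gap between GBN and SR widens as N P grows, which is why SR is preferred on links with a high data rate and a large round-trip delay, exactly the regime identified in the text.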

4.4 Transmission requirements and choice of error control protocol

4.4.1 Error control for QoS requirements

A major concern in wireless cellular systems is the control of transmission errors under multi-path fading and multi-user interference. In order to ensure reliable communications and reasonable capacity of these systems, error control must be applied. Error control, on the other hand, has to satisfy the application requirements. In fact, voice, video, image and data transmission have different bit error rate (BER) and delay requirements. Some typical services and their requirements are given in Table 4.1 [96].

Table 4.1 Examples of QoS requirements: low bit-rate (LBR) speech, asynchronous data, facsimile, packet data, LBR low-resolution video, and LBR images, each with a maximum BER and a delay requirement. Speech and low-resolution video are delay-sensitive, the data services are delay-insensitive, and the delay tolerance of image transmission is application-dependent.
Services are divided into two main categories, stream and packet, differentiated mainly by their delay requirements. Stream services, such as voice and video, are sensitive to delay. Therefore, the transmission has to be continuously maintained during the communication (call). Unlike video, some applications of still image transmission through wireless channels can tolerate delays. Video is usually a delay-constrained service. Since video is a real-time service, timely arrival of video source data at the decoder is required; that is, the delay is limited and the error control technique to be used has to satisfy this condition. In applications such


as videophony or teleconferencing, video data has to be delivered to the user even if some frames are not successfully decoded. This may result in black spots in the reconstructed sequence, but the communication may remain acceptable as the image remains intelligible. Still image transmission can still be considered a stream service. However, applications that require high-quality decoded images may not tolerate poor reconstructed image quality as much as they may tolerate delay. Therefore, the need to support image transmission with different QoS requirements makes it necessary to implement adequate error control. Since we are concerned with both requirements, a compromise is a must. Hence, our choice of the error control protocol has to be made with two goals to be satisfied: high-quality decoded images, and acceptable delay. Taking into account the characteristics of the CDMA wireless environment, a variety of coding schemes may be applied to meet the desired requirements. In an interference-limited cellular environment, the number of users a CDMA system can accommodate, i.e., the CDMA capacity, is proportional to the processing gain and the inverse of the bit energy to interference power spectral density ratio Eb/N0. In order to achieve high system capacity in the wireless access of personal communication systems employing CDMA, low values of Eb/N0 are necessary. The wide-band nature of CDMA allows the use of powerful low-rate codes in conjunction with spreading sequences to achieve both low Eb/N0 and bandwidth spreading. For example, rate 1/2 and rate 1/3 convolutional codes with 256 states are used in the IS-95 CDMA system for digital cellular and personal cellular networks [80]. Originally proposed for voice transmission, the IS-95 system is capable of providing a high grade of service. Unlike voice transmission, image and video applications require a higher degree of reliability.
On the other hand, real-time video and some image transmission applications are delay-sensitive. These factors justify the need to use an error control technique suitable for delay-constrained traffic and capable of providing reliable transmission.

4.4.2 Choice of error control protocol

In the mobile radio channel, errors tend to occur in bursts due to the error mechanisms present. An error burst is often characterized by a region of consecutive bits in error followed by a stream of consecutive error-free bits. In most applications, the error bursts


often impose a limit on the intelligibility of the information transmitted. Very often, a BER limit is used (on average 1 bit error in every 1000 bits). This limit is typically used for voice applications. A more stringent limit is imposed for applications such as image and video, which usually require very low transmission error rates. However, achieving these low error rates in wireless channels is very challenging. As previously mentioned, FEC codes are suitable for channels with a consistent BER. Hence, for the mobile channel, characterized by burst errors, it is very difficult to obtain very low transmission error rates using FEC alone. On one hand, FEC codes are designed optimally for a range of BERs, and on the other hand, if interleaving is used, intolerable delay may take place. Therefore, additional error control such as ARQ is needed to ensure reliability. However, ARQ schemes provide very low transmission error rates at the expense of large delays, which are intolerable in reliable (high-quality) image and real-time video transmission. Additional parity-check bits may also be required to ensure reliability. The use of lower-rate codes may be useful for error correction, but the overhead involved must also be taken into account. In this context, implementing a low-rate code to ensure data reliability may result in the user data rate being unbearably low, due to excessive overhead. On the other hand, using a high code rate may leave too many errors in the data stream uncorrected. Hence, in order to strike a compromise, the channel aspects need to be considered in further detail. That is, the fading characteristics must be considered in order to implement a suitable error control protocol. The average fade duration of a signal depends on the propagation frequency and the speed of the mobile. Deep fades are commonly encountered in slowly fading channels such as micro-cellular and indoor radio channels.
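The dependence of fade duration on mobile speed can be quantified. For Rayleigh fading, a standard result (quoted here without derivation) gives the average duration of fades below a threshold ρ, the fade level normalized to the RMS signal level, as

    τ̄ = (e^{ρ²} − 1) / (ρ f_m √(2π)),

where f_m = v/λ is the maximum Doppler frequency for a mobile moving at speed v at carrier wavelength λ. Since τ̄ is inversely proportional to f_m, halving the mobile speed doubles the average fade duration, and with it the expected length of the error bursts discussed next.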
The signal strength at a deep fade is very small and varies very slowly. Therefore, in slowly fading channels, long bursts of data errors can occur. Hence, a portable in an indoor environment, or a mobile moving at low speed, suffers considerably larger error bursts, due to the longer time it spends in a fade, than does a mobile moving at high speed in an outdoor environment. These conditions render FEC coding alone unreliable and make ARQ required [14]. In fact, with FEC only, large bursts of errors will not be correctable. On the contrary, FEC will be inefficient, serving only to impose additional overhead. ARQ-based protocols are more efficient in terms of the overall throughput, but if ARQ alone is used, unacceptable delay will take place. Without ARQ, on the other hand, the bursty nature of the channel errors requires an extensive amount


of FEC overhead to lower the Frame Error Rate (FER) adequately, resulting in a very low throughput, which in turn reduces the reconstructed signal quality. A compromise can be accomplished using hybrid ARQ error control. In mobile data applications, hybrid ARQ protocols have been proven to provide better reliability than pure FEC and a higher throughput than systems with retransmission only. It is in this vein that we consider a type-I hybrid ARQ protocol. It is important to mention that type-II hybrid ARQ is known to be more efficient, but is also more complex than type-I ARQ. In a slow fading environment where the BER of an erroneously received frame is high, powerful channel coding is required. But in this case it might be more efficient to retransmit the data itself rather than parity bits, and furthermore this is much less complex. Therefore, we did not consider type-II hybrid ARQ in this work.

4.4.3 Type-I RS/CC hybrid ARQ error control

In the hybrid scheme, an ARQ protocol is used to obtain a desired error rate. FEC coding is used to correct low-weight error patterns in each message, reducing the number of retransmission requests. However, in order to increase reliability, a large number of retransmissions will still be required, which would result in unacceptable delay for delay-constrained applications. In particular, the throughput efficiency and the mean queuing and block delay times of hybrid ARQ systems are known to be far from satisfactory when Eb/N0 is low [90], a regime which is essential in order to increase the system capacity. On one hand, taking into account the delay constraint, the number of retransmissions required to acknowledge a frame has to be minimized; on the other hand, setting stringent constraints on reliability in an environment characterized by bursts of errors requires the use of powerful channel error control.
To achieve these requirements, a powerful error correcting code can be used together with the powerful error detection capability of the FEC codes. One possible approach to achieve this goal is the use of concatenated codes. Concatenated coding was first proposed by Forney [97] to utilize multiple levels of coding, for the purpose of achieving very low error probabilities. In a two-level concatenated coding scheme, two levels of coding and two levels of interleaving are used to combat channel burst errors. The level of coding and interleaving closer to the channel is called the inner layer, whereas the level outside the inner layer is known as the outer layer. The


inner and outer FEC codes can be convolutional codes or block codes. At the receiving end, the demodulator may produce either hard or soft decisions. In either case, these decisions are fed to the inner deinterleaver, which disperses the channel burst errors into random patterns. The inner FEC decoder is designed to combat the random errors. If the inner FEC decoder cannot correct the word or erroneously decodes it, the decoding errors are bursty in nature, and the outer deinterleaver disperses these errors across adjacent codewords of the outer code. The outer FEC decoder then attempts to correct the remaining errors.

Block-based image compression algorithms such as VB 2D-CELP are sensitive to transmission errors. In fact, the corruption of one block of pixels may cause corruption of several adjacent blocks. Apart from considerations of robustness of the coding scheme to transmission errors, it is desirable to have a coded channel which produces a very low BER (as previously mentioned) without a corresponding loss in the data rate, i.e., a coded channel which offers a high coding gain while being compatible with the use of ARQ strategies. In particular, Reed-Solomon outer and convolutional inner concatenated (RS/CC) coding is known to be capable of providing high error-correction capability [98], especially when combined with an ARQ protocol [17]. Therefore, RS/CC concatenated coding is utilized for the communication system considered in this work. In order to enhance the performance of the RS/CC concatenated coding scheme, a simple type-I hybrid ARQ protocol is implemented. Another benefit of ARQ in a lossy environment is the potential of providing flexible error protection to bit-stream classes of different priorities. This is particularly important for image transmission. In fact, most image coding techniques are not loss tolerant.
For example, if layered coding is used and the higher layer is highly protected, some packets can be lost while still guaranteeing acceptable quality. However, when header information is needed to drive the decoder, the loss of this data would likely cause the decoder to fail to decode the right information.

4.4.4 Delay-limited coding

As previously mentioned, in real-time service, an acceptable delay has to be guaranteed through transmission. As excessive use of ARQ not only increases the transmission delay but also inevitably reduces the number of users that can be supported, for delay-constrained applications it is proposed to truncate the number of retransmissions of the protocol. The delay-limited truncated type-I hybrid ARQ is a special case of hybrid ARQ with a limited number of retransmissions. In comparison with FEC and other hybrid ARQ schemes, the truncated type-I RS/CC hybrid ARQ protocol offers several advantages, as will be seen in the next chapter.

4.5 The system and its model

The system under consideration is patterned after the IS-95 standard [80]. Only the uplink is addressed in our study [99]. However, we consider an enhanced version of IS-95. The modified version has been adopted by several researchers because of improved bit error rate performance. Moreover, further coding and channel error control techniques are proposed in order to improve the system performance. The main difference in terms of coding is that here RS/CC concatenated coding is used. We assume a frequency non-selective, slowly fading Rayleigh channel model.

4.5.1 Uplink transceiver description

A simplified block diagram of the uplink transmitter is depicted in Figure 4.1. The baseband part of the portable transmitter consists of an RS/CC concatenated encoder with an outer interleaver, a mapper, an inner interleaver, and a PN spectrum spreader. The user data is first coded by an RS encoder. Hence, the incoming data bits are grouped into b-bit symbols and encoded first by an RS (n, k, b) code defined over GF(2^b). The output of the RS encoder is then interleaved on a symbol-by-symbol basis to provide burst error protection (outer interleaver). This is followed by rate-1/3 convolutional encoding with constraint length 9 and generating functions (557, 663, 711) [80]. The resulting coded symbols are further spread by mapping groups of six symbols onto 64-ary orthogonal symbols prior to Walsh symbol interleaving to combat fading (inner interleaver), and QPSK PN spreading. The resulting spread spectrum sequence is then transmitted over a frequency non-selective, slowly fading Rayleigh channel in the presence of multiple access interference (MAI) and background noise.

The baseband part of the uplink receiver is shown in Figure 4.2. The received signal for a particular user, which is corrupted by noise and interference from other users, is the input to a non-coherent receiver. It is well known that diversity techniques are efficient


in mitigating the effects of multi-path fading [85], [100]. Two-branch antenna diversity is considered herein, which permits two independent observations at the receiver to combat the deep fading in the mobile radio channel. Therefore, a number of 64-Hadamard correlators after the PN despreading are assigned to each diversity branch. The purpose of assigning a number of correlators to each receiver antenna is to capture different multi-path components in order to minimize the effect of multi-path dispersion. The outputs of the correlators from each diversity branch are then square-law combined (weighted with equal gain). Since 64-ary orthogonal mapping is used, we obtain a group of 64 decision variables. These decision variables are deinterleaved on a symbol-by-symbol basis. This is followed by soft-decision Viterbi decoding. The output of the Viterbi decoder is grouped into b-bit symbols and deinterleaved prior to RS decoding. The outputs of the square-law combiner may be used for power control. A correlator receiver provides optimal non-coherent detection for an M-ary orthogonal signaling system. A detailed description of the receiver can be found in [92]. In addition, reference [101] provides a detailed description of a non-coherent CDMA receiver and the underlying mathematical theory.

The outer interleaver considered is an (N, I) block interleaver. After RS encoding, I consecutive codewords are stored row-by-row in an I x N matrix with I rows and N columns, and then read out column-by-column in symbols. Thus, two consecutive symbols before interleaving are separated by I - 1 other symbols. The deinterleaver performs the inverse operation, where symbols are written into the deinterleaver columns and read out by rows. N should be larger than or equal to the block code length in order to avoid the wraparound effect.
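The (N, I) block interleaver just described can be sketched in a few lines of Python. This is an illustrative toy, not the thesis's implementation; the values N = 8, I = 4 are arbitrary, and the example shows how a burst of I consecutive channel symbols is dispersed so that each codeword sees exactly one corrupted symbol:

```python
def block_interleave(symbols, n, i_depth):
    """(N, I) block interleaver: write I codewords of N symbols each
    row-by-row into an I x N matrix, read out column-by-column."""
    assert len(symbols) == n * i_depth
    return [symbols[r * n + c] for c in range(n) for r in range(i_depth)]

def block_deinterleave(symbols, n, i_depth):
    """Inverse operation: write column-by-column, read row-by-row."""
    assert len(symbols) == n * i_depth
    return [symbols[c * i_depth + r] for r in range(i_depth) for c in range(n)]

N, I = 8, 4
data = list(range(N * I))                    # I codewords of N symbols each
tx = block_interleave(data, N, I)
assert block_deinterleave(tx, N, I) == data  # round trip

# A burst hitting I consecutive channel symbols lands one error per codeword
rx = tx[:]
for k in range(5, 5 + I):
    rx[k] = -1                               # corrupted symbols
decoded = block_deinterleave(rx, N, I)
for cw in range(I):
    assert decoded[cw * N:(cw + 1) * N].count(-1) == 1
```

Two symbols adjacent within a codeword end up I positions apart on the channel, i.e., separated by I - 1 other symbols, in agreement with the description above.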
As previously mentioned, the motivation for using interleaving is to break up the correlation between adjacent channel symbols, thereby presenting the decoder with an independent sequence. Due to the constraints on delay, interleaving will not be perfect and the channel symbols will not be completely uncorrelated. In fact, correlation will remain unless the interleaving matrix is made infinite; the interleaver/deinterleaver size therefore has to be chosen such that some of the effectiveness of the interleaving is traded for a smaller interleaving/deinterleaving delay.


Fig. 4.1 Uplink transmitter block diagram.

Fig. 4.2 Uplink receiver block diagram.


4.5.2 Channel Model

Micro-cellular applications at a carrier frequency of 2 GHz are of interest in this work. We consider slowly moving portables. This results in a Doppler shift much smaller than the carrier frequency and the bit rate. The channel is modeled as a flat fading channel. The popular Jakes model [102] is used to generate time-varying Rayleigh random variables. Only the impact of the Rayleigh fading on the link performance is considered in this work. The combined effect of other channel imperfections is modeled by an AWGN.

The combination of two-branch antenna diversity and power control has been shown in [14] to be effective in mitigating the effect of Rayleigh fading on the uplink, particularly when slow fading is considered. To equalize the received powers, a combination of open and closed loop controls may be used. However, we confine our attention to the closed loop control, because it is the crucial component of any effective scheme to combat Rayleigh fading [85], and consider two antennas at the base station. Symbol-by-symbol inner interleaving is implemented. This interleaving has been shown to give a 1 dB improvement in the BER over conventional bit-by-bit interleaving [100]. Moreover, in slow fading conditions, as power control has been reported to be more effective than inner interleaving [100], the inner interleaver size is kept fixed.

The limiting case of the memoryless channel is also considered in this work. In this case, assuming ideal interleaving is used to make the channel look memoryless, the combination of the inner interleaver/deinterleaver with the non-selective, slowly fading Rayleigh channel is modeled by a sequence of independent Rayleigh distributed random variables with mean squared value normalized to one. In addition, the thermal noise and the effect of the spreading/despreading operation on multiple access interference are modeled as white Gaussian noise.

4.5.3 Simulation parameters

A data rate of 76.8 Kbps is considered. The data frame duration has to be chosen so that it is sufficiently short to allow rapid retransmission, but not too short, in order to avoid the retransmission occurring during the same fade. Examination of the distribution of the length of error bursts in the Micro-cellular CDMA channel at 2 GHz [85] reveals that the mean fade duration is less than 30 msec for portables moving at 1 Km/h. Considering the same portable speed, a data frame duration of 5 msec is considered herein, and a shortened


RS code with 48 data bytes that can correct up to 7 bytes is used. We consider a CDMA system with closed-loop power control only. The power control bits are generated at a rate of 800 bps. The mobile adjusts its transmit power by a fixed step size of 0.5 dB depending on the received power control bit. The performance of the transceiver was studied for two power control step sizes, 0.5 dB and 1 dB. The 0.5 dB power control step size was found to result in a smaller required SNR for the slowly fading channel considered here. The impact of possible channel errors on the performance of the power control scheme is not considered. In considering the required SNR (per antenna), the effect of fast power control gain is not taken into account. In other words, the signal power used in computing the SNR is taken before the fast power control gain.

In order to express the delay in seconds, practical values of the delay components are needed. For convenience, we make the following numerical assumptions for the computation of delay. The propagation delay depends on the entire system, including the network. However, as our motivation is in studying the effect of varying the maximum number of retransmissions and the outer interleaving depth, we neglect the propagation delay. The acknowledgment time Ta is considered as the time taken to decode a frame and generate a NAK or ACK, without including deinterleaving delays. We assume Ta = 20 msec.
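For the memoryless limiting case described in Section 4.5.2, the fading amplitudes can be drawn as independent Rayleigh variates with unit mean-square value. A minimal Python sketch of such a generator (illustrative only, not the simulator used in this thesis):

```python
import math
import random

def rayleigh_samples(n, seed=1):
    """Independent Rayleigh amplitudes with E[r^2] = 1: each sample is
    the magnitude of a complex Gaussian with variance 1/2 per quadrature."""
    rng = random.Random(seed)
    sigma = math.sqrt(0.5)
    return [math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
            for _ in range(n)]

samples = rayleigh_samples(200_000)
mean_sq = sum(r * r for r in samples) / len(samples)
print(f"E[r^2] is approximately {mean_sq:.3f}")
```

For the correlated (quasi-static) case, the Jakes model [102] generates time-varying, correlated samples instead; the independent sequence above corresponds to the ideal-interleaving limit.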

4.6 Summary

This chapter reviewed the wireless transmission environment under consideration. It argued the need for channel error control, reviewed the conventional approaches to it, and presented our choice of error-control protocol based on image transmission requirements.

Chapter 5

Protocol Performance Analysis in CDMA Rayleigh Fading Channels

5.1 Introduction

In this chapter, we study the performance of an error control scheme capable of providing low-delay reliable transmission over DS-CDMA Rayleigh fading channels. To this end, a truncated type-I hybrid ARQ protocol based on the selective repeat strategy is used in a concatenated Reed-Solomon/Convolutional coding scheme. To enhance the performance of the concatenated scheme, interleaving is used. However, in order to reduce the transmission delay, partial interleaving may be more attractive when used along with a finite number of ARQ retransmissions. For the purpose of investigating the system performance under low-delay requirements, the error control protocol performance is studied as a function of the emphasized parameters for a memoryless channel as well as for a highly correlated quasi-static channel. In the presence of non-independent channel errors, the analysis is based on Markov modeling to derive performance metrics, taking into consideration interleaving and the number of ARQ retransmissions.

In a DS-CDMA environment, contributions dealing with the performance of hybrid ARQ schemes have been mostly concerned with channels where errors are independent [17], [94], [103]. In contrast to the study of performance of hybrid ARQ schemes in memoryless channels, only a limited amount of work has been accomplished for channels with dependent errors. In [104], the non-independent channel is reviewed, noting the interaction between


channel modeling and error control, with a focus on FEC. The performance analysis of a type-II hybrid ARQ protocol has been presented in [105] for a non-stationary channel, considering the interdependence between packets and not symbols. Therefore, these models are difficult to match with the mobile channel parameters. Using concatenated coding, in [106] an analytical solution for the throughput of a hybrid Selective-Repeat (SR) ARQ scheme is derived for point-to-multipoint communication over broadcast channels. Recently, the throughput performance of GBN and SR protocols in Markov channels with unreliable feedback has been evaluated [107]. This study is not concerned with FEC coding, unlike [108], where the exact throughput of the GBN protocol is evaluated using Renewal theory.

The performance of RS codes with dependent symbol errors has been relatively widely studied. In [109], based on some channel statistics, analytical formulae have been derived for the word, symbol, and bit error rates of an RS coding system in a bursty environment, where no ARQ is used, and only the cases of ideal interleaving or no interleaving are considered. In [110], where no ARQ is employed, the performance is determined from the measured errors at the input of the RS decoder, and comparisons are made between the cases of random and non-random errors. Considering a type-I hybrid ARQ protocol, a throughput evaluation was presented in [103], where partially interleaved RS codes in a power-controlled DS-CDMA system are used. The performance of RS codes on a bursty-noise channel has also been studied in [111], where ideal code symbol interleaving is assumed. Interleaving on bursty channels and/or in concatenated RS/CC schemes has been studied in different transmission systems and channel conditions [98], [109], [112], [113], [114], [115], [116].
The purpose of previous work was not to investigate the effect of interleaving from a delay point of view, and results are sometimes based on simulations only. Moreover, few authors have investigated the effect of interleaving on transmission delay when ARQ is also used. Usually, a fixed finite degree of interleaving is used, and is sometimes considered enough to randomize the error bursts. On the other hand, the concept of truncating retransmissions in hybrid ARQ protocols has previously been used to reduce the time delay [105], [117], [118]. However, this concept has not been used for the RS/CC type-I hybrid ARQ protocol. In particular, the interesting issue of interleaving versus retransmission truncation has not been previously investigated.

The contribution of this thesis is based on the interdependency between FEC/ARQ and reliable low-delay transmission requirements in a DS-CDMA environment. It consists of the study of the performance of the RS/CC hybrid ARQ protocol over a non-independent fading


channel, taking into consideration the effect of RS symbol interleaving and the maximum number of allowable retransmissions. In our study, the investigation of the aforementioned issues and parameters is conducted for the uplink of the communication system under consideration. Results of throughput, average transmission delay, and protocol error probability are provided for different combinations of the parameters, over a wide range of channel SNRs, for the two channel conditions considered: memoryless and quasi-static.

5.2 Type-I RS/CC hybrid ARQ error control

5.2.1 Principle of the transmission protocol

As previously mentioned, the ARQ protocol investigated in this work is a type-I hybrid ARQ which uses the concatenation of an RS code and a convolutional code for continuous error correction, and the RS error detection capability to request retransmissions. The ARQ scheme is based on the SR strategy. The data transmission is frame-based. After the Viterbi decoder, the bounded-distance RS decoder gives an estimate of the transmitted data frame. If the estimate is found to be error-free, it is delivered to the source decoder and a positive acknowledgment (ACK) is sent to the transmitter. If the RS decoder detects uncorrectable errors, the data frame is discarded and a negative acknowledgment (NAK) is sent as a retransmission request.

When used with an unlimited number of retransmissions, the protocol is referred to as the untruncated protocol. In this case, retransmissions continue until the decoded frame is assumed to be error-free and is delivered to the source decoder. Therefore, extensive buffering is needed. If a finite buffer is used at the receiver, as is the case in practical systems, buffer overflow may occur, which reduces the performance of the system. In delay-constrained applications, the untruncated protocol may lead to unacceptable delay. Therefore, the delay constraint imposes a limit on the maximum number of retransmissions. In the truncated protocol, the process of retransmission continues until the allowed maximum retransmission number is reached or the data frame is successfully accepted. By contrast to the untruncated protocol, the receiver buffer design for the truncated protocol is much simpler because the required buffer size can be precisely predetermined. If the receiver buffer can accommodate the number of frames sent during a round-trip


transmission time period, there is no possibility of buffer overflow.

5.2.2 Interleaving and ARQ truncation

As previously mentioned, two degrees of interleaving are used, namely at the inner and the outer symbol interleavers. The motivation for using inner interleaving is to break up the correlation between adjacent channel symbols entering the decoder, ideally presenting the decoder with an independent sequence. Due to the constraints on delay, interleaving cannot be perfect and the channel symbols will not be completely uncorrelated. In fact, correlation will remain unless the interleaving matrix is made infinite; the interleaver/deinterleaver size therefore has to be chosen such that some of the effectiveness of the interleaving is traded for a smaller delay.

In the RS/CC concatenated scheme, as the lengths of the output error bursts from the Viterbi decoder are widely distributed, we interleave the RS code symbols so that the error bursts are spread among RS codewords. This increases the likelihood that errors can be corrected by the RS decoder; otherwise, a long block code would have to be used. Sufficient interleaving has to be used without increasing the number of frames that need to be retransmitted. In fact, interleaving can make the problem worse, because errors will be spread over more frames, which means that more frames have to be retransmitted.

Using a limited number of retransmissions sets limits on the error statistics, namely on the frame error rate after retransmission. If there are too many error bursts, the FER is too high, and consequently an excessive number of retransmissions is needed. In order to bring the FER down when only a limited number of retransmissions is allowed, the bursts of errors must be spread using interleaving. On one hand, interleaving techniques suffer from the fundamental problem of increasing delay; on the other hand, only a limited number of retransmissions is allowed for delay-constrained applications.
The question is how to choose these parameters in order to provide high system performance with low transmission delay. It is in this vein that we study the effect of interleaving and retransmission truncation on the system performance. The focus of this chapter is twofold: (i) presenting an analytical method for evaluating the performance of the hybrid ARQ scheme using RS/CC concatenation in the presence of non-independent errors, and (ii) studying the effect of interleaving versus ARQ retransmission truncation on the system performance. However, before studying the case of the non-independent channel, we start with the memoryless channel.

5.3 Performance evaluation criteria

We measure the capability of the hybrid ARQ scheme with performance metrics, namely, throughput, average transmission delay, and protocol error probability. Our study assumes that the feedback channel is noiseless.

5.3.1 Reliability

In general, the performance of various ARQ and hybrid ARQ error control protocols is measured by reliability statistics. In an ARQ or hybrid ARQ system, the receiver commits a decoding error whenever it accepts a received block with undetected errors. The reliability of the communication system is quantified by its protocol error probability, denoted as Pr(E). The protocol error probability is the probability that the receiver delivers a message block with undetected errors:

Pr(E) = (probability of the occurrence of undetected errors) / (probability that decoding succeeds)    (5.1)

By "decoding succeeds", we mean that the decoder delivers a packet to the user. Clearly, for an error control system to be reliable, the protocol error probability Pr(E) should be made very small.

5.3.2 Throughput

Throughput is a measure of the effectiveness of a system and can be defined as the average number of information frames delivered to the source decoder per unit time over the total number of frames that could be transmitted per unit time. Let E[T] be the average number of frame transmissions (including the first transmission and retransmissions) required for a frame to be delivered; the throughput η is then defined as

η = 1 / E[T]    (5.2)


5.3.3 Transmission delay and queuing delay

Delay is defined as the amount of time between the input of uncoded information to the transmitter and the output of decoded information from the receiver. Delay in hybrid ARQ systems consists of two components, queuing delay and transmission delay.

- Queuing delay is the delay between the time the message is assigned to a transmission queue (buffer) and the time it starts being transmitted.

- Transmission delay is the delay between the time the message starts being transmitted and the time it is successfully delivered to the user.

When a message block is ready for transmission and the system queue is not empty, the block must wait in the queue until all previous blocks are transmitted. This waiting time is the queuing delay. If the queue is empty when a message block arrives, the block transmission will take place immediately.
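How the retransmission limit shapes E[T] and the delivery probability can be illustrated with a small Monte Carlo sketch of a truncated SR protocol. This is a toy under assumed values (independent frame errors with probability q = 0.2 and at most 4 transmissions, i.e., the memoryless case), not the correlated-channel analysis developed in this chapter:

```python
import random

def truncated_sr_arq(q, max_tx, n_frames=100_000, seed=7):
    """Estimate E[T] and the delivery probability for a truncated SR ARQ:
    each frame is (re)transmitted until accepted or max_tx attempts are used."""
    rng = random.Random(seed)
    total_tx = 0
    delivered = 0
    for _ in range(n_frames):
        for _attempt in range(max_tx):
            total_tx += 1
            if rng.random() >= q:      # frame accepted by the decoder
                delivered += 1
                break
    return total_tx / n_frames, delivered / n_frames

e_t, p_deliv = truncated_sr_arq(q=0.2, max_tx=4)
# For independent errors, E[T] = (1 - q^M)/(1 - q) = 1.248 and the
# delivery probability is 1 - q^M = 0.9984
print(f"E[T] is approximately {e_t:.3f}, delivery probability {p_deliv:.4f}")
```

Truncation caps the worst-case delay at the cost of a small probability of discarding a frame; the correlated quasi-static channel studied later requires the Markov modeling instead of this independence assumption.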

5.4 Performance of FEC scheme over memoryless channel

5.4.1 BER performance

The convolutional encoder under consideration is combined with orthogonal Walsh signaling and non-coherent detection. Square-law metrics are employed in the decoder. This is optimal for Rayleigh fading with symbol interleaving [100]. The bit error probability is upper bounded by [92]:

P_b <= (1/k) Σ_{d=d_f}^{∞} β_d P_2(d)    (5.3)

where k = log2 M for M-ary modulation. The free distance d_f of the convolutional code is the minimum weight, in terms of the number of nonzero Walsh symbols, of the nonzero paths. β_d is the total number of nonzero information bits over all paths that have path weight equal to d. On the Rayleigh fading channel, assuming full interleaving is obtainable to make successive symbols independent in the fading variable, the probability of error in pairwise comparison of the all-zero path with a path that has d nonzero symbols (6 bits/symbol for M = 64) is given by [92]:

5 Protocol Performance Analysis i n CDMA Rayleigh Fading Channels

76

P_2(d) = p^d Σ_{i=0}^{d-1} C(d-1+i, i) (1-p)^i    (5.4)

where p = 1/(2 + γ̄_c) is the error probability for binary decisions between orthogonal signals on the (non-coherent) Rayleigh channel, γ̄_c being the average SNR per code symbol. It corresponds to the probability of error for binary orthogonal signaling on a fading channel without diversity. For m-branch spatial diversity, employed to mitigate the effect of fading, P_2(d) is given by [100]:

P_2(d) = p^{md} Σ_{i=0}^{md-1} C(md-1+i, i) (1-p)^i    (5.5)

where γ_s = E_s/N_0 is the total energy per orthogonal signal accumulated over all diversity branches, and the total SNR is γ = m γ_a, with γ_a the SNR per diversity branch.
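The pairwise error probability can also be evaluated numerically. The sketch below implements the standard closed form for square-law combining of d independent Rayleigh-faded binary orthogonal decisions, P2(d) = p^d Σ_{i=0}^{d-1} C(d-1+i, i)(1-p)^i, as found in classical digital-communications texts; the sample SNR value is an arbitrary assumption, and this is an illustration, not the thesis's exact computation:

```python
from math import comb

def pairwise_error_prob(d, p):
    """P2(d) for two paths differing in d symbols on a fully interleaved
    Rayleigh channel with square-law (non-coherent) combining:
        P2(d) = p^d * sum_{i=0}^{d-1} C(d-1+i, i) * (1-p)^i
    where p is the binary error probability without diversity."""
    return (p ** d) * sum(comb(d - 1 + i, i) * (1.0 - p) ** i for i in range(d))

p = 1.0 / (2.0 + 10.0)   # assumed average symbol SNR of 10
probs = [pairwise_error_prob(d, p) for d in range(1, 8)]
assert abs(probs[0] - p) < 1e-12                       # d = 1 reduces to p
assert all(a > b for a, b in zip(probs, probs[1:]))    # decays with d
```

The rapid decay of P2(d) with d is what makes the free-distance term dominate the union bound at moderate-to-high SNR.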

Fig. 5.1 BER performance of the Rayleigh fading channel with infinite interleaving and one- or two-branch diversity: comparison between simulations and analytical bounds.


In order to test the performance of our transmission model, we compare simulations and analytical bounds on the BER for the fading channel with infinite interleaving, with no RS coding, and for one- and two-branch diversity. BER results are provided as a function of Eb/N0, that is, the total bit energy to noise spectral density on one receive antenna. Figure 5.1 illustrates the bit error probability obtained through simulations and the upper bounds based on (5.3), with and without antenna diversity. When compared with the results provided in [100], we found that the results correspond closely. It is clear that with diversity combining, the two observations from each receive antenna diminish the variance of the channel fading effect. Being more steady and encountering deep fades less often, the combined signal from both receive antennas enables better system performance. We note that for the computation of the upper bounds, the first four weights β_d [100] have been used. Clearly, the bounds become looser at low Eb/N0 if more weights are used in the computation.

5.4.2 Concatenated coding scheme performance

Consider the performance of maximum distance separable (MDS) codes when they are used for error detection in coding schemes with retransmissions. The most important MDS codes are q-ary RS codes of length n = q - 1. RS codes make highly efficient use of redundancy, and block lengths and symbol sizes can be readily adjusted to accommodate a wide range of frame sizes. RS codes also provide a wide range of code rates that can be chosen to optimize performance. In particular, any shortened RS code is also an MDS code. In addition, efficient decoding techniques are available for use with RS codes.

A decoding procedure that corrects all error patterns of weight <= t, where t is the largest integer equal to or less than (d - 1)/2, is called bounded-distance decoding. An RS(n, k, b) code allows the correction of at most t = ⌊(n - k)/2⌋ b-bit symbols. When an RS codeword c is transmitted, channel noise may corrupt the transmitted signal. As a result, the receiver receives the corrupted version c + e of the transmitted codeword, where e is an error pattern of some weight u. The decoder is a bounded-distance decoder; that is, it looks for a codeword within a distance <= t of the received word. If there is such a codeword, the decoder finds it, and if not, the decoder reports "failure". Thus, if u <= t, then the decoder detects and corrects the error e and recovers c (correct decoding). On the other hand, if u > t there are two possible scenarios:


5 Protocol Performance Analysis in CDMA Rayleigh Fading Channels


1. The decoder detects the presence of the error pattern but is unable to correct it (decoder failure).
2. The decoder decodes the received word incorrectly to some other codeword (decoder error).

Thus, the performance of an RS code can be described by two parameters: the probability of decoder error (incorrect decoding) and the probability of decoder failure [119]. Let us consider the following probabilities:

- Pc: probability that a received block contains no error or the error is corrected by the decoder.
- Pdf: probability of decoding failure, that is, the probability that a received block contains an uncorrectable but detectable error.
- Pue: probability of undetected decoding error, that is, the probability that a received block contains an undetectable error pattern.

The probability Pc depends on the channel error statistics, and the probabilities Pdf and Pue depend on both the channel error statistics and the choice of the error-detection code. Pue is normally called the undetected error probability of the code. We denote by Pt the sum of Pdf and Pue. Obviously, the probability of correct decoding and the probability of total error add up to one: Pc + Pdf + Pue = 1. Let Ps denote the b-bit symbol error rate (probability of symbol error) at the input of the RS decoder. The probability of total error Pt at the output of the RS decoder is given by [119]:

Pt = Σ_{m=t+1}^{n} C(n, m) Ps^m (1 - Ps)^(n-m)    (5.6)

A closed-form formula is available to calculate the probability of decoder error Pue. However, its evaluation is computationally intensive for large values of the codeword length n. In [119], a tight upper bound for the decoder error probability of RS codes has been derived, that is, Pue ≤ Pt / t!.    (5.7)


The symbol error probability at the input of the RS decoder is upper bounded by Ps ≤ 1 - (1 - Pb)^b,    (5.8)

where Pb is the bit error probability at the output of the Viterbi decoder. Using the bit-error probability bound for the independent (infinite interleaving) Rayleigh fading channel (5.3), and the upper bound on the symbol error rate at the input of the RS decoder (5.8), we can obtain from equations (5.6) and (5.7) tight upper bounds on Pt and Pue as functions of Eb/No. These two tight upper bounds characterize the overall error rate performance of the concatenated coding scheme. Figure 5.2 shows the performance of the concatenated coding scheme for the transmission system under consideration. The computation of the upper bound is based on (5.3) in conjunction with (5.6) and (5.8). Simulation results are given with and without the use of outer interleaving. As previously mentioned, in the RS/CC concatenated scheme, the effect of outer interleaving is to randomize the bursts of errors at the output of the Viterbi decoder, and consequently make them more amenable to correction by the RS decoder. If a codeword is interleaved to degree I, then two consecutive symbols of a codeword are spaced apart by I - 1 symbol times. Simulation results in Figure 5.3 show how interleaving reduces the probability of total error, and hence increases the error correction capability of the RS decoder. However, as will be seen later, the interleaver increases the system complexity and may not be necessary to maintain a required performance.
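Under the independent-symbol-error assumption, the chain of quantities behind these bounds can be sketched numerically; the function names, example parameters, and the use of a 1/t! (McEliece-Swanson) factor for the decoder-error share are our assumptions about the bound in [119]:

```python
from math import comb, factorial

def symbol_error_bound(pb, b):
    """Ps bound: a b-bit symbol is wrong unless all b bits survive,
    giving Ps <= 1 - (1 - Pb)**b."""
    return 1.0 - (1.0 - pb) ** b

def prob_total_error(n, t, ps):
    """Pt: probability of more than t symbol errors in an n-symbol word,
    assuming independent symbol errors with rate ps."""
    return sum(comb(n, m) * ps**m * (1 - ps)**(n - m)
               for m in range(t + 1, n + 1))

def undetected_error_bound(n, t, ps):
    """Bound on Pue: a word with more than t errors is decoded to a
    wrong codeword with probability at most 1/t! (McEliece-Swanson)."""
    return prob_total_error(n, t, ps) / factorial(t)

# Example: RS code with n = 63, t = 3 after a Viterbi stage with Pb = 1e-3.
ps = symbol_error_bound(1e-3, b=6)
assert 0.0 < undetected_error_bound(63, 3, ps) < prob_total_error(63, 3, ps) < 1.0
```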

5.5 Performance of the hybrid ARQ protocol on a memoryless channel

In this section, the performance of the type-I hybrid ARQ with RS/CC concatenated coding is evaluated assuming that the forward channel is memoryless and that the feedback channel is noiseless. The performance criteria considered are throughput, protocol error probability and average transmission delay.


Fig. 5.2 Concatenated coding scheme probability of total error Pt: comparison between simulation results and upper analytical bound.


Fig. 5.3 Concatenated scheme probability of total error Pt as a function of RS interleaving depth I; I = 1 corresponds to no interleaving.

5.5.1 Throughput efficiency

The evaluation of throughput for a hybrid ARQ system is relatively straightforward compared to the evaluation of other performance measures. We consider the SR strategy with an infinite buffer at the receiver to store error-free codewords when a received block is detected in error. The probability of retransmission request Pr is the probability that a given transmission causes the generation of a retransmission request because of an uncorrectable detected error pattern. Hence, Pr is given by the probability of decoder failure Pdf. Then, the average number of transmissions needed for a frame to be delivered to the source decoder is E[T] = 1 / (1 - Pdf).


Hence, the throughput efficiency of type-I hybrid ARQ is:

Notice that, unlike other multiple access techniques, for a CDMA link no normalization of the throughput is needed, particularly since no comparison with another protocol is performed. We see that the throughput does not depend on the round-trip delay factor. However, high throughput performance can be achieved at the expense of extensive buffering. If a finite buffer is used at the receiver, as in most practical systems, buffer overflow may occur, which leads to a reduction in the throughput performance of the system. In a hybrid ARQ system, the throughput efficiency is affected by the length of the message frames, the error correction capability of the code, the retransmission protocol, and the size of the buffers at both the transmitter and the receiver. For delay-sensitive applications, if the protocol is used with a maximum number of retransmissions L, then the average number of transmissions is given by E[T] = (1 - Pdf^(L+1)) / (1 - Pdf).

The throughput follows by simple inversion of E[T].
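As a numerical sketch of both E[T] expressions and the resulting throughput (the overall code rate argument `rate` and the function names are our assumptions):

```python
def avg_transmissions(p_df, L=None):
    """E[T]: expected number of transmissions per frame for SR ARQ with
    retransmission probability p_df.  L=None means unlimited
    retransmissions; otherwise at most L retransmissions are allowed."""
    if L is None:
        return 1.0 / (1.0 - p_df)                   # geometric series
    return (1.0 - p_df ** (L + 1)) / (1.0 - p_df)   # series cut at L+1 terms

def throughput(p_df, rate, L=None):
    """Throughput efficiency: overall code rate divided by E[T]."""
    return rate / avg_transmissions(p_df, L)

# Truncation can only lower the average number of transmissions:
assert abs(avg_transmissions(0.1) - 1 / 0.9) < 1e-12
assert avg_transmissions(0.1, L=2) < avg_transmissions(0.1)
```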

5.5.2 Protocol error probability

The protocol error probability is the probability that the receiver delivers a packet with undetected errors when the error protocol is in use. For applications where power and transmission delay are not critical, the protocol error probability Pr(E) must be minimized because error-free transmission is a basic requirement. For applications sensitive to delay, the protocol error probability may be traded off subject to a grade-of-service requirement. Releasing any constraint on the transmission delay, Pr(E) has to be minimized in the protocol under consideration. A decoded packet is delivered to the source decoder only if it contains no errors or contains an undetectable error pattern. Since an undetectable error


pattern can occur on the initial transmission of a packet or on any retransmission, Pr(E) is given by:

We see that the reliability of the protocol is determined by both the error detection capability and the error correction capability of the RS code. If the error-detection code is properly chosen, Pue can be made very small relative to Pdf, and hence Pr(E) can be made very small. Notice that when a detectable error pattern occurs, the received packet is not accepted by the receiver and a retransmission is requested. For delay-sensitive applications we can consider the same scheme with L retransmissions only.

Pr(E) = Pue + Pdf Pue + Pdf^2 Pue + ... = Pue / (1 - Pdf) = Pue / (Pue + Pc)

With retransmission truncation, if the maximum number of retransmissions is reached, the packet may be delivered even with uncorrectable errors. Therefore the protocol error probability is given by

Pr(E) = Pue (1 - Pdf^(L+1)) / (1 - Pdf) + Pdf^(L+1)
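As a numerical sketch of the untruncated and truncated protocol error probabilities (the function name is ours; the truncated case adds the probability that the frame is force-delivered after the last allowed retransmission still fails):

```python
def protocol_error_prob(p_ue, p_df, L=None):
    """Pr(E): probability that a frame is delivered with errors.
    Untruncated (L=None): undetected errors on any attempt,
        Pr(E) = p_ue * (1 + p_df + p_df**2 + ...) = p_ue / (1 - p_df).
    Truncated at L retransmissions: the geometric series is cut off, and
    a frame reaching the last attempt is delivered even when errors are
    detected, adding a p_df**(L + 1) term."""
    if L is None:
        return p_ue / (1.0 - p_df)
    series = (1.0 - p_df ** (L + 1)) / (1.0 - p_df)
    return p_ue * series + p_df ** (L + 1)

# Truncation trades reliability for bounded delay:
assert protocol_error_prob(1e-8, 0.1, L=3) > protocol_error_prob(1e-8, 0.1)
```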

Hence, reliability considerations for the truncated protocol differ from those for the untruncated protocol and for FEC coding. In the untruncated protocol, reliability is mainly determined by the undetected error probability. With FEC alone, only uncorrectable errors


are considered. In the truncated protocol, both undetectable and uncorrectable errors are taken into account.

5.5.3 Average transmission delay analysis

In this analysis, we focus on the average transmission delay. The effect of the inner interleaver is not considered; the purpose here is to study the effect of the outer interleaver/deinterleaver and protocol truncation on the transmission delay. Recall that the average transmission delay is defined as the average time needed for a frame to be successfully delivered to the user, with correct or incorrect decoding, plus the corresponding acknowledgment time. This delay depends on several components: the transmission time of a frame (Tf), the propagation time (Tp) and the acknowledgment time (Ta). Tf depends on the frame length in bits, interleaving, coding rate, and baud rate. The transmission time Tf includes the interleaver/deinterleaver delays, as well as the processing time. Given that the frame length in bits is nb, and denoting the baud rate by Rbaud, Tf = nb/Rbaud + Dout, where Dout is the delay caused by the outer interleaver/deinterleaver operation with depth I. Assuming block interleaving is used, the time delay caused by this operation is given by Dout = 2 n b I Tb,

where Tb is the bit duration, and a symbol corresponds to an RS symbol of length b bits. Define Ai as the first (initial) transmission delay, or the time taken to transmit a frame if it is received correctly the first time. In this case, after Ai the data frame is forwarded to the source decoder. On the other hand, define Ar as the time between the reception of an incorrect frame and the reception of the associated retransmission. As an acknowledgment for the frame can be detected after a round-trip delay 2Tp + Ta, these delay times are given by Ai = Tf + Tp and Ar = 2Tp + Ta + Tf.

It follows that for the type-I hybrid protocol the average transmission delay is:


Fig. 5.4 Effect of RS outer interleaving depth I on the protocol error probability of the untruncated protocol.

Fig. 5.5 Protocol error probability of the truncated protocol as a function of maximum number of retransmissions L and outer interleaving depth I.

Dt = Ai + Pdf Ar + Pdf^2 Ar + ... = Ai + Ar (E[T] - 1)    (5.18)
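The average-delay relation Dt = Ai + Ar(E[T] - 1), together with the interleaving-delay term Dout = 2 n b I Tb, can be sketched numerically (parameter names and the example values are ours):

```python
def frame_time(n, b, I, rbaud):
    """Tf = nb/Rbaud + Dout for an outer-interleaved frame of n RS
    symbols of b bits each, outer interleaving depth I, and bit
    duration Tb = 1/Rbaud, with Dout = 2*n*b*I*Tb."""
    tb = 1.0 / rbaud
    return n * b * tb + 2 * n * b * I * tb

def avg_delay(a_i, a_r, p_df, L=None):
    """Dt = Ai + Ar*(E[T] - 1): first-attempt delay plus one extra
    round-trip-and-retransmission delay Ar per repeated attempt (5.18).
    L=None means unlimited retransmissions."""
    if L is None:
        e_t = 1.0 / (1.0 - p_df)
    else:
        e_t = (1.0 - p_df ** (L + 1)) / (1.0 - p_df)
    return a_i + a_r * (e_t - 1.0)

# Deeper interleaving lengthens the frame time and hence every delay term,
# while truncation can only shorten the average delay:
assert frame_time(32, 8, 64, 9600) > frame_time(32, 8, 1, 9600)
assert avg_delay(0.02, 0.05, 0.1, L=3) <= avg_delay(0.02, 0.05, 0.1)
```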

We define the normalized transmission delay as Dt normalized by the first transmission delay Ai. This quantity can be seen as the average number of transmissions, including the first transmission, needed to deliver the error-free estimated frames to the source decoder.

5.5.4 Results and discussion


Figure 5.4 shows the protocol error probability of the untruncated protocol versus Eb/No for different values of the RS interleaving depth I. It is shown that interleaving increases the reliability. However, there is a trade-off: the cost of highly reliable, error-free transmission is an increased transmission delay. For applications where


transmitted power and transmission delay are critical, we may reduce the reliability subject to a grade-of-service requirement. The truncated protocol represents such an approach. Reliability of the truncated protocol is represented in Figure 5.5. Two values for the maximum number of retransmissions are considered: L = 1 and 3. Curves corresponding to the no-retransmission case are also provided and referred to by L = 0. When Eb/No is high, the channel is in good condition; thus, the performance of the truncated protocol approaches the performance of the untruncated protocol, because in most cases no retransmissions are required when the channel is good. When Eb/No is low, the performance of the truncated protocol approaches the performance of FEC, providing improved power saving and reduced transmission delay.

Fig. 5.6 Untruncated protocol throughput performance: comparison between simulation results and upper theoretical bound.

Fig. 5.7 Truncated protocol throughput performance as a function of maximum number of retransmissions L and outer interleaving depth I.

In Figure 5.6, we have plotted the throughput performance of the untruncated protocol without RS interleaving and with an outer interleaving depth I = 64, as well as the analytical bound on throughput. The computation of the bound is based on bounds provided in Section 5.4.2. It is clear that the throughput of the protocol approaches zero as the channel error rate increases, that is, at very low values of Eb/No. The effect of RS interleaving depth variation on the performance of the truncated protocol is shown in Figure 5.7 for the throughput and in Figure 5.5 for the protocol error probability. The effect of finite interleaving on the probability of total error of the concatenated scheme was also shown in Figure 5.3. A value of I = 1 corresponds to no interleaving. In regard to the effect of outer interleaving we have the following comments:

- There is a certain degradation in performance resulting from finite interleaving.
- The curves cross over at a certain Eb/No value. This shows that when the channel is in poor condition, interleaving makes the problem worse, because symbol errors out of the Viterbi decoder spread over more frames, which means that more frames need to be retransmitted.
- For Eb/No values greater than the cross-over point it is impossible for the truncated protocol to achieve the same reliability as the untruncated protocol even if interleaving is used. However, a reasonable amount of interleaving along with one or two retransmissions can achieve reasonable reliability. This is because the extra transmissions in the untruncated protocol are simply wasted if the goal of the system design is to provide reasonable reliability (say 10^-5) instead of error-free performance (say < 10^-10).

The normalized delay versus Eb/No is shown in Figure 5.8, where a comparison with the analytical bound is performed. In Figure 5.9, results are provided for different outer interleaving depths. It is shown that interleaving does not significantly decrease the number of retransmissions needed for a frame to be accepted at the receiver. The normalized delay of the truncated protocol is also shown in Figure 5.10.
The average transmission delay in seconds versus Eb/No is shown in Figure 5.11. The results clearly show that the transmission delay of the truncated protocol is bounded by the maximum delay due to the limited number of retransmissions. By contrast, the transmission delay of the untruncated protocol (Figure 5.12) can be very long when the channel error rate is high, which occasionally occurs in time-varying fading channels. Finally, in Figure 5.13 we show the average transmission delay of the truncated protocol as a function of reliability. In order to achieve high communication reliability, the


Fig. 5.8 Average normalized delay of the untruncated protocol: comparison between simulation results and upper theoretical bound.

Fig. 5.9 Average normalized delay of the untruncated protocol as function of RS interleaving depth I.


Fig. 5.10 Average normalized delay of the truncated protocol as a function of maximum number of retransmissions L and outer interleaving depth I.


Fig. 5.11 Average transmission delay of the truncated protocol as a function of maximum number of retransmissions L and RS interleaving depth I.



Fig. 5.12 Average transmission delay of the untruncated protocol as a function of RS interleaving depth I.


Fig. 5.13 Average transmission delay of the truncated protocol as a function of reliability, represented for different values of RS interleaving depth I.

untruncated protocol may impose a very long time delay. On the other hand, FEC coding provides a limited delay at the expense of reduced reliability. The truncated protocol with concatenated coding may offer a good mix of ARQ-based protocol and FEC coding. A compromise can be made between the buffer size and the error correction and detection parameters to achieve maximum performance with minimum complexity. Specifically, the truncated protocol can offer better performance than both the untruncated protocol and pure FEC in delay-limited applications. Varying the maximum number of retransmissions and/or the correction capability of the RS code can provide the desired QoS requirement, namely high reliability and/or limited delay.


5.5.5 Summary

In order to achieve communication reliability over a noisy channel, various ARQ and hybrid ARQ protocols may require an unbounded time delay. On the other hand, FEC schemes use codes of reasonable length to obtain limited delay at the expense of reduced reliability. The delay-limited truncated type-I hybrid ARQ is a special case of hybrid ARQ with a limited number of retransmissions. In comparison with FEC and other hybrid ARQ schemes, the truncated type-I RS/CC hybrid ARQ protocol offers the following advantages:

- Decreased complexity: the scheme completely eliminates the buffer overflow which might occur with unlimited retransmissions.
- Bounded delay: most ARQ and hybrid ARQ protocols result in a long time delay in a noisy environment, while a truncated protocol always has a limited delay.
- Good reliability: using concatenated coding along with a limited number of retransmissions can provide reasonable reliability, which in turn depends on both the allowable SNR and the truncation level.

The results discussed in this section suggest that the transmission delay of the untruncated protocol can be greatly reduced by protocol truncation. Specifically, the RS/CC truncated protocol has bounded delay, which is determined by the delay constraint imposed on the coding design. Therefore, the truncated protocol can offer better performance than both the untruncated protocol and pure FEC where a required reliability cannot otherwise be achieved. Outer interleaving has been shown to be unnecessary, especially given that the channel is assumed memoryless.

5.6 Protocol performance evaluation in the presence of non-independent errors

5.6.1 Markovian analysis

The performance of the hybrid ARQ scheme in the presence of non-independent errors is evaluated with a quasi-analytical method that uses two-state Markov modeling of the channel behavior in terms of errors at different levels of the link. The simplified Gilbert-Elliott (GE) model is employed. This model is commonly used to model symbol error bursts


[113]. Based on performance evaluation of the Viterbi decoder in terms of symbol error (b-bit symbol) behavior over the correlated channel, a Markov model is used to derive the RS decoder performance. Two main parameters are emphasized in the analysis: the level of ARQ truncation and the RS interleaving depth. The evaluation of protocol error probability, throughput and average transmission delay is investigated. Unlike simulation-based performance evaluation, the use of Markov modeling is not run-time consuming. Therefore, the investigation of the performance metrics as a function of system parameters can easily be conducted for a wide range of parameters, and conclusions regarding trade-offs can be derived for delay-constrained transmission. The analysis based on simplified GE models to describe the error sequences at different levels of the link is summarized as follows. First, a model is used to describe the error sequences at the output of the Viterbi decoder. In this model, one state represents a symbol in error, and the other an error-free symbol. From this model, a second one can be built to model the error sequences at the output of the RS decoder. In this model, one state indicates that a codeword is in error, and the other that the codeword is error-free or correctable by the RS decoder. Finally, a Markov chain that represents the receiver's decoding status is used to derive the performance criteria of the hybrid protocol.

5.6.2 RS decoder performance

For the RS-coded hybrid ARQ, the receiver accepts a received data frame (after RS decoding) when the received codeword is correctly decoded, or a decoder error occurs. Then the probability of a retransmission request is given by Pdf. The probability of total error Pt is defined as the probability of occurrence of received words with more than t erroneous symbols. Given that P(m, n) is the probability of m symbols in error in an RS codeword of length n, Pt is given by

Pt = Σ_{m=t+1}^{n} P(m, n)    (5.19)

Based on the calculation of the probability of total error Pt, the probabilities of detected and undetected errors [119] can be written as


In order to calculate the probability of total error Pt, we use a simplified Gilbert-Elliott (GE) model [113] to describe the error sequences at the output of the Viterbi decoder. In this model, a good state G represents a b-bit error-free symbol and a bad state B represents a symbol in error. Given that the symbol error rate is Ps and the two-consecutive-symbol error rate is P2s, the transition probability matrix can be written as:

where,

The effect of RS symbol interleaving with depth I (I ≥ 1) is that two consecutive codeword symbols are separated by I symbol slots; I = 1 corresponds to the case of no RS interleaving. The model now treats the interleaved symbols, and the transition probability matrix becomes

where μ denotes the GE channel model memory, which represents the correlation between two consecutive non-interleaved symbols, defined by:

Because the GE channel model has a channel memory parameter, the effectiveness of the RS interleaving can be evaluated. Given that P(m, n) = PG(m, n) + PB(m, n), the steady-state probabilities of being in states G and B are used to calculate P(m, n) using a recursion, where the initial probabilities are given by PB(0, 0) = Ps and PG(0, 0) = 1 - Ps.
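The symbol-level GE model and the effect of interleaving can be sketched directly; the parameterization below (p_BB = P2s/Ps, p_GB = (Ps - P2s)/(1 - Ps)) is one common way to match the stationary symbol error rate Ps and the two-consecutive-symbol error rate P2s, and is our assumption about the matrix the text defines:

```python
def ge_transition_matrix(ps, p2s):
    """Slot-level GE transition matrix, states ordered (G, B), chosen so
    that the stationary bad-state probability is ps and the probability
    that two consecutive symbols are both in error is p2s."""
    p_bb = p2s / ps                     # P(B -> B) = P2s / Ps
    p_gb = (ps - p2s) / (1.0 - ps)      # P(G -> B)
    return [[1.0 - p_gb, p_gb],
            [1.0 - p_bb, p_bb]]

def interleaved_matrix(P, I):
    """Effective matrix between consecutive codeword symbols when RS
    interleaving of depth I places them I slots apart: the I-th power
    of the slot-level matrix."""
    out = [[1.0, 0.0], [0.0, 1.0]]      # 2x2 identity
    for _ in range(I):
        out = [[sum(out[i][k] * P[k][j] for k in range(2))
                for j in range(2)] for i in range(2)]
    return out

P = ge_transition_matrix(ps=0.05, p2s=0.02)
P64 = interleaved_matrix(P, 64)
# Deep interleaving approaches the memoryless channel: both rows tend to
# the stationary distribution (1 - Ps, Ps).
assert abs(P64[0][1] - 0.05) < 1e-6 and abs(P64[1][1] - 0.05) < 1e-6
```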


PG(m, n) = PB(m, n-1) β + PG(m, n-1) (1 - α)
PB(m, n) = PG(m-1, n-1) α + PB(m-1, n-1) (1 - β)    (5.26)

where α and β denote the G-to-B and B-to-G transition probabilities of the symbol error model.
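The P(m, n) recursion can be implemented directly to obtain Pt of (5.19); a sketch, assuming α = P(G→B) and β = P(B→G) as the symbol-level transition probabilities and a stationary start (the function names are ours):

```python
def error_weight_dist(n, ps, p2s):
    """P(m, n) for m = 0..n: probability of m symbol errors in an
    n-symbol codeword on the GE channel, via the recursion on
    P_G(m, j) and P_B(m, j) (m errors in j symbols, channel ending in
    the good or bad state), started from the stationary split
    P_G(0, 0) = 1 - Ps, P_B(0, 0) = Ps."""
    alpha = (ps - p2s) / (1.0 - ps)     # P(G -> B)
    beta = 1.0 - p2s / ps               # P(B -> G)
    pg = [0.0] * (n + 1)
    pb = [0.0] * (n + 1)
    pg[0], pb[0] = 1.0 - ps, ps
    for _ in range(n):
        new_g = [0.0] * (n + 1)
        new_b = [0.0] * (n + 1)
        for m in range(n + 1):
            # next symbol error-free: error count m unchanged
            new_g[m] = pg[m] * (1.0 - alpha) + pb[m] * beta
            # next symbol in error: error count increments
            if m > 0:
                new_b[m] = pg[m - 1] * alpha + pb[m - 1] * (1.0 - beta)
        pg, pb = new_g, new_b
    return [pg[m] + pb[m] for m in range(n + 1)]

def prob_total_error_ge(n, t, ps, p2s):
    """Pt of (5.19): probability of more than t symbol errors."""
    return sum(error_weight_dist(n, ps, p2s)[t + 1:])

dist = error_weight_dist(8, 0.05, 0.02)
assert abs(sum(dist) - 1.0) < 1e-12      # a proper distribution
```

Starting from the stationary split keeps the per-symbol marginal error rate at Ps, so the mean of the distribution is exactly n·Ps.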

PG(m, n) and PB(m, n) represent the probability of m errors in n symbols with the channel ending in state G or B, respectively. Hence, using Equation 5.26 and the initial probabilities PB(0, 0) = Ps and PG(0, 0) = 1 - Ps, we can find the probability of total error Pt. Now we aggregate the symbol-level model into a codeword-level model. We write a two-state Markov chain where the codewords are of size n symbols. In state G' the codeword has at most t symbols in error, while in state B' it has more than t symbols in error. By definition, the transition probabilities are defined as:

PG'G' ≜ P(≤ t errors in codeword i | ≤ t errors in codeword (i-1))
PG'B' ≜ P(> t errors in codeword i | ≤ t errors in codeword (i-1))
PB'B' ≜ P(> t errors in codeword i | > t errors in codeword (i-1))
PB'G' ≜ P(≤ t errors in codeword i | > t errors in codeword (i-1))    (5.27)

Following the analysis in [120], one can derive the transition probabilities of the model. To this end, using the symbol error model, the probabilities of m errors in n symbols with the channel starting in state G or B are needed; these are given by the same recursion with initial probabilities P'G(0, 0) = 1 and P'B(0, 0) = 1, respectively. From the definitions of P'G(m, n) and P'B(m, n), one can finally get the transition probabilities of the codeword error model:


PG'B' = 1 - PG'G',    PB'G' = 1 - PB'B'

where PC = P(≤ t errors in a codeword). Using the transition probabilities of the codeword-level model, the problem now reduces to determining the performance metrics of the protocol, used in evaluating the global system performance.

5.6.3 Reliability, throughput and transmission delay

The metrics used to evaluate the performance of the hybrid protocol are the throughput, the protocol error probability and the average transmission delay. In order to derive these performance metrics, the receiver decoding status is modeled as a two-state Markov chain. State s is the absorbing state, corresponding to successful decoding, with Pss = 1. If upon the reception of a retransmission the decoder results in an unsuccessful decoding, retransmissions continue (the Markov chain remains in state r, with probability Prr) until successful decoding occurs or the maximum number of retransmissions is reached. The transition probability matrix of the model (Eq. 5.30) is basically determined by Prr.
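A sketch of this receiver decoding-status computation: Prr is taken as the M-step transition probability of remaining in the bad-codeword state, and E[T] follows by accumulating failure probabilities (the matrix layout, function names, and example values are our assumptions):

```python
def m_step_retransmission_prob(P_cw, M):
    """Prr: probability that a frame retransmitted M frame slots after a
    failed attempt fails again -- the M-step probability of staying in
    the bad-codeword state.  P_cw is the 2x2 codeword-level transition
    matrix with states ordered (G', B')."""
    out = [[1.0, 0.0], [0.0, 1.0]]          # 2x2 identity
    for _ in range(M):
        out = [[sum(out[i][k] * P_cw[k][j] for k in range(2))
                for j in range(2)] for i in range(2)]
    return out[1][1]

def expected_transmissions(p_first_fail, p_rr, L=None):
    """E[T] from the receiver decoding-status chain: the first attempt
    fails with p_first_fail, each later attempt with p_rr, and
    retransmissions stop after L of them (L=None: unlimited)."""
    total, fail, k = 1.0, p_first_fail, 0
    while fail > 1e-15 and (L is None or k < L):
        total += fail                        # another attempt is made
        fail *= p_rr
        k += 1
    return total

# For large M the M-step probability approaches the stationary
# bad-state probability of the codeword chain:
P_cw = [[0.9, 0.1], [0.6, 0.4]]
assert abs(m_step_retransmission_prob(P_cw, 50) - 1 / 7) < 1e-9
# When both failure probabilities coincide (memoryless case), E[T]
# collapses to the geometric result 1 / (1 - Pdf):
assert abs(expected_transmissions(0.1, 0.1) - 1 / 0.9) < 1e-12
```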

Throughput analysis

The average number of transmissions and retransmissions (not exceeding L) needed before a data frame is delivered to the user is given by


Finding the throughput expression reduces to the evaluation of the probability P(T ≤ i). Let M be the number of frames that can be transmitted during one round-trip delay period. Using the M-step transition probability of being in state r, M time frames after being in state r, E[T] is obtained as:

where.

The throughput formula follows by simple inversion of E[T].

Reliability analysis

Reliability of an ARQ scheme is characterized by Pue, the probability that the receiver delivers a frame with undetected errors. Therefore, the higher the probability of undetected errors Pue, the lower the reliability, and vice versa. In the case where retransmissions are not truncated, the protocol error probability is simply the product of Pue and the average number of transmissions. However, as retransmissions are limited to L, one has to treat the last retransmission differently from the previous ones in the calculation of the protocol error probability. In fact, at the last retransmission, even if errors are detected, the frame is delivered (even with uncorrectable errors). The protocol error probability is therefore given by:

where Prr is given by Equation 5.33.

Delay analysis

Given that the frame length in bits is nb, and denoting the baud rate by Rbaud, the frame transmission time including interleaving/deinterleaving delays is given by Tf = nb/Rbaud + Din + Dout, where Dout is the delay caused by the outer interleaver/deinterleaver operation with depth I, and Din is the delay caused by the inner interleaver and its corresponding


deinterleaver, with degree J. Assuming block interleaving is used, the interleaving/deinterleaving operations have the following properties:

- The time delay caused by the outer block interleaver/deinterleaver is Dout = 2 n b I Tb, where Tb is the bit duration, and a symbol corresponds to an RS symbol of length b bits.
- The time delay caused by the inner block interleaver/deinterleaver is Din = n b J Tb, where a symbol corresponds to a Walsh symbol.

The interleaving delays could be further reduced if periodic interleaving were used rather than block interleaving. Delays Ai and Ar are given by

The average transmission delay is given by Equation 5.18. As an acknowledgment of a frame can be detected after a round-trip delay 2Tp + Ta, M = (2Tp + Ta)/Tf frames can be transmitted during this period of time. This value corresponds to the parameter M in Equation (5.33).

5.6.4 Results and discussion

Our focus is on studying the effect of the outer interleaver and ARQ retransmissions on the system performance. As previously mentioned, the inner interleaver depth is kept constant throughout the study. The Walsh symbols are interleaved on a group basis, where a group of symbols corresponds to the frame size. The inner interleaver matrix is of size 32 x 8. The truncated protocol has been simulated for three values of the maximum number of retransmissions (L = 1, 2, and 3). The untruncated protocol has also been simulated. Results are mainly expressed as a function of Eb/No. The results can also be represented as a function of the number of users, using an approximating formula to calculate K, the number of users in a single cell. One important practical consideration is how large the outer interleaving depth should be in order to be considered infinite. This idealized assumption may result in excessive memory requirements and delay. Therefore, it is not possible to eliminate the memory


entirely, but only to reduce the burst severity. For investigating the effect of outer interleaving, a wide range of interleaving depth values I has been used. These values range from I = 1, corresponding to the case of no interleaving, to I = RS codeword length, which approximates the case of ideal interleaving. When L = 0, the system performance corresponds to that of the concatenated scheme. This performance, as a function of the RS interleaving depth I, can be seen in Figure 5.14, where the channel memory varies as a result of using different values of Eb/No. The curves show that interleaving improves the performance by decreasing the total error rate Pt. Furthermore, it is shown that the Eb/No required to achieve a desired value of Pt decreases as the interleaving depth I increases. If the purpose is increasing reliability, excessive interleaving has to be used to achieve low total error rates, which on the other hand will still not be sufficient for applications with high reliability requirements. The hybrid ARQ protocol is capable of increasing reliability. Measured in terms of the protocol error probability, reliability is represented in Figure 5.15, where no limit is set on the number of retransmissions performed. First we note the negative effect of interleaving at very low Eb/No values, where, as expected, interleaving will only cause the errors to be spread over more frames. This effect is also clear on the throughput performance (Figure 5.16). Beyond the point where the curves intersect, increasing the interleaving depth shows a significant increase in reliability. This increase in reliability is expected to result in degradation of the transmission delay. Figure 5.17 shows the average transmission delay as a function of Eb/No. It is shown that the untruncated protocol has unbounded delay.
However, on time-varying fading channels, the BER can occasionally exceed a certain limit, which in turn results in requests for more retransmissions. Thus, the delay can be very long for certain frames, resulting in intolerable values for delay-constrained applications. For the purpose of comparing simulation results with quasi-analytical ones, we consider the case of no interleaving (I = 1) and the case referred to as ideal interleaving (I = 64). Simulation results are based on delivering 10000 correctly received data frames to the user. The throughput performance of the untruncated protocol (L = ∞) is compared in Figure 5.18. Two degrees of comparison are provided: simulation versus analytical results, and the effect of RS interleaving. Similarly, the reliability performance is provided in Figure


Fig. 5.14 Concatenated scheme performance: probability of total error Pt as a function of RS interleaving depth I.


Fig. 5.15 Protocol error probability of the untruncated protocol as a function of RS interleaving depth I.


Fig. 5.16 Untruncated protocol throughput performance as a function of RS interleaving depth I.


Fig. 5.17 Untruncated protocol delay performance as a function of RS interleaving depth I.



5.19, and the average transmission delay performance is provided in Figure 5.20. Fixing the number of correctly received frames in the simulations is the reason why we note a difference between simulation and analytical results when the interleaving depth is small. Considering the case when no RS interleaving is performed, the simulation and analytical curves intersect. Below the intersection point, the throughput obtained by simulation is slightly higher than the analytical values (Figure 5.18), the protocol error probability is lower (Figure 5.19) and the average transmission delay is also lower (Figure 5.20). Therefore, a higher number of frames would need to be simulated in order to obtain very good estimates of the performance metrics via simulation. However, this is one of the motivations for using the Markovian analysis, namely to avoid excessively time-consuming simulations. The Gilbert-Elliott model does not consider the wraparound effect [87] that might occur when the interleaving depth is small. On the contrary, for a fixed interleaver depth value, the model considers that the interleaver ideally separates two adjacent symbols before transmission by I - 1 symbols after interleaving. Therefore, the model gives optimistic results for low values of interleaving depth in comparison with simulation results. When the interleaving depth is high, the results correspond closely, and the differences between simulation and analytical results are mainly due to the fact that more frames need to be simulated in order to obtain good estimates, particularly since the simulation effectively implements the interleaving. In order to compare simulation and analytical results obtained for the truncated protocol, we consider the case of no interleaving. In Figures 5.21, 5.22 and 5.23, we respectively show the throughput, reliability, and delay performances of the truncated protocol.
Conclusions similar to those drawn for the untruncated protocol can be derived. It is worth adding that the results correspond even better than in the untruncated case. This is because truncation is used and hence better estimates are obtained through simulation. We also show the correspondence of the results for reliability when the ideal interleaving case is considered (Figure 5.24). In conclusion, the close correspondence between analytical and simulation results justifies relying on the analytical method to evaluate the protocol performance over a wide range of parameters, namely the interleaving depth and the maximum number of retransmissions of the truncated protocol. If the purpose is to increase reliability, degradation in throughput and delay is expected. Figure 5.27 provides a clear look at the effect of interleaving and ARQ truncation on the


Fig. 5.18 Throughput of the untruncated protocol: comparison between simulation and analytical results.


Fig. 5.19 Protocol error probability of the untruncated protocol: comparison between simulation and analytical results.


Fig. 5.20 Average transmission delay of the untruncated protocol: comparison between simulation and analytical results.




Fig. 5.21 Throughput of the truncated protocol: comparison between simulation and analytical results for the no-interleaving case.

Fig. 5.22 Reliability of the truncated protocol: comparison between simulation and analytical results for the no-interleaving case.

Fig. 5.23 Average transmission delay of the truncated protocol: comparison between simulation and analytical results for the no-interleaving case.

Fig. 5.24 Protocol error probability of the truncated protocol: comparison between simulation and analytical results for the ideal interleaving case.



Fig. 5.25 The system throughput for the different allowed maximum numbers of retransmission attempts L.


Fig. 5.26 Effect of the outer interleaving on the system throughput for the different allowed maximum numbers of retransmissions L.

system reliability. As expected, the reliability of the truncated protocol is bounded by that of the untruncated one. However, it is impossible to achieve reliability at very low Eb/No values. The untruncated protocol is capable of doing so, but at the expense of intolerable delays and extremely low throughput, as can be seen in Figure 5.25. Figure 5.25 shows the effect of varying the maximum number of retransmissions on the throughput. The minimum value of the throughput decreases with increasing L. For L = ∞, the throughput may approach zero when the channel conditions are very poor. The effect of interleaving is presented in Figure 5.26. As we have seen in the case of the untruncated protocol, interleaving leads to higher throughput only when the channel is not in very poor condition, because in that case interleaving only spreads the errors over more frames that cannot be corrected by the RS decoder. To achieve a certain reliability, in other words a certain protocol error probability, different combinations of the parameters can be selected at the expense of a possible increase


Fig. 5.27 Protocol error probability of the truncated protocol for the different allowed maximum number of retransmissions L.


Fig. 5.28 Truncated protocol delay performance for the different allowed maximum numbers of retransmissions L, and two degrees of interleaving depth I.

in the required Eb/No. For example, a given protocol error probability can be achieved at approximately 4 dB using a maximum of two retransmissions and ideal interleaving. To achieve the same reliability with no interleaving, a value of Eb/No = 4.5 dB is required. The same reliability can also be obtained with no interleaving, with L = 1, using an Eb/No of approximately 5 dB. Delay degradation follows, as can be seen in the delay performance results shown in Figure 5.28. Hence, for a given value of Eb/No, reliability can be increased either by using interleaving or by allowing more retransmissions. In general, using more retransmissions decreases the throughput but yields lower delay values. Fixing the number of retransmissions to one, for example, and increasing the interleaving depth would increase reliability at the expense of additional delay. This degradation is not as severe as for the untruncated protocol, since the truncated protocol always results in a bounded delay. However, reasonable capacity can still be achieved with no interleaving while avoiding the memory requirement and the interleaving and deinterleaving delays. Hence, we confine ourselves to the use of no RS interleaving


Fig. 5.29 Average transmission delay as a function of the probability of total error for the different allowed maximum numbers of retransmissions L and no interleaving.


Fig. 5.30 Average transmission delay as a function of reliability for the different allowed maximum numbers of retransmissions and no interleaving.

with a limited number of retransmissions. In summary, for the no-interleaving case, we show the average transmission delay as a function of the probability of total error in Figure 5.29, and as a function of the protocol error probability in Figure 5.30. These results indicate the system performance that would be obtained if the source decoder relied only on the channel error control. In our work, uncorrected or undetected errors are left to be taken into account by the source decoder. This means that the system reliability considered here will be increased by the error-resilient coding tools implemented in the source coding scheme, as will be seen later. It is important to mention that the use of the truncated protocol is important when no antenna diversity is employed. In fact, the channel FER is higher in the case of one receive antenna at the base station, and more retransmissions are required. As a consequence, the truncated protocol ensures improved throughput and delay performance at the expense of an expected degradation in reliability.



5.7 Summary

We studied the performance of a hybrid ARQ protocol using concatenated coding in a power-controlled DS-CDMA cellular system. For the purpose of providing reliable transmission, a type-I hybrid ARQ protocol is used with a concatenated Reed-Solomon/Convolutional coding scheme. However, for the delay-constrained applications of interest in this work, the protocol is used with a limited number of retransmissions in addition to partial interleaving. In order to investigate the system performance under low-delay requirements, the error control protocol performance was studied as a function of the RS interleaving depth and the maximum number of allowed retransmissions. In the case of a highly correlated quasi-static channel, the analysis is based on Markov modeling to derive the performance metrics, taking into consideration interleaving and the number of ARQ retransmissions in the presence of non-independent channel errors. A modest degree of interleaving along with a limited number of retransmissions can enhance the system performance without an excessive increase in the transmission delay. However, we found that for the quasi-static channel it is preferable not to use interleaving, in order to increase the system capacity while ensuring high throughput, reasonable reliability, and acceptable transmission delay through protocol truncation. The next chapter addresses the performance of the hybrid protocol for the transmission of coded images over the fading channels considered, under low-delay and high-quality requirements.

Chapter 6
Transmission of VB 2D-CELP Coded Images over Noisy Channels

6.1 Error sensitivity analysis

In order to implement coding tools that are resilient to channel errors, an understanding of the ways in which the errors can affect our coding scheme must be gained. However, before describing in detail the effect of channel errors on the VB 2D-CELP compressed bit-stream, we provide a general description of the effects of channel errors on the different coding techniques that are implemented, namely predictive coding, block coding, VQ, and VLC. The term error propagation is used to describe how errors in an individual codeword can cause incorrect decoding of following codewords. Error propagation can also occur in the spatial domain, where an isolated error will affect a spatially localized group of pixels.

6.1.1 Error propagation due to loss of vital information

In most practical image coding schemes, some vital information needs to be transmitted. This typically consists of the image size and details of the coding process such as quantizer settings. Generally, this information is included in a header and sent at the beginning of transmission. Any channel error in this information is often catastrophic and can cause the decoder to fail completely to decode anything meaningful, e.g., if it tries to decode an image of the


wrong size. The only way to combat these effects is to prevent them by using error correction coding with high redundancy, and interleaving to cope with burst errors. In our transmission scheme we suppose that the vital information, namely the codebooks and the sets of adaptive predictors, is available at the decoder side. The only vital information that is image dependent is the size. However, in order to limit the channel effect on this information, it is inherently included in the coded data. This means that at certain positions of the compressed bit-stream, the width or length of the coded image can be extracted using the block-based coding principle and codeword synchronization.

6.1.2 Error propagation due to incorrect predictions

In predictive coding, the value of each coefficient is predicted from previous coefficients. If these previous coefficients are incorrectly decoded, the prediction will also be wrong and thus the errors will propagate to the current and future coefficients.

6.1.3 Error propagation due to loss of codeword synchronization

Many image coding systems use variable length coding strategies where individual codewords are of different lengths. Variable length codes such as Huffman codes are very vulnerable to the transmission errors that occur on the wireless channel. Any bit error can lead to desynchronization of the decoder and cause error propagation. In fact, a channel error within a variable length code may cause the decoder to decode the next codeword in the wrong position, and thus potentially all following codewords may be affected. While error-correcting codes can reduce the number of errors, uncorrected errors will still result in decoder desynchronization and error propagation, which worsens the quality of the decoded images. Prevention of error propagation is hence essential in increasing error resilience.
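The desynchronization effect can be reproduced with a toy prefix code (this three-symbol code is hypothetical, not the codebook of the scheme described here): a single flipped bit corrupts every codeword that follows.

```python
# Hypothetical prefix code used only to illustrate VLC desynchronization.
CODE = {'a': '0', 'b': '10', 'c': '11'}
DECODE = {bits: sym for sym, bits in CODE.items()}

def vlc_decode(bits):
    """Greedy prefix-code decoder; returns (symbols, leftover bits)."""
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ''
    return out, buf

clean = ''.join(CODE[s] for s in 'abca')   # '010110'
print(vlc_decode(clean))                   # (['a', 'b', 'c', 'a'], '')
corrupt = '1' + clean[1:]                  # flip the first bit
print(vlc_decode(corrupt))                 # (['c', 'a', 'c', 'a'], '')
```

Note that the corrupted stream still decodes into valid codewords, so the error is invisible to the decoder itself: every symbol after the flipped bit is wrong even though synchronization is eventually regained.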

6.1.4 Error propagation due to loss of coefficient synchronization

The loss of codeword synchronization due to errors in VLC schemes is often only temporary, and the decoder will usually regain codeword synchronization eventually. However, even when codeword synchronization is regained, the decoder is likely to be decoding the wrong coefficient. In our coding scheme this coefficient refers to a certain block with a specific size at a given position. Loss of coefficient synchronization will cause the decoder to reconstruct


an image block with the wrong size at a wrong position, and the decoder will therefore be in the wrong position for decoding the next codeword. In general, for block coding methods this coefficient synchronization can be subdivided into two forms. Intra-block synchronization refers to which coefficient within a block is being decoded. In our coding scheme a single codeword is used for a block. The loss of the original codeword will only cause the loss of the block size used and of the information about the predictor selected for the block. Therefore, no intra-block synchronization loss is suffered at the decoder. On the contrary, inter-block synchronization, which refers to which block is being decoded, is of major concern, as the effects can be a shift of regions of the picture, giving annoying boundaries between regions with different shifts.

6.1.5 Effect of channel errors on variable block-size coding

Variable block-size coding implemented in the VB 2D-CELP coding scheme offers the potential of a better allocation of the number of bits spent per unit area according to the local detail in the image. Using fixed length codes, however, would not be efficient in terms of rate-distortion performance or in achieving the benefits of variable block-size coding. Therefore, Huffman coding is used in our image coding scheme. In each Huffman code three types of information are multiplexed: block size, prediction filter index, and code-vector index. In general, when coding schemes that implement Huffman coding are used over channels that suffer from uncorrected errors, several problems will be encountered. As the VB 2D-CELP coding scheme is block based and uses Huffman coding, it suffers from the same or similar problems. If an error occurs in a series of VLC data, the codeword boundaries of the coded data will be identified incorrectly by the decoder. Consequently, the decoder decodes subsequent codewords improperly.
In fact, all information multiplexed in the codeword will be wrongly identified. Even when codeword synchronization is regained, the image block positional information is corrupted, causing the following data to be incorrectly decoded. In fact, the decoder may assume an inaccurate sequence of variable block sizes and will decode blocks in the wrong positions. This results in portions of the image being shifted. This form of error propagation can be limited by the use of End of Block (EOB) codes. However, if the largest block size used is still relatively small (say 4 x 4), this would yield a loss


in compression performance, as EOB codes would be inserted frequently. Instead, a larger block may be considered, where several blocks constitute a group of blocks. Typically, a synchronizing codeword is inserted at each boundary between groups of blocks. By the nature of the constraints on synchronization codewords, they need to be long, and thus to avoid excessive redundancy they can only be used infrequently.

6.2 Resilience to channel errors

6.2.1 Robust predictive coding

In predictive coding it is known that errors can propagate seriously unless the predictor is carefully designed. Figure 6.1 shows the typical feedback loop of a predictive decoder in the presence of channel errors e_c(m, n). In the absence of channel errors, the error terms in the excitation, the prediction, and the output will all be zero. If a single error is introduced (e_c(m, n) = δ(m, n)), it will produce a corresponding error in the output. This error will also cause the predictor to give false predictions for future coefficients, since the prediction error is no longer zero, and thus affect future outputs.

Fig. 6.1 Predictive coder feedback loop.

For linear predictors, the effect of channel errors can be modeled by considering the impulse response of the corresponding synthesis filter. Since the synthesis filter is an all-pole recursive filter, it is not guaranteed to be stable in general.
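The stability argument can be checked numerically with a one-dimensional analogue (an illustrative sketch, not the actual two-dimensional predictors of the coder): the synthesis filter 1/(1 - a z^-1) has impulse response a^k, which decays only when |a| < 1.

```python
def synthesis_impulse_response(a, n):
    """Impulse response of the 1-D synthesis filter y[k] = x[k] + a*y[k-1].
    A channel error injected at k = 0 propagates as a^k."""
    y, prev = [], 0.0
    for k in range(n):
        cur = (1.0 if k == 0 else 0.0) + a * prev
        y.append(cur)
        prev = cur
    return y

print(synthesis_impulse_response(0.5, 5))  # decays: [1.0, 0.5, 0.25, 0.125, 0.0625]
print(synthesis_impulse_response(1.2, 5))  # grows: a single error propagates
```

With a stable filter a single injected error dies out geometrically, whereas with an unstable filter it grows without bound, which is the catastrophic propagation discussed below.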


A stable decaying impulse response is necessary to avoid error propagation. If predictors with unstable synthesis filters are used, then channel errors will cause catastrophic propagation, while predictors with stable synthesis filters will cause the propagation of channel


errors to decay to zero. The synthesis filters corresponding to multi-dimensional predictors are generally more stable than the filters corresponding to one-dimensional predictors. This is particularly true for the two-dimensional predictors implemented in our coding scheme. As an example, the impulse response of each synthesis filter is given for one block size. Figures 6.2-6.6 refer to the predictors corresponding to a 4 x 4 block size. They show that the impulse responses of the synthesis filters decay quickly.

6.2.2 Data frame structure

As an ARQ frame-based protocol is used, and because of the previously detailed problems of VLCs, we consider the bit-stream as a sequence of data frames of fixed size rather than a sequence of bits. When a frame is not acknowledged by the ARQ protocol after the maximum number of retransmissions is reached, the data frame is considered to be unavailable. This allows us to avoid decoding a data frame in error, and thus to avoid error propagation and shifting in the decoded data. In this way, erroneous frames need only be spatially situated and concealed. Recall that the VB 2D-CELP system uses variable block-size coding, where the image to be coded is partitioned into base blocks of size equal to the largest block size considered. Each base block can be further subdivided into sub-blocks. The image can further be partitioned into a number of large blocks referred to as slices. Slices consist of a fixed number of base blocks arranged horizontally from the left-hand side to the right-hand side of the image. A number of base blocks grouped together is what we refer to as a Group of Blocks (GOB). A GOB may contain blocks belonging to more than one slice, but it is possible to consider GOBs and slices of equal sizes. A synchronizing code followed by the GOB position is inserted at the beginning of each GOB. Each GOB is detected at the source decoder using this synchronizing code.
However, inside the GOB, variable block size coding with variable length codes is used. In this way, the effect of transmission errors can be restricted to the GOB where transmission errors occur. The data frame is of fixed size. However, since variable block size coding is implemented using VLC, to each data frame there is a corresponding shape in the image that varies according to the image context. This is illustrated in Figure 6.7 where the original image


Figs. 6.2-6.6 Impulse responses of the synthesis filters corresponding to the predictors for the 4 x 4 block size.

Fig. 6.7 Framing of the encoded data.

is partitioned into slices. For simplicity, a GOB is of the same size as a slice. The image is further partitioned into blocks relating to the corresponding data frames. These blocks are the result of organizing the coded bits into data frames of fixed length that constitute the bit-stream. Organizing the bit-stream into frames necessitates a special arrangement in order to avoid the situation where a codeword belongs to two adjacent frames. Therefore, during the coding process, before sending a codeword to the bit-stream, the number of bits in the current frame is counted. As soon as a codeword is found to go beyond the current frame and fall on the boundary of two adjacent frames, padding bits are used in order to complete the data frame, and the codeword is written into the next frame. In Figure 6.7, the current frame is referred to as data frame "n" and the codeword corresponding to the last 2 x 2 block in the current frame is written in frame "n+1". At the decoder side, whenever the decoder fails to get the last codeword in a frame, it knows that the frame has rejected the codeword and that the code is to be read from the


next frame. In this way, we ensure codeword synchronization for all frames, which means that each frame starts with a complete codeword and is independent of the previous one. As expected, padding yields an increase in the source bit rate. This increase is directly related to the data frame size used. Therefore, a reasonable data frame length should be chosen in order for this increase to be insignificant.

One important issue that needs to be considered is resynchronization after an error or string of errors has resulted in the loss of synchronization at the decoder. Since 2D prediction is used, even a momentary loss of synchronization between the encoder and decoder can have disastrous effects on the decoded image. Therefore, when the compressed image bit-stream is transmitted over an unreliable channel, it is extremely important that the decoder have the ability to resynchronize quickly. This ability to quickly resynchronize or to localize the errors is equally important to the effectiveness of other error-resilient tools, such as error concealment. Based on this observation, our simulations are based on the assumption that a slice is of the same size as a GOB. Therefore, the GOB width corresponds to the image width and its height is equal to the base block height.

6.2.3 Decoder error detection

Assume that two possible block sizes are used (say 4 x 4 and 2 x 2); a 4 x 4 block is either coded as a single block or subdivided into four 2 x 2 blocks. Therefore, the coding scheme has certain constraining conditions. For example, if an image block is subdivided, four codewords derived from the 2 x 2 sub-codebook have to be detected in sequence. Therefore, if the coded data contains an error, the decoding becomes inconsistent. These inconsistencies can be used for error detection and concealment.
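The framing rule of Section 6.2.2 — count the bits in the current frame and defer any codeword that would straddle the boundary, completing the frame with padding — can be sketched as follows. The function name and the choice of padding bit are hypothetical.

```python
def pack_frames(codewords, frame_bits):
    """Pack variable-length codewords (bit-strings) into fixed-size frames.
    A codeword that would straddle a frame boundary is deferred to the next
    frame; the current frame is completed with padding ('1' bits here,
    an arbitrary illustrative choice)."""
    frames, cur = [], ''
    for cw in codewords:
        assert len(cw) <= frame_bits, "codeword longer than a frame"
        if len(cur) + len(cw) > frame_bits:
            frames.append(cur + '1' * (frame_bits - len(cur)))
            cur = ''
        cur += cw
    if cur:
        frames.append(cur + '1' * (frame_bits - len(cur)))
    return frames

# Every frame starts with a complete codeword and has the same length:
print(pack_frames(['010', '1101', '00', '111'], 8))  # ['01011011', '00111111']
```

Because each frame begins with a complete codeword, a frame can be decoded independently of its predecessor, which is exactly what confines the damage of a lost frame.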
Our simulation of error detection at the decoder utilizes the following redundancies resulting from the constrained source coding conditions:

- A series of coded data other than VLC words, or a prohibited codeword, appears.
- Codewords from the 2 x 2 sub-codebook do not come in a series of four codewords.
- The number of blocks in a GOB exceeds the determined number.

In the decoding process of VLCs, the whole GOB data is discarded when one of the synchronization codes at the beginning or the end of a GOB is lost. The GOB is treated


as uncoded and concealment is performed on it. Incorrectly decoded data are treated like lost data: when detected, they are omitted (set to 0) on the basis that missing data, when concealed, is much less visible than false data.

6.2.4 Decoder error concealment

In order to mitigate the effect of erroneously received blocks on the two-dimensional prediction used for the neighboring image blocks, we propose to use only the image blocks that are correctly reconstructed in predicting a block under the decoding process. In this way, the effect of errors does not propagate through the image. Once the decoder delimits the area of supposedly unavailable data, error concealment is employed to minimize the effect of the undecoded data on the decoded neighboring blocks. This is accomplished by using the correctly decoded data effectively. This concealment is necessary because 2D prediction is used, and it allows us to minimize the visual distortion when errors occur within a GOB. In order for the unavailable data, which we refer to as uncoded, to be effectively replaced by areas of previously decoded image blocks, two approaches are considered:

- GOB concealment: once the GOB in error is spatially delimited, it is replaced by the same area of the previously decoded slice.
- Line concealment: this approach consists of substituting the GOB with duplicates

of the last line in the last correctly decoded slice.

The Line concealment approach is motivated by the fact that, in the case of consecutive frame losses, a large area of the image blocks can be considered unavailable, and replacing it by blocks of a spatially distant slice can be very apparent when looking at the decoded images.

6.2.5 Backward and forward decoding

In the decoding process of VLCs described previously, the whole GOB data is discarded when one of the synchronization codes at the beginning or the end of a GOB is lost. This approach is suboptimal in the sense that some data frames inside the GOB may be correctly received but discarded, as they cannot be spatially located due to the variable block-size coding.
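The two concealment approaches can be sketched on an image stored as a list of pixel rows. This is an illustrative sketch with hypothetical names; it assumes the lost GOB spans whole rows and that the rows above it were decoded correctly.

```python
def conceal_gob(image, gob_top, gob_height, mode="gob"):
    """Conceal a lost GOB.  'gob' mode copies the same area from the
    previously decoded slice just above; 'line' mode duplicates the last
    correctly decoded line.  `image` is a list of rows of pixels."""
    out = [row[:] for row in image]
    for r in range(gob_top, gob_top + gob_height):
        if mode == "gob":
            out[r] = image[r - gob_height][:]  # same area of previous slice
        else:
            out[r] = image[gob_top - 1][:]     # duplicate last good line
    return out

img = [[1, 1], [2, 2], [9, 9], [9, 9]]         # rows 2-3 belong to a lost GOB
print(conceal_gob(img, 2, 2, "gob"))           # [[1, 1], [2, 2], [1, 1], [2, 2]]
print(conceal_gob(img, 2, 2, "line"))          # [[1, 1], [2, 2], [2, 2], [2, 2]]
```

The trade-off described in the text is visible here: 'gob' mode preserves more structure, while 'line' mode avoids copying a spatially distant slice when several consecutive GOBs are lost.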


Fig. 6.8 Synchronization and error concealment: forward and backward decoding recovery of data frames, with lost (NAK) frames and synchronization failures.

In this section we propose to improve the decoding performance based on the use of synchronizing codes. In the proposed method, if a data frame is not acknowledged at the channel decoder, the source decoder discards that data frame, but not all of the remaining data in the GOB under the decoding process when this data is contained in correctly received frames. This is accomplished by recovering the variable block sizes, going backward in the bit-stream, and initiating the VLC decoding at the right position. Benefiting from the use of synchronizing codes, if an error is detected during the decoding process, decoding is immediately interrupted and the next synchronization code is sought. Once the next synchronization code is located, the decoding process is resumed starting from the first correctly received frame in the sequence of correctly received bits preceding the synchronization code. However, only the block sizes are extracted. Using the position indicated by the synchronization code, the decoder now knows where to place the blocks, and the image blocks are reconstructed. This is referred to as Backward Decoding (BD). The GOB decoding process is carried out as follows:

1. If an error is detected during the forward decoding process, the decoding is stopped, and the next synchronization code is searched for prior to backward decoding.

2. During forward and backward decoding, errors can be detected in the following cases:

Fig. 6.9 An image example of the error detection algorithm.

- A codeword appears which is not listed in the codeword tables.
- Too many blocks are decoded in a single GOB.

- The GOB synchronization code is corrupted.
- The GOB extends outside the image.

Under these error conditions in the bit-stream, the decoder should resynchronize at the next suitable resynchronization point in the bit-stream, and the missing blocks should be concealed. An example that illustrates the decoding process is given in Figure 6.8. Image slices are now represented by their corresponding coded data frames. When the synchronization code is correctly decoded, recovery of the coded data is possible through backward decoding. However, when the synchronization codes at the beginning and the end of a slice are both lost, all the data between them is discarded, as it is impossible to properly position the correctly received blocks in between. In the figure, data frames representing this case are labeled "synchronization failure", indicating the reason for the loss of this data. An image example is also given in Figure 6.9. Finally, a simplified flowchart of the error detection and concealment algorithm is given in Figure 6.10.
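Which frames of a GOB survive forward and backward decoding can be sketched as follows (hypothetical names; frames marked None stand for frames not acknowledged by the ARQ protocol, and backward recovery is assumed possible only when the closing synchronization code survived):

```python
def recoverable_frames(frames, closing_sync_ok=True):
    """Split one GOB's frames into (forward, concealed, backward) index
    lists.  Forward decoding recovers the prefix of good frames; backward
    decoding recovers the suffix when the next sync code is intact; the
    frames in between are concealed."""
    n = len(frames)
    fwd = 0
    while fwd < n and frames[fwd] is not None:
        fwd += 1
    bwd = n
    if closing_sync_ok:
        while bwd > fwd and frames[bwd - 1] is not None:
            bwd -= 1
    return list(range(fwd)), list(range(fwd, bwd)), list(range(bwd, n))

gob = ['f0', 'f1', None, 'f3', 'f4']                   # frame 2 was lost
print(recoverable_frames(gob))                          # ([0, 1], [2], [3, 4])
print(recoverable_frames(gob, closing_sync_ok=False))   # ([0, 1], [2, 3, 4], [])
```

The second call illustrates the "synchronization failure" case of Figure 6.8: without the closing sync code, everything after the lost frame must be concealed.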


Fig. 6.10 Flowchart of the error detection and concealment algorithm.

6.3 Results of transmission of coded images

In this section, we present simulation results for the transmission of the VB 2D-CELP compressed bit-stream over the communication channels under consideration, with the truncated type-I hybrid ARQ protocol. The case of an independent channel is considered first, followed by the highly correlated Rayleigh fading channel. Figure 6.11 illustrates the block diagram of the simulation system, which is partitioned into different blocks based on their functionalities. These blocks have been detailed in the previous chapters. The 8 bpp image "lena" of size 512 x 512 is used in the experiments (Figure 6.12). The


Fig. 6.11 Overview of the simulation blocks.

VB 2D-CELP compressed bit-stream is obtained using two possible coding block sizes, 4 x 4 and 2 x 2. On a noiseless channel, the image is coded at 0.567 bpp with a PSNR of 34.88 dB (Figure 6.13). The start-of-GOB codeword used corresponds to the longest Huffman codeword, in order to avoid a mismatch with any codeword used for the image blocks (4 x 4) or sub-blocks (2 x 2). This results in an overhead of 0.01 bpp, including the padding bits, yielding an overall source bit rate of 0.577 bpp. This result provides an upper bound on the system PSNR over noisy channels, as no additional overhead is introduced by the error-resilient tools implemented.
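The PSNR figures quoted throughout this chapter follow the usual definition for 8-bpp images, 10 log10(255^2 / MSE) in dB. A minimal sketch (hypothetical helper operating on flat pixel lists):

```python
import math

def psnr(orig, recon, peak=255.0):
    """PSNR in dB between two equal-size images given as flat pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return math.inf if mse == 0 else 10.0 * math.log10(peak * peak / mse)

print(psnr([0, 0, 0, 0], [255, 255, 255, 255]))  # 0.0 (worst case for 8 bpp)
```

The Mean-, Min-, and Max-PSNR statistics reported later are simply this quantity averaged, minimized, and maximized over the 25 transmitted images.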

6.3.1 The limiting case of the memoryless channel

The uplink transmission model, where the channel is memoryless, is used to transmit the coded image bit-stream. A number of data frames corresponding to the transmission of 25 images is simulated. Simulation was done for channel SNRs ranging between 1 dB and 7 dB, and no RS interleaving is used. Following the experiments done in Chapter 5, the truncated protocol has been simulated for four values of the maximum number of retransmissions (L = 0, 1, 2, and 3). The untruncated protocol has also been simulated.

Error detection and concealment: a comparative study

In order to analyze the performance of the different error resilient tools, the case of no retransmissions is considered (L = 0). In this case, the system performance corresponds to

Fig. 6.12 Original image "lena".

Fig. 6.13 Reconstructed "lena" over a noiseless channel: source bit rate = 0.567 bpp, PSNR = 34.88 dB.

that of the FEC (RS/CC) concatenated scheme alone. Image transmission performance is measured in terms of mean PSNR (Mean-PSNR), minimum PSNR (Min-PSNR), and maximum PSNR (Max-PSNR) in dB, measured over the 25 transmitted images. Transmission of the original VB 2D-CELP coded image bit-stream over the noisy channel suffers from error propagation due to the use of variable length codes, which severely corrupts the decoded images. As the data frames that are not acknowledged are considered unavailable to the source decoder, the results we provide with no concealment may be interpreted as a lower bound on the system performance. Better results in terms of PSNR could be obtained if the data frames received in error were still decoded. However, we found that concealing the non-acknowledged frames provides better results when looking at the quality of the reconstructed images. In order to investigate the performance of the different concealment tools implemented, results are first given when no backward decoding is implemented. Analysis of the coding results in terms of Min-PSNR, Max-PSNR, and Mean-PSNR reveals the substantial improvement of the GOB concealment (Table 6.2) over the case of no concealment (Table

6 Transmission of VB 2D-CELPCoded Images over Noisy Channels

119

6.1). The difference between Line concealment (Table 6.3), and GOB concealment (Table 6.2), even if it is not clear in terms of PSNR values, is subjectively significant as will be seen later when displaying decoded images. The use of backward decoding also provides improvements over the results provided by fonvard decoding only of the VLCs. This can be seen in Table 6.4 where no concealment is performed, Table 6.5 corresponding to GOB concealment, and Table 6.6 where Line concealment is used. As can be seen in the tables, comparison between the different error detection and concealment tools considered is not evident at low values of Eb/No. This is due to the fact that in these conditions, no retransmissions are allowed. Therefore, at low Eb/hb values the FER is very high. Hence, no matter how effective is the concealment technique, no substantial improvement can be obtained. This can be observed when analyzing the coding results corresponding to the case of one retransmission allowed over the initial transmission (Table 6.7 - Table 6.12). It is evident that for a greater number of retransmissions, the FER is lower and hence the number of frames in error in a decoded image is lower and concealment on it would likely improve the decoding results. Tnble 6.1 PSNR performance results for "lena" in the presence of channel errors: without Badtward Decoding, no concealment, L = 0. Without Backward Decoding, no Concealment, L = 0 Eb/No[dB] 11 Max-PSNR [dB] I Min-PSNR [dB] I Mean-PSNR [dB]

Table 6.2 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, GOB concealment, L = 0.
Without Backward Decoding, GOB Concealment, L = 0
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]
1.0 | 5.71 | 5.67 | 5.68

Table 6.3 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, Line concealment, L = 0.
Without Backward Decoding, Line Concealment, L = 0
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]

Table 6.4 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, no concealment, L = 0.

Table 6.5 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, GOB concealment, L = 0.
With Backward Decoding, GOB Concealment, L = 0
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]

Table 6.6 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, Line concealment, L = 0.
With Backward Decoding, Line Concealment, L = 0
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]
1.0 | 5.77 | 5.69 | 5.72

Table 6.7 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, no concealment, L = 1.
Without Backward Decoding, no Concealment, L = 1
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]

Table 6.8 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, GOB concealment, L = 1.
Without Backward Decoding, GOB Concealment, L = 1
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]
1.0 | 5.73 | 5.67 | 5.68
1.5 | 13.91 | 5.74 | 6.61

Table 6.9 PSNR performance results for "lena" in the presence of channel errors: without Backward Decoding, Line concealment, L = 1.
Without Backward Decoding, Line Concealment, L = 1
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]
1.0 | 5.75 | 5.69 | 5.70

Table 6.10 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, no concealment, L = 1.
With Backward Decoding, no Concealment, L = 1
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]
1.0 | 5.67 | 5.67 | 5.67

Table 6.11 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, GOB concealment, L = 1.

Table 6.12 PSNR performance results for "lena" in the presence of channel errors: with Backward Decoding, Line concealment, L = 1.
With Backward Decoding, Line Concealment, L = 1
Eb/No [dB] | Max-PSNR [dB] | Min-PSNR [dB] | Mean-PSNR [dB]
1.0 | 5.84 | 5.69 | 5.72
1.5 | 13.82 | 5.95 | 7.23

In order to examine the performance on the reconstructed images we consider the transmission conditions that correspond to Eb/No = 3 dB. With no retransmissions, a FER of value 7.44 is obtained. This FER corresponds to the probability of total error at the input of the RS decoder. This choice is motivated by the fact that the use of ARQ affects the protocol error probability. Therefore, in order to keep the same comparison conditions with or without ARQ, the FER at the input of the RS decoder is considered. Hence, we show reconstructed images corresponding to Min-PSNR and Max-PSNR obtained at Eb/No = 3 dB. Figure 6.14 represents the best case, that is, the best decoded image over the 25 transmitted images, that can be obtained when backward decoding is used but with no concealment. Under these conditions, where no retransmission is allowed, the best image has a PSNR value of Max-PSNR = 9.41 dB. As can be seen in Figure 6.15, the simple use of GOB concealment significantly improves the image quality, yielding Max-PSNR = 26.99 dB. When Line concealment is performed, the value of Max-PSNR obtained is 27.15 dB. No significant improvement is noted over GOB concealment in terms of PSNR. However, a closer look at the decoded image with Line concealment (Figure 6.16) shows better results in terms of subjective quality as compared to the decoded image in Figure 6.15. The use of backward decoding significantly improves the quality of decoded images.
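The truncated retransmission protocol referred to here can be sketched as a simple Monte-Carlo loop. This is our illustration, not the thesis's simulator: the frame-error probability and frame count below are placeholders, and the channel is assumed memoryless at the frame level.

```python
import random

def run_arq(n_frames, p_frame_error, max_retx, seed=0):
    """Truncated ARQ: each frame gets one initial transmission plus up to
    `max_retx` retransmissions; a frame still in error after truncation
    counts toward the residual frame error rate."""
    rng = random.Random(seed)
    residual_errors = 0
    transmissions = 0
    for _ in range(n_frames):
        for _attempt in range(max_retx + 1):
            transmissions += 1
            if rng.random() >= p_frame_error:   # frame received correctly
                break
        else:                                   # all attempts failed
            residual_errors += 1
    return residual_errors / n_frames, transmissions / n_frames

if __name__ == "__main__":
    for L in (0, 1, 2, 3):
        fer, tx = run_arq(100_000, 0.1, L)
        print(f"L={L}: residual FER ~ {fer:.4f}, transmissions/frame = {tx:.3f}")
```

With a pre-decoder frame error probability p, the residual FER after truncation at L retransmissions behaves as p^(L+1), which is why even L = 1 improves the results so markedly in the experiments above.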


Fig. 6.14 Best reconstructed image at Eb/No = 3 dB with L = 0, using backward decoding and no concealment: Max-PSNR=9.41 dB.

Fig. 6.15 Best reconstructed image at Eb/No = 3 dB with L = 0, using backward decoding and GOB concealment: Max-PSNR=26.99 dB.

Fig. 6.16 Best reconstructed image at Eb/No = 3 dB with L = 0, with backward decoding and Line concealment: Max-PSNR=27.15 dB.

Fig. 6.17 Best reconstructed image at Eb/No = 3 dB with L = 0, with no backward decoding and using Line concealment: Max-PSNR=20.47 dB.


Using Line concealment but only forward decoding results in a maximum PSNR of value Max-PSNR = 20.47 dB. The corresponding image is shown in Figure 6.17, which is far from satisfactory compared to the image in Figure 6.16. As previously mentioned, the use of ARQ significantly reduces the data FER, that is, the frame error rate at the output of the RS decoder. Allowing only one retransmission improves the results considerably, as can be seen from the decoded images corresponding to Eb/No = 3 dB with L = 1. Figure 6.18 shows the worst image that can be obtained under these conditions with no backward decoding or concealment. With backward decoding and also no concealment, the worst reconstructed image is represented in Figure 6.19. The use of GOB concealment yields a minimum PSNR of value Min-PSNR = 26.99 dB. The corresponding image is shown in Figure 6.20. With Line concealment, the Min-PSNR is equal to 27.13 dB, but subjectively speaking the quality of the reconstructed image (Figure 6.21) is better than the one obtained with GOB concealment. With only one ARQ retransmission used, the system performance has been shown to significantly improve over the use of FEC only. It is also important to mention that in this case, the average quality of the reconstructed images in our simulations is closer to the image corresponding to Max-PSNR rather than Min-PSNR. Finally, in order to show the performance of Line concealment over no concealment, we show in Figure 6.22 a graph of Mean PSNR against Eb/No for three values of maximum retransmissions (L = 0, 1 and 3), using backward decoding (denoted in the figures by BD). Using Line concealment and these three values of maximum retransmissions, the benefit of backward decoding is shown in the graph of Figure 6.23. Similar results are provided in the graph in Figure 6.24, showing Mean PSNR values against the FER at the input of the RS decoder.
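The benefit of decoding the VLC stream in both directions can be illustrated with a toy reversible prefix code of our own choosing (palindromic codewords, so the same table decodes the reversed stream); the thesis's actual backward decoder operates on its own VLC tables, which are not reproduced here. Forward decoding recovers symbols up to a corrupted region, backward decoding recovers symbols after it, and only the region in between needs concealment.

```python
CODE = {"a": "0", "b": "11", "c": "101"}   # toy reversible VLC (palindromic codewords)
DECODE = {v: k for k, v in CODE.items()}
MAXLEN = max(len(v) for v in CODE.values())

def decode_prefix(bits):
    """Greedy prefix decoding; stops when the buffered bits can no longer
    form any codeword (a detectable decoding failure)."""
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
        elif len(buf) >= MAXLEN:
            break
    return out

def decode_forward_backward(bits):
    """Decode left-to-right and right-to-left; palindromic codewords let the
    same table serve both directions."""
    forward = decode_prefix(bits)
    backward = decode_prefix(bits[::-1])[::-1]
    return forward, backward
```

On an error-free stream both passes recover the full symbol sequence; when a bit error breaks forward decoding mid-stream, the backward pass still salvages the tail.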


Fig. 6.18 Worst reconstructed image at Eb/No = 3 dB with L = 1, with no backward decoding and no concealment: Min-PSNR=10.90 dB.

Fig. 6.19 Worst reconstructed image at Eb/No = 3 dB with L = 1, using backward decoding and no concealment: Min-PSNR=13.40 dB.

Fig. 6.20 Worst reconstructed image at Eb/No = 3 dB with L = 1, using backward decoding and GOB concealment: Min-PSNR=26.99 dB.

Fig. 6.21 Worst reconstructed image at Eb/No = 3 dB with L = 1, using backward decoding and Line concealment: Min-PSNR=27.13 dB.


Fig. 6.22 Mean PSNR performance as function of Eb/No for "lena": comparison between Line concealment and no concealment using BD for L = 0, 1 and 3.


Fig. 6.24 Image "lena" Mean PSNR performance as function of FER: comparison between BD and forward decoding only, using Line concealment for L = 0, 1 and 3.

Fig. 6.23 Mean PSNR performance as function of Eb/No for "lena": comparison between BD and forward decoding only, using Line concealment for L = 0, 1 and 3.



Fig. 6.25 Mean PSNR performance as function of Eb/No for "lena": comparison between Line concealment and no concealment using backward decoding (BD) for L = 0, 1 and 3 in the case of a correlated channel.

Fig. 6.26 Image "lena" Mean PSNR performance as function of FER for a correlated channel, using backward decoding and Line concealment for L = 0, 1 and 3.

6.3.2 A quasi-static highly correlated channel

As in the case of the memoryless channel, the uplink transmission model is used to transmit the coded image bit-stream over the quasi-static highly correlated channel under consideration. A number of data frames corresponding to the transmission of 25 images has been simulated for different values of the RS interleaving depth I and the maximum number of retransmissions L. For the slow fading channel considered herein (slow since the Doppler frequency is about 2 Hz), the errors tend to be bursty, and the error bursts tend to be long. For this channel, the bit error rate in an erroneous data frame is much higher than the time-averaged channel BER. As previously seen, the retransmission-based protocol yields high performance since the number of required retransmissions before truncation is minimal.
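The defining property just described (in-frame BER far above the time-averaged BER) can be reproduced with a two-state Gilbert-Elliott model. This is a standard burst-error abstraction we substitute for illustration only; it is not the quasi-static Rayleigh fading simulator used in the thesis, and all parameter values below are illustrative.

```python
import random

def gilbert_elliott(n_bits, p_g2b=0.001, p_b2g=0.05, ber_good=1e-4, ber_bad=0.25, seed=0):
    """Two-state Markov error model: a low-BER 'good' state and a high-BER
    'bad' state whose visits produce error bursts. Returns a 0/1 error mask."""
    rng = random.Random(seed)
    bad = False
    mask = []
    for _ in range(n_bits):
        # stay bad with prob. 1 - p_b2g; enter bad from good with prob. p_g2b
        bad = rng.random() < ((1 - p_b2g) if bad else p_g2b)
        mask.append(1 if rng.random() < (ber_bad if bad else ber_good) else 0)
    return mask

if __name__ == "__main__":
    mask = gilbert_elliott(200_000)
    avg_ber = sum(mask) / len(mask)
    frames = [mask[i:i + 100] for i in range(0, len(mask), 100)]
    bad_frames = [f for f in frames if any(f)]
    in_frame_ber = sum(map(sum, bad_frames)) / (100 * len(bad_frames))
    print(avg_ber, in_frame_ber)   # in-frame BER well above the average BER
```

Because errors cluster inside the bad-state sojourns, the BER measured only over erroneous frames comes out several times higher than the long-run average, exactly the situation described for the slow fading channel.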


Fig. 6.27 Effect of RS interleaving depth I on the Mean PSNR performance for "lena".

Fig. 6.29 Image "lena" Mean PSNR performance as function of FER for a correlated channel, using backward decoding and Line concealment: influence of L and I.


Fig. 6.28 Effect of RS interleaving depth I on the Minimum PSNR performance for "lena".


We first consider the case without RS interleaving. Figure 6.25 shows the performance of the decoder implementing backward decoding and Line concealment over the scheme using BD but without concealment. Line concealment has been adopted based on the analysis of its performance compared to the GOB concealment method. A comparative study led us to the same conclusions derived in the analysis carried out for the memoryless channel. We therefore provide results based on Line concealment only. Figure 6.26 shows how the measured PSNR varies with the FER and the maximum number of allowed retransmissions. It is observed that the retransmission-based protocol yields high PSNR performance, as the number of required retransmissions is minimal. In fact, the impact of retransmitting incorrectly received frames is only seen up to a FER value of 1.3 × 10⁻¹. Moreover, it is noted that only one retransmission is capable of providing the same PSNR performance (in terms of Mean-PSNR) obtained with FEC only, but at a much higher FER (10⁻¹).

For the purpose of investigating the influence of outer interleaving, we provide results based on variation of the RS interleaving depth I. However, in order to avoid mismatch with the variation of other parameters, namely the number of retransmissions L, we consider the case of no retransmission (L = 0). Figure 6.27 shows the Mean PSNR variation as a function of the FER for different values of I. Recall that a value of I = 1 indicates the case of no interleaving. Analysis of this graph indicates that in terms of Mean PSNR there is no substantial improvement through the use of interleaving to justify the resulting increase in delay. This is attributed to the error resilient tools implemented. A more detailed investigation of the effect of interleaving can be obtained by analyzing the PSNR performance in terms of Minimum PSNR rather than Mean PSNR. Figure 6.28 shows the Min-PSNR over the 25 transmitted images for the range of FER of interest herein. The figure shows that when the FER is high, which corresponds to very low Eb/No values, the outer interleaver is not capable of breaking up the error bursts but only spreads the errors in an erroneous frame over a large number of frames. For FER values beyond a certain threshold we start seeing the effect of interleaving. At a certain point, no interleaving leads to the worst Minimum PSNR. If interleaving is used, it starts spreading the errors, and if the spreading is within the error correction capability of the RS decoder, interleaving yields improvement. However, on average, interleaving can only increase the Min-PSNR by a few dB (about 2 dB). This variation is not significant, especially since in our simulations the average quality of the reconstructed images is usually closer to the image corresponding to Max-PSNR rather than Min-PSNR. Moreover, because the PSNR measure is used, it is impossible to really investigate the effect of interleaving unless another measure, such as the image block loss rate, is used. Therefore, in this study, the interleaving analysis provided is enough to conclude that there is no need to interleave, which in consequence avoids the addition of unnecessary delays.

Figure 6.29 shows a summary of the results presented in the case of the correlated channel considered, in terms of Mean PSNR as a function of FER for different values of the emphasized parameters. The results are provided using the error resilient coder that implements backward decoding and Line concealment. In order to subjectively compare the improvement achieved with the GOB and Line concealment techniques, we consider the example of the transmission channel at Eb/No = 3.8 dB and compare best and worst decoded images over the transmitted sequence. With no retransmission attempts, the best decoded image using GOB concealment is represented in Figure 6.30. The line-concealed decoded image is shown in Figure 6.31. Allowing one retransmission and using Line concealment yields a Min-PSNR of value 15.43 dB (Figure 6.32). Using two retransmission attempts allows reconstruction with Min-PSNR = 16.53 dB (Figure 6.33). No extensive results are provided in this section, especially in terms of subjective tests for displayed coded images, as further improvements will be obtained through the proposal of an enhanced coding scheme in the next chapter. Moreover, in this chapter only one test image ("lena") has been used. Experiments with the test image "boat" have shown similar results.
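The outer (RS) interleaving whose depth I was varied above amounts to a block interleaver: codewords are written as rows and read out by columns, so a channel burst is spread over I codewords. A minimal sketch (our own; the codeword length and depth are illustrative, not the RS parameters of the system):

```python
def interleave(symbols, depth):
    """Depth-`depth` block interleaver: write codewords as rows, read columns.
    A burst of `depth` consecutive channel errors then hits `depth` distinct
    codewords once each, instead of one codeword `depth` times."""
    n = len(symbols) // depth                  # codeword length (must divide evenly)
    rows = [symbols[r * n:(r + 1) * n] for r in range(depth)]
    return [rows[r][c] for c in range(n) for r in range(depth)]

def deinterleave(symbols, depth):
    """Inverse of `interleave`."""
    n = len(symbols) // depth
    out = [None] * len(symbols)
    for k, s in enumerate(symbols):
        c, r = divmod(k, depth)
        out[r * n + c] = s
    return out
```

Whether the spreading helps depends on the RS error correction capability per codeword, which is exactly the trade-off discussed above: bursts longer than what depth I can dilute still defeat the decoder, at the cost of I codewords of added delay.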

6.4 Summary

A simple error detection and concealment scheme has been shown to give significant improvements for noisy channels [121]. We have tested two channel models in order to investigate the image transmission system performance. Ideal inner interleaving, used to make the slowly fading Rayleigh channel look memoryless, and a highly correlated channel have been considered. These channel models have been used to investigate the performance of our VB 2D-CELP coding scheme for transmission of images over noisy channels in a CDMA environment. Experiments show that there is little value in providing outer interleaving that has


Fig. 6.30 Best decoded image using GOB concealment at Eb/No=3.8 dB and with no retransmission: Max-PSNR=20.12 dB.

Fig. 6.31 Best decoded image using Line concealment at Eb/No=3.8 dB and with no retransmission: Max-PSNR=20.29 dB.

Fig. 6.32 Worst decoded image using Line concealment at Eb/No=3.8 dB and with a maximum of one retransmission: Min-PSNR=15.43 dB.

Fig. 6.33 Worst decoded image using Line concealment at Eb/No=3.8 dB and with a maximum of two retransmissions: Min-PSNR=16.53 dB.


previously been demonstrated to yield an increase in the transmission delay. However, the ability of the source decoder to conceal the effect of erroneous or incorrectly received data has been shown to increase the image transmission performance. The effectiveness of the error resilient tools implemented in the source codec suggests that error detection and concealment is an important part of any error resilient encoder. There are probably advantages in more sophisticated schemes, but these are not discussed in this work. On the contrary, our interest is in simple techniques that do not yield increased coding complexity but rather use the source encoder's ability to simply conceal the effect of lost data using its coding particularities, as will be seen in the next chapter.

Chapter 7

Source and Channel Coding Interdependency

7.1 Introduction

In the coding scheme referred to in the previous chapter as the GOB scheme, Huffman coding is used to entropy encode the code-vector index selected for each image block under the coding process. In each codeword, three types of information are multiplexed: block-size, prediction filter index, and code-vector index. If an error occurs in a series of VLC data, all information multiplexed in the codeword will be wrongly identified. Even when codeword synchronization is regained, the image block positional information is likely to be corrupted, causing the following data to be incorrectly decoded. Therefore, at the decoder side, when a codeword is corrupted, the corresponding block cannot be reconstructed as it was at the encoder. Only a concealed version of it can be generated based on the previously reconstructed blocks. In this chapter, we propose to increase error resilience by a method based on separation of the zero-input response (ZIR) and zero-state response (ZSR) of each image block. At the decoder, if the codeword of a block is lost, the ZIR is generated and used as the reconstructed block. The coding scheme proposed in this chapter is the result of studying the source and channel coding interdependency for the purpose of increasing the source codec robustness to transmission errors.


7.2 Enhanced VB 2D-CELP coding scheme

Separation of the synthesis filter response into zero-input and zero-state responses implies modifications to the coding procedure previously presented in Chapter 3. In the following we present the formulation of the coding process and its implications on the information to be transmitted from the encoder to the decoder. Based on an error-sensitivity analysis, we propose a method for codeword indexing and a bit-stream organization that takes into account the channel error control used, for the purpose of increasing the coding robustness.

7.2.1 Coding scheme description

Fig. 7.1 Source encoder/decoder components.

For convenience, we partition the source coding scheme into blocks based on their different functionalities. Figure 7.1 represents these blocks, where α represents the lossy part of the encoder and γ corresponds to the lossless part, i.e., the entropy encoder. At the decoder side, γ⁻¹ performs the inverse operation of the entropy encoder and β indicates the lossy decoder. So far, an input image block of size 2^b × 2^b has been denoted by x_i^(b), where i denotes the time index and (b) refers to the coding block-size used. For simplicity, the time index will be omitted and an image block will be denoted by x^(b). Recall that for each block size considered in the variable block-size coding, we have a set of K predictors denoted by {H_k^(b)}, k = 1, ..., K. Also, a codebook corresponding to block size 2^b × 2^b is denoted by C^(b). Since K predictors are used for each possible coding block-size, each codebook consists of K sub-codebooks: C^(b) = ∪_{k=1..K} C_k^(b). In block α at the encoder, once the predictor that results in the minimum MSPE (MMSPE) is selected, the encoder determines both the size of the block and the prediction filter index. The encoder then uses an exhaustive search through the excitation sub-codebook C_k^(b) that corresponds to the selected prediction filter H_k^(b). The search proceeds by passing

Fig. 7.2 Encoder block diagram based on separation of the ZIR and ZSR.

each of the excitation vectors through the synthesis filter to obtain the candidate reproductions of the current input block. First, the ZIR difference signal d_ZIR (also called ZIR residual or ZIR error) is generated based on previously reconstructed blocks. Then, each of the code-vectors c_i in sub-codebook C_k^(b) is passed through the ZSR synthesis filter to generate ZSR candidates x̂_i^ZSR. The code-vector that produces the least reconstruction error (distortion) is selected for the block, and the corresponding index is sent to the entropy encoder along with the predictor index k and the block-size index b for codeword assignment. Consequently, the function α of the encoder (Figure 7.2) maps an input block into three values: a block-size index b, a filter index k, and a code-vector index i in the index set I^(b) corresponding to codebook C^(b). Hence, (b, k, i) = α(x^(b)). The function γ maps (b, k, i) into one or more codewords, as will be seen later in detail. For now, we represent these codewords by a codeword vector u = γ(b, k, i) selected from the entropy codebook U. A block diagram of the decoder is given in Figure 7.3. The decoder performs the function γ⁻¹, which is the inverse of γ, and the function β, which maps the index i into a residual code-vector c_i = β(i) that belongs to codebook C_k^(b). The decoder gets the reconstructed vector x̂^(b) based on the synthesis filter of index k that corresponds to block size 2^b × 2^b.

Fig. 7.3 Decoder block diagram.

7.2.2 Separation of ZIR and ZSR

In this section we present the explicit minimization of the coding error by reformulating the reconstructed blocks as the sum of the ZIR and the ZSR of the inverse prediction error filter. Since we are encoding 2D blocks, a block of dimension 2^b × 2^b will be represented by both a matrix of size 2^b × 2^b and a row-ordered 2^{2b}-dimensional column vector. Let X represent the 2D block and x its one-dimensional form. For simplicity, the index (b) referring to the coding block-size is omitted from x, and a predictor with index k and block size 2^b × 2^b will be denoted by H_{k,b}. The ZIR is a function of the previously reconstructed blocks only, whereas the ZSR is a function of the selected residual code-vector. Our goal is to rewrite x̂ as the summation of the separated ZIR and ZSR vectors, denoted by x̂_ZIR and x̂_ZSR. For the ith code-vector c_i, x̂_ZSR is a linear function of c_i. Then,

x̂_ZSR = H_{k,b}^{ZSR} c_i    (7.1)

where c_i is a 2^{2b} column vector and H_{k,b}^{ZSR} is the matrix of size 2^{2b} × 2^{2b} composed of samples of the impulse response of the synthesis filter corresponding to the selected predictor H_{k,b} [122]. Recall that H_{k,b}(z_1, z_2) = Σ_{(p,q)∈F} h_k^{(b)}(p,q) z_1^{−p} z_2^{−q} and that the corresponding synthesis filter is the inverse of the prediction error filter. Let y be the L-dimensional vector of reconstructed values outside X on which the ZIR depends. Then, as x̂_ZIR is a linear function of y, we have:

x̂_ZIR = H_{k,b}^{ZIR} y    (7.2)

where H_{k,b}^{ZIR} is a matrix of size 2^{2b} × L determined by the impulse response of the synthesis filter corresponding to predictor H_{k,b}. Then, the error vector, that is, the quantization error vector obtained for the input x with residual code-vector c_i, is given by:

e_i = x − x̂_ZIR − H_{k,b}^{ZSR} c_i    (7.3)

Let us denote by c the selected code-vector that results in the minimum distortion. The squared error distortion resulting from code-vector c_i is given by:

D_i = ||e_i||² = ||x − x̂_ZIR||² − 2 (x − x̂_ZIR)^T H_{k,b}^{ZSR} c_i + ||H_{k,b}^{ZSR} c_i||²    (7.4)

Therefore, during the search for c the first term is constant and the minimization argument can be chosen to be

D̃_i = −2 p^T c_i + E_i    (7.5)

where

p = (H_{k,b}^{ZSR})^T (x − x̂_ZIR)    (7.6)

and

E_i = ||H_{k,b}^{ZSR} c_i||²    (7.7)

Notice that p is calculated once during the search and is constant for all i.
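The minimization above translates directly into a search loop in which p is computed once per block and the energies E_i can be precomputed offline. The NumPy sketch below is ours (names and sizes are illustrative); it checks the shortcut of Eqs. (7.5)-(7.7) against the brute-force distortion:

```python
import numpy as np

def search_codebook(x, x_zir, H_zsr, codebook):
    """Return the index i minimizing ||x - x_zir - H_zsr @ c_i||^2 via the
    equivalent criterion D_i = -2 p^T c_i + E_i."""
    p = H_zsr.T @ (x - x_zir)                                  # computed once per block
    E = [float((H_zsr @ c) @ (H_zsr @ c)) for c in codebook]   # precomputable offline
    D = [-2.0 * float(p @ c) + E[i] for i, c in enumerate(codebook)]
    return int(np.argmin(D))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 16                                       # e.g. a 4x4 block, row-ordered
    x, x_zir = rng.normal(size=n), rng.normal(size=n)
    H_zsr = rng.normal(size=(n, n))
    codebook = [rng.normal(size=n) for _ in range(32)]
    i = search_codebook(x, x_zir, H_zsr, codebook)
    brute = min(range(32), key=lambda j: np.sum((x - x_zir - H_zsr @ codebook[j]) ** 2))
    assert i == brute
```

Since ||x − x̂_ZIR||² is constant over the search, dropping it changes no argmin, and the per-candidate cost reduces to one inner product plus a table lookup of E_i.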


7.2.3 Effects of channel errors

As previously mentioned, separation of the ZIR and ZSR of each block can be exploited to increase the coding error resilience. At the decoder side, if the code-vector index of a block is lost, the reconstructed vector can be taken as the ZIR of the block under the decoding process. This is motivated by the fact that the ZIR is a function of the previously reconstructed vectors only, whereas the ZSR is a function of the selected residual code-vector. We refer to this scheme as the ZIR scheme. However, this approach is possible only if the predictor index and the coding block-size are known to the decoder. In the original coding scheme, a single codeword is used for each block, and if this code is lost, all information multiplexed in it is lost; i.e., the block size and the predictor used for the block are also unknown or corrupted. In order to exploit the error resilience of the encoder based on the separation of ZIR and ZSR, the predictor information and the coding block-size need to be perfectly recovered at the decoder side. Consequently, this information needs to be transmitted separately from the code-vector index in the form of side information, unlike what is done in the method that multiplexes all types of information in a single codeword. This is intended to increase error resilience, but at the expense of a possible increase in the bit rate.
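At the decoder, the ZIR scheme reduces to a one-line fallback, provided the predictor index and block size arrived intact. A sketch (our own names; the codebook and matrices stand in for the quantities of Section 7.2.2):

```python
import numpy as np

def reconstruct_block(index, x_zir, H_zsr, codebook):
    """ZIR-scheme decoder for one block: with a valid code-vector index,
    output ZIR + ZSR; if the index was lost to channel errors (None here),
    output the ZIR alone, which depends only on previously reconstructed
    blocks."""
    if index is None:
        return x_zir                        # concealment: zero-input response
    return x_zir + H_zsr @ codebook[index]  # normal reconstruction
```

The concealed block is thus the best prediction available from correctly received neighbours, with the residual contribution simply omitted.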

7.2.4 Coded data structure

For simplicity, we assume that two coding block sizes are used (4 × 4 and 2 × 2). In order to lower the side information corresponding to the predictors, it might be more appropriate to use the same predictor for all sub-blocks in a large block (2 × 2 blocks in a 4 × 4 block) and hence reduce the side information by forcing a certain homogeneity in the predictor selection. Moreover, if the predictor index is sent as side information, the same VLC codewords can be used for all sub-codebooks, given that a sub-codebook corresponds to a specified predictor and contains N_c code-vectors (Figure 7.4). The information about the block subdivision, namely the block-size index b used for each image block, can be sent as side information too, or it can be included in the VLC codewords. When the latter option is considered, different codewords have to be used for codebooks C^(4) and C^(2), as it is also possible to make the block-size information part of the predictor side information. In the following, we summarize the different possibilities that can be implemented.


Fig. 7.4 Organization of residual codebooks for five predictors.

A single entropy codebook

Fig. 7.5 Organization of codebooks into a global codebook.

First, we illustrate what can be implemented with an approach similar to that adopted for the GOB scheme. We consider that the codebooks C^(b) are of equal sizes, and that they are organized sequentially into a global codebook C. Therefore, the number of code-vectors in each sub-codebook is equal to N_c. This is illustrated in Figure 7.5 for the two coding block sizes 4 × 4 and 2 × 2.


Fig. 7.6 Codebook C histogram for VB 2D-CELP coded "lena" with two block sizes.


Fig. 7.7 Codebook C histogram for VB 2D-CELP coded "boat" with two block sizes.

Using coding results based on the training sequence used to build the codebooks, we have plotted the histograms of the probability of occurrence of the code-vectors in codebook C. Examination of the probability of occurrence of each code-vector revealed its non-uniform character. A histogram is shown in Figure 7.6 for image "lena" and in Figure 7.7 for image "boat". Based on examination of these histograms, a single Huffman code is used for the global codebook. Thus, the entropy codebook in the entropy encoder block γ consists of a single book U in which a block size, a predictor index and a code-vector index are implicitly multiplexed in each codeword.

Separation of basic and side information

Using predictors associated with a specified block size has shown improvements in PSNR, but now that predictor indices have to be sent as side information, this may result in an increase in the bit rate. If this increase is considerable, predictors


Fig. 7.8 Predictor histogram for VB 2D-CELP coded "lena" with two block sizes 4 × 4 and 2 × 2.


Fig. 7.9 Predictor histogram for VB 2D-CELP coded "boat" with two block sizes 4 × 4 and 2 × 2.

corresponding to only block size 4 × 4 can be used and sent as side information. In this case, two approaches can be adopted: (i) one entropy codebook of length N is generated and used for C^(4) as well as C^(2), and hence the block-size is sent as side information; (ii) a different entropy codebook of length N codewords is generated for each codebook. We have plotted the histograms of occurrence of the predictor indices for the training sequence as well as for the two test images considered (Figure 7.8 and Figure 7.9). The histograms have shown that the probability of occurrence is not uniform. We also examined the spatial distribution of the different predictors used for encoding the test images. Figure 7.10 shows the distribution of predictors corresponding to the 4 × 4 block-size for encoding image "lena". The predictor distribution corresponding to the high-activity region coded with a 2 × 2 block-size is represented in Figure 7.11. Similar results are also provided for image "boat" in Figure 7.12 for the low-activity region and in Figure 7.13 for the high-activity one.
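A Huffman code matched to such measured, non-uniform occurrence probabilities can be built with the standard heap construction. The sketch below is ours and the frequency values are illustrative placeholders, not the measured histograms:

```python
import heapq

def huffman_code(freqs):
    """Build a binary Huffman code from a {symbol: frequency} map, e.g. the
    measured occurrence counts of code-vector or predictor indices."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)          # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    code = huffman_code({"v0": 45, "v1": 13, "v2": 12, "v3": 16, "v4": 9, "v5": 5})
    print(code)   # the most frequent index gets the shortest codeword
```

Feeding the same construction the per-sub-codebook histograms (after sorting code-vectors by probability, as discussed below) yields the single or per-codebook VLC tables considered in options (i) and (ii).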


Examination of these figures shows that there is no high correlation between adjacent predictor indices that can be exploited to reduce the predictor side information. This is visible when analyzing the distribution of individual predictors throughout the image. Figures 7.14-7.18 show the distribution of 4 x 4 individual predictors whereas Figures 7.19-7.23 correspond to 2 x 2 block-size. Hence, VLC is more appropriate for coding the predictor information. For this purpose, there are different approaches: 1. Generate a VLC for K predictor indices and use them along with the coding block-size side information. Hence, a VLC for the code-vector codebooks needs to be generated for N, indices only. 2. Generate a VLC for 2K predictor indices. Hence, no variable block-size information needs to be transmitted and a VLC needs to be generated for N. code-vector indices only. 7.2.5 Consequences o n codebook indexing When the same codewords are used for the different codebooks namely C(4)and 0 2 ) ,special arrangement of the codebooks is needed. Recall that in each codebook, there is a number of sub-codebooks each corresponding to one predictor. In order to calculate the probability mass function of the codebook needed to get a VLC such as Huffman, special arrangement of the code-vectors needs t o be performed. In fact, the code-vectors need to be sorted in each sub-codebook according to the descending or ascending order of the probability of occurrence. As mentioned, there is a number of possibilities on how to organize codebooks and sub-codebooks. Moreover, information that needs to be transmitted to the decoder can be multiplexed or separated into basic and side information. Therefore, the bit-stream can be organized in different ways that would exhibit different degrees of sensitivity to channel errors. Thus, a choice of the coded data structure results from a compromise. 
On one hand, the source bit rate has to be reduced; on the other hand, error resilience needs to be increased. It is in this vein that our choice of the information to be transmitted to the decoder, as well as its organization in the bit-stream, is based on its capability to increase error resilience. For this purpose, an error sensitivity analysis is needed for each of the cases of study previously mentioned.
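To make the entropy-coding step concrete, the sketch below builds a Huffman code from a skewed index histogram, illustrating why sorting code-vectors by probability of occurrence pays off: frequent indices receive shorter codewords. The symbols and frequencies are hypothetical stand-ins, not the trained codebook statistics of the thesis.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code from symbol frequencies.

    freqs: dict mapping symbol -> count. Returns dict symbol -> bit string.
    """
    if len(freqs) == 1:                      # degenerate single-symbol case
        return {next(iter(freqs)): "0"}
    # Heap items: (weight, unique tiebreak, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical predictor-index stream with a non-uniform histogram, as
# observed in Figures 7.8-7.9.
indices = [1] * 50 + [2] * 25 + [3] * 15 + [4] * 7 + [5] * 3
code = huffman_code(Counter(indices))
assert len(code[1]) <= len(code[5])   # most frequent index, shortest codeword
```

With these counts the most frequent index gets a 1-bit codeword and the rarest a 4-bit one, which is the bit-rate saving that motivates the VLC choice over fixed-length indexing.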


Fig. 7.10 Predictor distribution for "lena". The five regions from white to black represent the five 4 x 4 predictors; the high-activity region (block size 2 x 2) is represented by the 0 gray level.

Fig. 7.11 Predictor distribution for "lena". The five regions from white to black represent the five 2 x 2 predictors; the low-activity region (block size 4 x 4) is represented by the 0 gray level.

Fig. 7.12 Predictor distribution for "boat". The five regions from white to black represent the five 4 x 4 predictors; the high-activity region (block size 2 x 2) is represented by the 0 gray level.

Fig. 7.13 Predictor distribution for "boat". The five regions from white to black represent the five 2 x 2 predictors; the low-activity region (block size 4 x 4) is represented by the 0 gray level.



Fig. 7.14 Spatial distribution of one of the 4 x 4 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.15 Spatial distribution of one of the 4 x 4 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.16 Spatial distribution of one of the 4 x 4 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.17 Spatial distribution of one of the 4 x 4 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.



Fig. 7.18 Spatial distribution of one of the 4 x 4 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.19 Spatial distribution of one of the 2 x 2 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.20 Spatial distribution of one of the 2 x 2 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.21 Spatial distribution of one of the 2 x 2 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.


Fig. 7.22 Spatial distribution of one of the 2 x 2 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.

Fig. 7.23 Spatial distribution of one of the 2 x 2 predictors used for image "lena" coded with two block sizes 4 x 4 and 2 x 2.


7.3 A bit-stream structure for improved error-resilience

7.3.1 Error sensitivity analysis

In order to exploit the error resilience of the encoder based on the separation of ZIR and ZSR, we found that the predictor information needs to be transmitted as side information. The two other types of information that remain to be considered are the block size and the code-vector index of each block under the coding process. The image coded bit-stream is separated into two types of information: type-I and type-II. Type-I contains the side information, and type-II contains the rest of the image data, which is the entropy-coded code-vector data. We suppose that type-I data is perfectly recovered at the decoder. Therefore, predictor information is available at the decoder side. In order to analyze the error sensitivity of the block-size index and code-vector index, the image coding scheme is summarized in the form of an algorithm for each of the previously proposed cases. For simplicity, we still assume that two coding block sizes are used (4 x 4 and 2 x 2). First, K predictors corresponding to block size 4 x 4 are used whether the


block is coded using the 4 x 4 size or split into four 2 x 2 blocks. In Algorithm-1, the coding block size is part of type-I data. For Algorithm-2, only predictor information is sent as side information.

Algorithm-1 For each 4 x 4 image block,

1. Select the predictor Hi(4) that results in the MMSPE.

2. Based on the comparison of the MMSPE value with a threshold A, check if the block can be classified as a low-activity block.

(a) If yes, indicate this by sending a low-activity one-bit flag as type-I data, and the codeword of the code-vector in sub-codebook Ci(4) that results in the MMSRE as type-II data.

(b) If no, send the high-activity one-bit flag as type-I data, decompose the block into 2 x 2 sub-blocks, and for each sub-block:

i. select the code-vector in sub-codebook Ci(2) that results in the MMSRE and send the codeword as type-II data.

Algorithm-2 For each 4 x 4 image block,

1. Select the predictor Hi(4) that results in the MMSPE.

2. Based on comparison of the MMSPE value with a threshold A, check if the block can be classified as a low-activity block.

(a) If yes, select the code-vector in sub-codebook Ci(4) that results in the MMSRE and send the codeword as type-II data.

(b) If no, decompose the block into 2 x 2 sub-blocks, and for each sub-block:

i. Select the code-vector in sub-codebook Ci(2) that results in the MMSRE and send the codeword as type-II data.

For Algorithm-1, the same Huffman code is used for codebook C(4) and for C(2). The coding block size is sent as side information, which results in an increase of the bit rate of 1/16 bpp (one flag bit per 4 x 4 block). This increase is not really significant, but what is its impact on error resilience? As for the predictor side information, the block-size information is appended to type-I data. Suppose that with high error protection this information is perfectly recovered at the decoder side. In decoding the index bit-stream, once the erroneous region is delimited, the ZIR reconstructed block can be obtained with the appropriate block size since this information is available. In Algorithm-2, once the erroneous region is delimited at the decoder side, ZIR reconstructed blocks have to be generated considering a block size of 4 x 4 since no information about the effective coding block size is available at this stage. Hence, the main difference between the algorithms regarding PSNR performance results from the fact that ZIR reconstructed blocks in Algorithm-2 do not correspond to what can actually be generated according to the coding process. Therefore, in order to avoid reducing the source coding performance in terms of PSNR, we confine ourselves to the use of the appropriate predictors for each coding block size. Different sets of predictors are thus used for block sizes 4 x 4 and 2 x 2, and this information is sent as side information. As this information is crucial for the decoder, it has to be highly protected. We illustrate the coding process in the form of an algorithm as we did for the previous cases.

Algorithm-3 For each 4 x 4 image block,

1. Select the predictor Hi(4) that results in the MMSPE.

2. Based on comparison of the MMSPE value with a threshold A, check if the block can be classified as a low-activity block.

(a) If yes, send the predictor index code to type-I data and the codeword of the code-vector index in sub-codebook Ci(4) that results in the MMSRE to type-II data.

(b) If no, decompose the block into 2 x 2 sub-blocks. For each sub-block:

i. Select the predictor Hi(2) that results in the MMSPE.

ii. Select the code-vector in sub-codebook Ci(2) that results in the MMSRE and send the codeword to type-II data.

3. Send the four predictor indices to type-I data.


We examined the different above-mentioned ways of separating type-I and type-II data. Based on the error sensitivity analysis and the requirements needed to take advantage of the separation of the responses into ZIR and ZSR, we found that the minimum source coding bit rate can be obtained through the use of two entropy codebooks: one for the predictor and variable block-size information, and one for the code-vector indices. The type-I information entropy codebook consists of a single VLC where each codeword indicates one of 2K predictors. Thus, the index corresponding to a single codeword yields two pieces of information to the decoder, namely the predictor index and the coding block size. On the other hand, the code-vector index entropy codebook consists of a VLC table of length Nc. This corresponds to organizing the codebooks according to the illustration given in Figure 7.24.
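The 2K-symbol type-I alphabet can be sketched as a simple index mapping: one symbol jointly carries the predictor index and the coding block size, so the decoder recovers both from a single codeword. The value of K below is an assumption for illustration (five predictors per block size, matching the five regions in the figures).

```python
K = 5  # assumed number of predictors per block size

def joint_symbol(predictor_idx, block_size):
    """Map (predictor index, block size) to one of 2K type-I symbols.

    Symbols 0..K-1 stand for the 4x4 predictors, K..2K-1 for the 2x2 ones,
    so a single VLC codeword conveys both pieces of information.
    """
    assert 0 <= predictor_idx < K and block_size in (4, 2)
    return predictor_idx + (0 if block_size == 4 else K)

def split_symbol(sym):
    """Inverse mapping used at the decoder side."""
    return (sym % K, 4 if sym < K else 2)

assert split_symbol(joint_symbol(3, 2)) == (3, 2)
assert split_symbol(joint_symbol(0, 4)) == (0, 4)
```

A VLC (e.g. Huffman) is then trained over these 2K joint symbols, while the code-vector indices use a separate table of length Nc, as in Figure 7.24.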

Fig. 7.24 Residual codebook organization so that only Nc code-vector indices are addressed: the code-vector index entropy codebook is of size Nc.

7.3.2 Bit-stream of the VB 2D-CELP compressed image

We considered that the bit-stream is separated into type-I data, which contains the information that identifies the coding block size and the predictor index for each block, and type-II data, which contains the entropy-coded code-vector data. Examination of the VB 2D-CELP coded data demonstrated that the type-I section of the bit-stream is extremely sensitive to transmission errors. As this information is crucial for the decoder, it has to be highly protected. We therefore consider a bit-stream organization in which type-I data is separately transmitted with high error protection. Since the objective is to design a low-delay, bandwidth-efficient transmission system, we employ UEP for type-I and type-II data.


It is desired to transmit type-I data as error-free as possible. Therefore, we use the untruncated protocol to transmit this sensitive section of the bit-stream. With the purpose of reducing the transmission delay, we propose to use the protocol with truncated retransmissions for type-II data. The untruncated hybrid ARQ scheme ensures error-free transmission of the sensitive section of the coded bit-stream. This way, the delay due to multiple ARQ retransmissions or interleaving can be minimized, since only a fraction of the image bit-stream rather than the entire bit-stream is transmitted using this protocol.
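The two retransmission policies can be sketched as a toy simulation. Each (re)transmission of a frame is assumed to fail independently with the residual frame error probability left after FEC decoding; this is a simplification of the type-I hybrid ARQ protocol, not the thesis's channel model.

```python
import random

def send_frame(p_frame_error, max_retx, rng):
    """Toy hybrid ARQ: each transmission fails with prob. p_frame_error.

    max_retx=None models the untruncated protocol used for type-I data;
    a finite value L models the truncated protocol used for type-II data.
    Returns (delivered_ok, transmissions_used).
    """
    attempt = 0
    while True:
        attempt += 1
        if rng.random() >= p_frame_error:
            return True, attempt
        if max_retx is not None and attempt > max_retx:
            return False, attempt       # give up: rely on error concealment

rng = random.Random(7)
# Untruncated protocol: delivery is guaranteed, at the cost of unbounded delay.
ok, n = send_frame(0.3, None, rng)
assert ok
# Truncated protocol with L = 2: at most 3 transmissions per frame.
results = [send_frame(0.3, 2, rng) for _ in range(1000)]
assert all(n <= 3 for _, n in results)
```

Under this model the truncated protocol bounds the delay per frame while leaving a small residual frame error rate (roughly p_frame_error^(L+1)) for the concealment machinery to handle, which is the trade-off the UEP design exploits.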

7.4 Coding performance improvement

7.4.1 Results: a comparative study

The uplink transmission model is used to transmit the coded image bit-stream for the case of the highly correlated quasi-static channel. A number of data frames corresponding to the transmission of 25 images has been simulated for the ZIR scheme for different values of the maximum number of retransmissions L. As in the previous experiments, the image transmission performance is measured in terms of mean PSNR (Mean-PSNR), minimum PSNR (Min-PSNR), and maximum PSNR (Max-PSNR) in dB, computed over the 25 decoded images. For the GOB scheme, the volume of coded data corresponds to a source bit rate of 0.577 bpp. A single bit-stream represents the coded image, and transmission of the bit-stream is performed with equal error protection using the truncated protocol. The GOB scheme implements line concealment as well as backward decoding. For the ZIR scheme, the volume of coded data resulting from separation of the information into type-I and type-II data corresponds to a source bit rate of 0.517 bpp. Error-free transmission results in a PSNR of 34.89 dB. For comparison purposes, we first consider the case of no retransmission. As can be seen in Figure 7.25, showing the Mean PSNR versus Eb/No, the ZIR scheme provides improvement over the original GOB scheme. In Figure 7.26, we show the influence of varying the number of retransmissions L on the performance of both schemes. It is observed that the ZIR scheme always leads to higher performance in terms of PSNR. The minimum PSNR obtained over the 25 decoded images is also compared.
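The evaluation metrics above can be sketched as follows. The reference image, noise levels, and sizes below are synthetic placeholders standing in for the 25 decoded images; only the PSNR definition and the Mean/Min/Max statistics mirror the measurement procedure.

```python
import numpy as np

def psnr(ref, dec, peak=255.0):
    """PSNR in dB between a reference and a decoded 8-bit image."""
    mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def psnr_stats(ref, decoded_images):
    """Mean-, Min- and Max-PSNR over a set of decoded images."""
    vals = [psnr(ref, d) for d in decoded_images]
    return float(np.mean(vals)), float(np.min(vals)), float(np.max(vals))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
# Three synthetic "decoded" copies at increasing channel-noise severities.
decoded = [np.clip(ref + rng.normal(0, s, ref.shape), 0, 255)
           for s in (2.0, 5.0, 10.0)]
mean_p, min_p, max_p = psnr_stats(ref, decoded)
assert min_p <= mean_p <= max_p
```

As the text notes, PSNR only partially reflects perceived quality: a low-PSNR decoded image can still look acceptable depending on where the errors land, which is why the Min/Max spread is reported alongside the mean.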


Fig. 7.25 Mean PSNR performance comparison for the ZIR scheme and the GOB scheme in case of no retransmission (L = 0).

Fig. 7.26 Effect of the variation of the retransmission number L on the PSNR performance of the ZIR scheme and the GOB scheme.

Figure 7.27 shows the minimum PSNR obtained with the ZIR scheme, and Figure 7.28 reproduces the results obtained using the GOB scheme. The ZIR scheme not only yields increased performance but also provides a certain smoothness in the curves. This is due to the fact that the concealed blocks based on the ZIR are not simply concealed by substituting them with neighboring blocks, but rather decoded based on the surrounding blocks. It is important to mention that the irregularities in the curves of Figure 7.28 are mainly due to the performance evaluation through PSNR computation. In fact, the degree of image quality degradation strongly depends on the error position. Thus, a decoded image may have a relatively low PSNR but still be of acceptable quality without much annoyance. It would certainly be more appropriate to consider another performance evaluation measure; however, this is left for future work. Comparison of the results of both schemes also shows that the required Eb/No to achieve a desired performance in terms of quality of the decoded images is lower in the case of the


Fig. 7.27 Minimum PSNR obtained over the 25 decoded images using the ZIR scheme and different maximum number of retransmissions L.


Fig. 7.28 Minimum PSNR obtained over the 25 decoded images using the GOB scheme and different maximum number of retransmissions L.

ZIR scheme than for the GOB scheme. In addition, the standard deviation of the Mean PSNR is lower for the ZIR scheme. Generally, images decoded using the ZIR scheme not only have a PSNR closer to the Mean PSNR than to the minimum PSNR, but the difference between the minimum and maximum PSNR is also always lower compared to the results of the GOB scheme. This is illustrated in Figure 7.29, showing the PSNR variation at Eb/No = 3.8 dB using the ZIR scheme, and in Figure 7.30, corresponding to the GOB scheme. Using the GOB and ZIR schemes, we show reconstructed images obtained at Eb/No = 3.8 dB. Using no retransmission for the GOB scheme, the best image obtained is represented in Figure 7.31. The corresponding PSNR is Max-PSNR = 19.96 dB. The worst decoded image through the ZIR scheme obtained with no retransmission of type-II data is shown in Figure 7.32. This decoded image has a PSNR of Min-PSNR = 16.93 dB. Hence, it is only 3 dB worse in performance compared to the above-mentioned GOB best image, but 6 dB


Fig. 7.29 ZIR scheme PSNR variation at Eb/No=3.8 dB.


Fig. 7.30 GOB scheme PSNR variation at Eb/No=3.8 dB.

better than the worst decoded image through the GOB scheme under the same conditions. Therefore, the ZIR scheme provides improved performance, as it offers improved error resilience compared to the GOB scheme even if type-II data is coded using FEC only. Using the truncated protocol to retransmit type-II data significantly improves the ZIR scheme performance. With a maximum of one retransmission, the worst decoded image corresponds to a Min-PSNR of 23.07 dB. This image is represented in Figure 7.33. Allowing one more retransmission attempt increases the PSNR to Min-PSNR = 29.01 dB. The corresponding image is shown in Figure 7.34. The variation of the PSNR for the sequence of transmitted images is shown in Figure 7.29. For the purpose of guaranteeing minimum delay, it is proposed to transmit type-II data with FEC coding only. We compare results of Mean PSNR for this case with the GOB scheme performance when only one retransmission is allowed. Figure 7.35 plots the Mean PSNR as a function of the data FER. It is observed that the ZIR scheme always leads to better performance. Communication using the ZIR scheme is feasible at frame error rates higher than those the GOB scheme can handle. This is also visible in Figure 7.36.


Fig. 7.31 Best decoded image using the GOB scheme at Eb/No = 3.8 dB and with no retransmission: Max-PSNR = 19.96 dB.

Fig. 7.32 Worst decoded image using the ZIR scheme at Eb/No = 3.8 dB when type-II data is sent with no retransmission: Min-PSNR = 16.93 dB.

Fig. 7.33 Worst decoded image using the ZIR scheme at Eb/No = 3.8 dB when type-II data is sent with a maximum of one retransmission: Min-PSNR = 23.07 dB.

Fig. 7.34 Worst decoded image using the ZIR scheme at Eb/No = 3.8 dB when type-II data is sent with a maximum of two retransmissions: Min-PSNR = 29.01 dB.


Fig. 7.35 Mean PSNR performance as a function of data FER: comparison between the ZIR scheme and the GOB scheme.


Fig. 7.36 Comparison between minimum and maximum PSNR for the ZIR scheme, represented as a function of data FER for L = 0.

Figure 7.36 compares the minimum and the maximum PSNR obtained with the ZIR scheme. In order to compare the transmission delay performance of both schemes, we consider the average image transmission time computed over the sequence of transmitted images. Two parameters affect this delay: the maximum number of allowed retransmissions and the volume of image coded data. As the volume of coded data is higher for the GOB scheme, we consider the corresponding minimum achievable delay as a reference and normalize transmission time values by this minimum. Figure 7.37 plots the normalized delay for the ZIR as well as for the GOB scheme under different channel conditions. The use of UEP for the ZIR scheme may result in additional delay because the untruncated protocol is used to transmit type-I data. This is important in severe channel conditions (Eb/No
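The delay normalization can be sketched as follows. The frame counts are hypothetical (chosen only to reflect that the ZIR scheme's 0.517 bpp bit-stream needs fewer frames than the GOB scheme's 0.577 bpp); the reference is the GOB scheme's frame count with zero retransmissions, as in Figure 7.37.

```python
def normalized_delay(frames, retx_counts, frames_ref):
    """Average image transmission time normalized by a reference minimum.

    frames: frames needed for one image with no retransmission.
    retx_counts[i]: number of retransmissions actually used for frame i.
    frames_ref: reference scheme's minimum achievable frame count.
    """
    total = frames + sum(retx_counts)   # transmissions actually sent
    return total / frames_ref

# Hypothetical volumes: the ZIR image fits in 90 frames vs 100 for GOB.
assert normalized_delay(90, [0] * 90, 100) == 0.9    # no retransmissions
assert normalized_delay(90, [1, 1] + [0] * 88, 100) == 0.92
```

Under this accounting, the ZIR scheme starts below 1.0 thanks to its lower source bit rate, and each retransmission of a type-II frame (or each untruncated retransmission of a type-I frame in severe conditions) adds to the normalized delay.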