
11.2 Information Theoretic Basics

In the point-to-point AWGN channel, the output Y is related to the input X according to \(Y = hX + N\), where h is a fading coefficient (often modeled as a Gaussian random variable) and N is the noise, with \(N \sim \mathcal{N}(0, 1)\). Under an average input power constraint \(\mathbb{E}[|X|^2] \le P\), it is known that the optimal input distribution is Gaussian as well, allowing one to obtain the well-known capacity

\[ C = \frac{1}{2}\log_2\!\left(1 + |h|^2 P\right) = \frac{1}{2}\log_2(1 + \mathrm{SINR}) = C(\mathrm{SINR}) \quad \text{(bits/channel use)}. \]

Here SINR is the received signal-to-interference-plus-noise ratio, and \(C(x) = \frac{1}{2}\log_2(1 + x)\). Gaussian noise channels have the computationally convenient property that the capacity-achieving input distribution p(x) is often Gaussian as well. Thus, in Gaussian noise channels, even when the capacity-achieving input distribution of the channel is unknown, achievable rate regions are often computed assuming Gaussian input distributions.

Capacity as defined above is a single number. When multiple data streams are to be communicated, for example in a channel that consists of two users transmitting at rates \(R_1\) and \(R_2\), respectively, the capacity region, in this case a two-dimensional region, is the generalization of capacity to multiple dimensions. While capacity is central to many information theoretic studies, it is often challenging to determine.
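The capacity expression above is straightforward to evaluate numerically. The following is a minimal sketch (the function name `awgn_capacity` is ours, not from the text) of the real-valued AWGN capacity \(C = \frac{1}{2}\log_2(1 + |h|^2 P)\) with unit-variance noise, matching the 1/2 prefactor used in this section:

```python
import math

def awgn_capacity(h: float, p: float) -> float:
    """Capacity of the real point-to-point AWGN channel Y = hX + N,
    N ~ N(0, 1), under the power constraint E[|X|^2] <= P.
    Returns bits per channel use: 0.5 * log2(1 + |h|^2 * P)."""
    snr = (abs(h) ** 2) * p
    return 0.5 * math.log2(1.0 + snr)

# Example: a unit-gain channel (h = 1) at P = 3 gives
# C = 0.5 * log2(1 + 3) = 1 bit/channel use.
print(awgn_capacity(1.0, 3.0))
```

Note that for a complex baseband channel the prefactor 1/2 would be dropped; the version here follows the real-channel convention of the formula above.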
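To make the two-user capacity region concrete, consider the standard two-user Gaussian multiple-access channel (the text does not fix a specific channel model here, so this is an illustrative assumption) with per-user powers \(P_1, P_2\) and unit-variance noise. Its capacity region is the pentagon \(R_1 \le C(P_1)\), \(R_2 \le C(P_2)\), \(R_1 + R_2 \le C(P_1 + P_2)\), which a membership test (helper names `C` and `in_mac_region` are ours) can sketch as:

```python
import math

def C(x: float) -> float:
    """The function C(x) = 0.5 * log2(1 + x) defined in the text."""
    return 0.5 * math.log2(1.0 + x)

def in_mac_region(r1: float, r2: float, p1: float, p2: float) -> bool:
    """Check whether the rate pair (R1, R2) lies in the capacity region of
    the two-user Gaussian multiple-access channel with unit-variance noise:
      R1 <= C(P1),  R2 <= C(P2),  R1 + R2 <= C(P1 + P2)."""
    return r1 <= C(p1) and r2 <= C(p2) and r1 + r2 <= C(p1 + p2)

# With P1 = P2 = 3: each user alone can achieve C(3) = 1 bit, but the
# sum-rate is capped at C(6) = 0.5 * log2(7) ~ 1.40 bits, so the pair
# (1.0, 1.0) is outside the region even though each rate is individually feasible.
print(in_mac_region(0.5, 0.5, 3.0, 3.0))  # inside
print(in_mac_region(1.0, 1.0, 3.0, 3.0))  # outside: sum-rate constraint binds
```

The sum-rate constraint is what makes the region genuinely two-dimensional: the feasible rate pairs cannot be described by per-user capacities alone.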