EE 346-LN-CH1-II



    EE 346-CH I-II 07/08 Fall 1

    NEAR EAST UNIVERSITY

    FACULTY OF ENGINEERING

    DEPARTMENT OF ELECTRICAL & ELECTRONIC

    ENGINEERING

    EE 346

    COMMUNICATION SYSTEMS

    LECTURE NOTES

    Prepared by

    Dr. Tayseer Alshanableh

    Nicosia-2007


    CHAPTER I

INTRODUCTION TO COMMUNICATION SYSTEMS

    The Structure of a Communication System

    An electrical communication system is designed to send messages or information from a

    source that generates the messages to one or more destinations. In general, a communication

    system can be represented by the functional block diagram shown in Figure 1.1.

The information generated by the source may be in the form of voice (a speech source), a picture (an image source), or plain text in some particular language, such as English, Japanese, German, or French. An essential feature of any source that generates information is that its output is described in probabilistic terms; that is, the output of a source is not deterministic. Otherwise, there would be no need to transmit the message.

Figure 1.1. Functional block diagram of a communication system.

    A transducer is usually required to convert the output of a source into an electrical signal that

    is suitable for transmission. For example, a microphone serves as the transducer that converts

    an acoustic speech signal into an electrical signal, and a video camera converts an image into

    an electrical signal. At the destination, a similar transducer is required to convert the electrical

    signals that are received into a form that is suitable for the user; for example, acoustic signals,

    images, etc.

    The heart of the communication system consists of three basic parts, namely, the transmitter,

    the channel, and the receiver. The functions performed by these three elements are described

    below.

(Figure 1.1 blocks: information source and input transducer → transmitter → channel → receiver → output transducer → output signal.)


    Transmitter

    A transmitter converts the electrical signal into a form that is suitable for transmission through

    the physical channel or transmission medium. For example, in radio and TV broadcast, the

    Federal Communications Commission (FCC) specifies the frequency range for each

    transmitting station. Hence, the transmitter must translate the information signal to be

    transmitted into the appropriate frequency range that matches the frequency allocation

    assigned to the transmitter. Thus, signals transmitted by multiple radio stations do not

    interfere with one another. Similar functions are performed in telephone communication sys-

    tems, where the electrical speech signals from many users are transmitted over the same wire.

    In general, the transmitter performs the matching of the message signal to the channel by a

    process called modulation. Usually, modulation involves the use of the information signal to

systematically vary the amplitude, frequency, or phase of a sinusoidal carrier.

In general, carrier modulation such as AM, FM, and PM is performed at the transmitter, as

    indicated above, to convert the information signal to a form that matches the characteristic of

    the channel. Thus, through the process of modulation, the information signal is translated in

    frequency to match the allocation of the channel. The choice of the type of modulation is

    based on several factors, such as

- the amount of bandwidth allocated,
- the types of noise and interference that the signal encounters in transmission over the

    channel, and

- the electronic devices that are available for signal amplification prior to transmission.

In any case, the modulation process makes it possible to accommodate the transmission of

    multiple messages from many users over the same physical channel.
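The frequency-translation effect of carrier modulation can be illustrated numerically. The short NumPy sketch below is our own illustration (the sample rate and frequencies are arbitrary choices, not values from these notes): a 1 kHz message amplitude-modulates a 20 kHz carrier, and the spectrum of the product shows the message translated to sidebands at fc ± fm around the carrier.

```python
import numpy as np

fs = 100_000                        # sample rate, Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
fm, fc = 1_000, 20_000              # message and carrier frequencies, Hz

message = np.cos(2 * np.pi * fm * t)
carrier = np.cos(2 * np.pi * fc * t)
am = (1 + 0.5 * message) * carrier  # conventional AM, modulation index 0.5

# Modulation translates the message spectrum to sidebands at fc - fm and fc + fm:
spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(len(am), 1 / fs)
peaks = freqs[spectrum > 0.2 * spectrum.max()]
print(peaks)  # significant energy only at 19, 20 and 21 kHz
```

The three peaks (carrier plus two sidebands) show how the information signal has been moved into the frequency allocation of the channel.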

    In addition to modulation, other functions that are usually performed at the transmitter are

    filtering of the information-bearing signal, amplification of the modulated signal, and in the

    case of wireless transmission, radiation of the signal by means of a transmitting antenna.

    Channel

    The communications channel is the physical medium that is used to send the signal from the

    transmitter to the receiver. In wireless transmission, the channel is usually the atmosphere

    (free space). On the other hand, telephone channels usually employ a variety of physical

    media, including wire lines, optical fibre cables, and wireless (microwave radio). Whatever

    the physical medium for signal transmission, the essential feature is that the transmitted signal

    is corrupted in a random manner by a variety of possible mechanisms. The most common

    form of signal degradation comes in the form of additive noise, which is generated at the front


    end of the receiver, where signal amplification is performed. This noise is often called

    thermal noise. In wireless transmission, additional additive disturbances are man-made noise

    and atmospheric noise picked up by a receiving antenna.

    Signal distortions are usually characterised as random phenomena and described in statistical

    terms. The effect of these signal distortions must be taken into account in the design of the

    communication system.

    In the design of a communication system, the system designer works with mathematical

    models that statistically characterise the signal distortion encountered on physical channels.

    Often, the statistical description that is used in a mathematical model is a result of actual

    empirical measurements obtained from experiments involving signal transmission over such

channels. In such cases, there is a physical justification for the mathematical model used in the

    design of communication systems. On the other hand, in some communication system

    designs, the statistical characteristics of the channel may vary significantly with time. In such

    cases, the system designer may design a communication system that is robust to the variety of

    signal distortions. This can be accomplished by having the system adapt some of its

    parameters to the channel distortion encountered.

    Receiver

    The function of the receiver is to recover the message signal contained in the received signal.

    If the message signal is transmitted by carrier modulation, the receiver performs carrier

    demodulation to extract the message from the sinusoidal carrier. Since the signal

    demodulation is performed in the presence of additive noise and possibly other signal

    distortions, the demodulated message signal is generally degraded to some extent by the

    presence of these distortions in the received signal. The fidelity of the received message

    signal is a function of the type of modulation, the strength of the additive noise, the type and

    strength of any other additive interference, and the type of any non-additive interference.

    Besides performing the primary function of signal demodulation, the receiver also performs a

    number of peripheral functions, including signal filtering and noise suppression.

    Digital Communication System

    In a digital communication system, the functional operations performed at the transmitter and

receiver must be expanded to include message signal discretization at the transmitter and

    message signal synthesis or interpolation at the receiver. Additional functions include

    redundancy removal, and channel coding and decoding.

    Figure 1.2 illustrates the functional diagram and the basic elements of a digital

    communication system. The source output may be either an analog signal, such as audio or


    video signal, or a digital signal, such as the output of a Teletype machine, which is discrete

    in time and has a finite number of output characters. In a digital communication system, the

    messages produced by the source are usually converted into a sequence of binary digits.

    Figure 1.2. Basic elements of a digital communication system

    Ideally, we would like to represent the source output (message) by as few binary digits as

    possible. In other words, we seek an efficient representation of the source output that results

    in little or no redundancy. The process of efficiently converting the output of either an analog

or a digital source into a sequence of binary digits is called source encoding or data

    compression.

    The sequence of binary digits from the source encoder, which we call the information

    sequence, is passed to the channel encoder. The purpose of the channel encoder is to introduce

    in a controlled manner some redundancy in the binary information sequence, which can be

    used at the receiver to overcome the effects of noise and interference encountered in the

    transmission of the signal through the channel. Thus, the added redundancy serves to increase

    the reliability of the received data and improves the fidelity of the received signal. In effect,

    redundancy in the information sequence aids the receiver in decoding the desired information

    sequence. For example, a (trivial) form of encoding of the binary information sequence is

    simply to repeat each binary digit m times, where m is some positive integer.
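This m-fold repetition scheme can be sketched in a few lines of Python (the function names are ours, for illustration only). With m = 3, a majority vote at the receiver corrects any single bit error within a block, showing how the added redundancy aids decoding:

```python
def repeat_encode(bits, m=3):
    """Channel-encode by repeating each binary digit m times."""
    return [b for b in bits for _ in range(m)]

def repeat_decode(received, m=3):
    """Decode by majority vote over each block of m received bits."""
    return [int(sum(received[i:i + m]) > m // 2)
            for i in range(0, len(received), m)]

coded = repeat_encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
noisy = coded[:]
noisy[4] ^= 1                      # channel flips one bit in the middle block
print(repeat_decode(noisy))        # [1, 0, 1] -- the error is corrected
```

The price of this reliability is rate: m channel bits are spent per information bit, which is why more powerful codes are preferred in practice.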

    The binary sequence at the output of the channel encoder is passed to the digital modulator,

    which serves as the interface to the communications channel. Since nearly all of the

    communication channels encountered in practice are capable of transmitting electrical signals

    (waveforms), the primary purpose of the digital modulator is to map the binary information

    sequence into signal waveforms.

At the receiving end of a digital communications system, the digital demodulator processes the channel-corrupted transmitted waveform and reduces each waveform to a single number that represents an estimate of the transmitted data symbol.

(Figure 1.2 blocks: information source and input transducer → source encoder → channel encoder → digital modulator → channel → digital demodulator → channel decoder → source decoder → output transducer → output signal.)

A measure of how well the demodulator and decoder perform is the frequency with which errors occur in the decoded

    sequence. More precisely, the average probability of a bit-error at the output of the decoder is

    a measure of the performance of the demodulator-decoder combination. In general, the

    probability of error is a function of the code characteristics, the types of waveforms used to

    transmit the information over the channel, the transmitter power, the characteristics of the

    channel (i.e., the amount of noise), the nature of the interference, etc., and the method of

    demodulation and decoding. These items and their effect on performance will be discussed in

    detail in subsequent chapters.

    As a final step, when an analog output is desired, the source decoder accepts the output

    sequence from the channel decoder, and from knowledge of the source encoding method

    used, attempts to reconstruct the original signal from the source. Due to channel decoding

    errors and possible distortion introduced by the source encoder and, perhaps, the source

    decoder, the signal at the output of the source decoder is an approximation to the original

    source output. The difference or some function of the difference between the original signal

    and the reconstructed signal is a measure of the distortion introduced by the digital

    communications system.

    Early Work in Digital Communications

    Although Morse is responsible for the development of the first electrical digital

communication system (telegraphy), the beginnings of what we now regard as modern digital

    communications stem from the work of Nyquist (1924), who investigated the problem of

    determining the maximum signalling rate that can be used over a telegraph channel of a given

    bandwidth without intersymbol interference. He formulated a model of a telegraph system in

    which a transmitted signal has the general form

    s(t) = Σ_n a_n g(t − nT)

where g(t) represents a basic pulse shape and {a_n} is the binary data sequence of {±1} transmitted at a rate of 1/T bits per second.
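Nyquist's signal model is easy to generate directly. The sketch below is our own illustration (function name and sample counts are arbitrary): it builds s(t) with a rectangular pulse g(t) of duration T and symbols a_n in {+1, −1}.

```python
import numpy as np

def pam_waveform(bits, sps=8):
    """s(t) = sum_n a_n g(t - nT): binary PAM with a rectangular pulse g(t)
    of duration T, sampled at sps samples per symbol interval."""
    a = np.array([1 if b else -1 for b in bits])  # map bits to {+1, -1}
    return np.repeat(a, sps)                      # hold each symbol for one T

s = pam_waveform([1, 0, 1, 1])
print(s[:16])  # eight samples at +1, then eight at -1
```

With a rectangular g(t) the symbols do not overlap; Nyquist's question was how fast such pulses can be sent over a bandlimited channel before they smear into one another (intersymbol interference).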

In light of Nyquist's work, Hartley (1928) considered the issue of the amount of data that can be transmitted reliably over a bandlimited channel when multiple amplitude levels are used. Due to the presence of noise and other interference, Hartley postulated that the receiver could reliably estimate the received signal amplitude to some accuracy, say Aδ. This investigation led Hartley to conclude that there is a maximum data rate that can be communicated reliably over a bandlimited channel when the maximum signal amplitude is limited to Amax (a fixed power constraint) and the amplitude resolution is Aδ.


    Another significant advance in the development of communications was the work of Wiener

    (1942) who considered the problem of estimating a desired signal waveform s(t) in the

    presence of additive noise n(t), based on observation of the received signal r(t) = s(t) + n(t).

    This problem arises in signal demodulation. Wiener determined the linear filter whose output

is the best mean-square approximation to the desired signal s(t). The resulting filter is called the optimum linear (Wiener) filter.

Hartley's and Nyquist's results on the maximum

    transmission rate of digital information were precursors to the work of Shannon (1948 a, b)

    who established the mathematical foundations for information theory and derived the

    fundamental limits for digital communication systems. In his pioneering work, Shannon

    formulated the basic problem of reliable transmission of information in statistical terms, using

    probabilistic models for information sources and communication channels. Based on such a

    statistical formulation, he adopted a logarithmic measure for the information content of a

    source. He also demonstrated that the effect of a transmitter power constraint, a bandwidth

    constraint, and additive noise can be associated with the channel and incorporated into a

single parameter, called the channel capacity. For example, in the case of an additive white (spectrally flat) Gaussian noise interference, an ideal bandlimited channel of bandwidth W has a capacity C given by

    C = W log2(1 + P/(W N0)) bits/s

where P is the average transmitted power and N0 is the power spectral density of the additive

    noise. The significance of the channel capacity is as follows: If the information rate R from

    the source is less than C (R < C), then it is theoretically possible to achieve reliable (error-

free) transmission through the channel by appropriate coding. On the other hand, if R > C,

    reliable transmission is not possible regardless of the amount of signal processing performed

    at the transmitter and receiver. Thus, Shannon established basic limits on communication of

    information and gave birth to a new field that is now called information theory.
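Shannon's capacity formula is straightforward to evaluate numerically. The sketch below (the function name is ours) computes C for an ideal bandlimited AWGN channel and reproduces the familiar result that a 3 kHz telephone-grade channel at 30 dB SNR supports roughly 30 kbit/s:

```python
import math

def channel_capacity(W, P, N0):
    """C = W * log2(1 + P / (W * N0)) bits/s for an ideal bandlimited
    AWGN channel: W in Hz, P in watts, N0 in W/Hz (one-sided PSD)."""
    return W * math.log2(1.0 + P / (W * N0))

# 3 kHz bandwidth with SNR = P / (W * N0) = 1000, i.e. 30 dB:
C = channel_capacity(W=3000, P=1.0, N0=1.0 / (3000 * 1000))
print(round(C))  # about 29902 bits/s
```

Any rate R below this C is achievable with arbitrarily small error probability by suitable coding; any rate above it is not, regardless of the signal processing used.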

    Initially the fundamental work of Shannon had a relatively small impact on the design and

    development of new digital communications systems. In part, this was due to the small

    demand for digital information transmission during the 1950's. Another reason was the

    relatively large complexity and, hence, the high cost of digital hardware required to achieve

    the high efficiency and high reliability predicted by Shannon's theory.

    Another important contribution to the field of digital communications is the work of

    Kotelnikov (1947), which provided a coherent analysis of the various digital communication

systems based on a geometrical approach. Kotelnikov's approach was later expanded by Wozencraft and Jacobs (1965).


    The increase in the demand for data transmission during the last three decades, coupled with

    the development of more sophisticated integrated circuits, has led to the development of very

    efficient and more reliable digital communications systems. In the course of these

    developments, Shannon's original results and the generalization of his results on maximum

    transmission limits over a channel and on bounds on the performance achieved have served as

    benchmarks for any given communications system design. The theoretical limits derived by

    Shannon and other researchers that contributed to the development of information theory

    serve as an ultimate goal in the continuing efforts to design and develop more efficient digital

    communications systems.

    The classic work of Hamming (1950) on error detecting and error-correcting codes to combat

    the detrimental effects of channel noise came after Shannon's publications. Hamming's work

    stimulated many researchers in the years that followed, and a variety of new and powerful

codes were discovered, many of which are used today in the implementation of modern

    communication systems.

    Communication Channels and Their Characteristics

    The physical channel may be a pair of wires that carry the electrical signal, or an optical fibre

    that carries the information on a modulated light beam, or an underwater ocean channel in

    which the information is transmitted acoustically, or free space over which the information-

    bearing signal is radiated by use of an antenna. Other media that can be characterized as

    communication channels are data storage media, such as magnetic tape, magnetic disks, and

    optical disks.

    One common problem in signal transmission through any channel is additive noise. In

    general, additive noise is generated internally by components such as resistors and solid-state

    devices used to implement the communication system. This is sometimes called thermal

    noise. Other sources of noise and interference may arise externally to the system, such as

    interference from other users of the channel. When such noise and interference occupy the

same frequency band as the desired signal, their effect can be minimized by proper design of the

    transmitted signal and its demodulator at the receiver. Other types of signal degradations that

    may be encountered in transmission over the channel are signal attenuation, amplitude and

    phase distortion, and multipath distortion.

    Increasing the power in the transmitted signal may minimize the effects of noise. However,

    equipment and other practical constraints limit the power level in the transmitted signal.

Another basic limitation is the available channel bandwidth. A bandwidth constraint is usually due to the physical limitations of the medium and the electronic components used to

    implement the transmitter and the receiver. These two limitations result in constraining the


    amount of data that can be transmitted reliably over any communications channel.

    Shannon's basic results relate the channel capacity to the available transmitted power and

    channel bandwidth.

    Wireline Channels

    The telephone network makes extensive use of wirelines for voice signal transmission, as well

    as data and video transmission. Twisted pair wirelines and coaxial cable are basically guided

    electromagnetic channels, which provide relatively modest bandwidths. Telephone wire

    generally used to connect a customer to a central office has a bandwidth of several hundred

kilohertz (kHz). On the other hand, coaxial cable has a usable bandwidth of several

    megahertz (MHz). Figure 1.3 illustrates the frequency range of guided electromagnetic

    channels, which includes waveguides and optical fibres.

    Signals transmitted through such channels are distorted in both amplitude and phase and

    further corrupted by additive noise. Twisted-pair wireline channels are also prone to crosstalk

    interference from physically adjacent channels. Because wireline channels carry a large

    percentage of our daily communications around the country and the world, much research has

    been performed on the characterization of their transmission properties and on methods for

    mitigating the amplitude and phase distortion encountered in signal transmission.

    Fibre Optic Channels

    Optical fibres offer the communications system designer a channel bandwidth that is several

    orders of magnitude larger than coaxial cable channels. During the past decade, optical fibre

    cables have been developed that have relatively low signal attenuation, and highly reliable

    photonic devices have been developed for signal generation and signal detection. These

    technological advances have resulted in a rapid deployment of optical fibre channels, both in

    domestic telecommunication systems as well as for transatlantic and trans-pacific

communications. With the large bandwidth available on fibre optic channels, it is possible for telephone companies to offer subscribers a wide array of telecommunication services,

    including voice, data, facsimile, and video.

    The transmitter or modulator in a fibre optic communication system is a light source, either a

    light-emitting diode (LED) or a laser. Information is transmitted by varying (modulating) the

    intensity of the light source with the message signal. The light propagates through the fibre as

    a light wave and is amplified periodically (in the case of digital transmission, it is detected

and regenerated by repeaters) along the transmission path to compensate for signal attenuation. At the receiver, the light intensity is detected by a photodiode, whose output is an


    electrical signal that varies in direct proportion to the power of the light impinging on the

    photodiode.

Figure 1.3. Frequency ranges for guided wire channels (wirelines, coaxial cable, and waveguides).

    Wireless Electromagnetic Channels

    In wireless communication systems, electromagnetic energy is coupled to the propagation

    medium by an antenna, which serves as the radiator. The physical size and the configuration

    of the antenna depend primarily on the frequency of operation. To obtain efficient radiation of

    electromagnetic energy the antenna must be longer than 1/10 of the wavelength.

    Consequently, a radio transmitting in the AM frequency band, say at 1 MHz (corresponding

to a wavelength of λ = c/fc = 300 m), requires an antenna of at least 30 meters.
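The λ/10 rule of thumb is easy to check numerically; the two-line sketch below (function name ours) reproduces the arithmetic in the text:

```python
c = 3e8  # speed of light, m/s

def min_antenna_length(fc_hz):
    """Minimum efficient antenna length: one tenth of the wavelength c / fc."""
    return (c / fc_hz) / 10

print(min_antenna_length(1e6))  # 30.0 m for a 1 MHz AM carrier
```

This is why higher carrier frequencies permit physically smaller antennas.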

    Figure 1.4 illustrates the various frequency bands of the electromagnetic spectrum. The mode

    of propagation of electromagnetic waves in the atmosphere and in free space may be

    subdivided into three categories, namely, ground-wave propagation, sky-wave propagation,

    and line-of-sight (LOS) propagation.



    In the VLF and ELF frequency bands, where the wavelengths exceed 10 km, the earth and

    the ionosphere act as a waveguide for electromagnetic wave propagation. In these frequency

    ranges, communication signals practically propagate around the globe. For this reason, these

    frequency bands are primarily used to provide navigational aids from shore to ships around

the world. The channel bandwidths available in these frequency bands are relatively small

    (usually from 1% to 10% of the center frequency), and hence, the information that is

    transmitted through these channels is relatively slow speed and, generally, confined to digital

    transmission.

    A dominant type of noise at these frequencies is generated from thunderstorm activity around

    the globe, especially in tropical regions. Interference results from the many users of these

    frequency bands.


Figure 1.4. Frequency ranges for wireless electromagnetic channels, with principal uses by band:

- Ultraviolet, visible light, and infrared (about 10^14 to 10^15 Hz): experimental
- Millimetre wave (EHF, 30-300 GHz): experimental, navigation, satellite-to-satellite links
- Super High Frequency (SHF, 3-30 GHz, microwave radio): microwave relay, earth-satellite links, radar
- Ultra High Frequency (UHF, 0.3-3 GHz): UHF TV, mobile and aeronautical radio
- Very High Frequency (VHF, 30-300 MHz): VHF TV and FM broadcast, mobile radio
- High Frequency (HF, 3-30 MHz, shortwave radio): business, amateur radio, international radio, citizens band
- Medium Frequency (MF, 0.3-3 MHz): AM broadcast, aeronautical navigation, radio teletype
- Low Frequency (LF, 30-300 kHz) and Very Low Frequency (VLF, 3-30 kHz, longwave radio): navigation
- Audio band (below about 10 kHz)

    Ground-wave propagation, as illustrated in Figure 1.5, is the dominant mode of propagation

    for frequencies in the MF band (0.3 to 3 MHz). This is the frequency band used for AM

    broadcasting and maritime radio broadcasting. In AM broadcasting, the range with ground-

    wave propagation of even the more powerful radio stations is limited to about 100 miles.



    Atmospheric noise, man-made noise, and thermal noise from electronic components at the

receiver are the dominant disturbances for signal transmission in the MF band.

Figure 1.5. Illustration of ground-wave propagation.

    Sky-wave propagation, as illustrated in Figure 1.6, results from transmitted signals being

    reflected (bent or refracted) from the ionosphere, which consists of several layers of charged

    particles ranging in altitude from 30 to 250 miles above the surface of the earth. During the

    daytime hours, the heating of the lower atmosphere by the sun causes the formation of the

lower layers at altitudes below 75 miles. These lower layers, especially the D-layer, serve to

    absorb frequencies below 2 MHz, thus severely limiting sky-wave propagation of AM radio

    broadcast. However, during the night-time hours, the electron density in the lower layers of

    the ionosphere drops sharply and the frequency absorption that occurs during the daytime is

    significantly reduced. As a consequence, powerful AM radio broadcast stations can propagate

    over large distances via sky wave over the F-layer of the ionosphere, which ranges from 90

    miles to 250 miles above the surface of the earth.

    A frequently occurring problem with electromagnetic wave propagation via sky wave in the

HF frequency range is signal multipath. Signal multipath occurs when the transmitted signal arrives at the receiver via two or more propagation paths with different delays, and it generally results in intersymbol interference in a digital communication system.

    Moreover, the signal components arriving via different propagation paths may add

    destructively, resulting in a phenomenon called signal fading, which most people have

    experienced when listening to a distant radio station at night when sky wave is the dominant

propagation mode. Additive noise at HF is a combination of atmospheric noise and thermal noise.

    Sky-wave ionospheric propagation ceases to exist at frequencies above approximately 30

    MHz, which is the end of the HF band. However, it is possible to have ionospheric scatter

propagation at frequencies in the range of 30 MHz to 60 MHz, resulting from signal scattering

    from the lower ionosphere. It is also possible to communicate over distances of several

hundred miles by use of tropospheric scattering at frequencies in the range of 40 MHz to 300

    MHz. Troposcatter results from signal scattering due to particles in the atmosphere at altitudes


    of 10 miles or less. Generally, ionospheric scatter and tropospheric scatter involve large

    signal propagation losses and require a large amount of transmitter power and relatively large

    antennas.

    Figure 1.6. Illustration of sky-wave propagation.

    Frequencies above 30 MHz propagate through the ionosphere with relatively little loss and

    make satellite and extraterrestrial communications possible. Hence, at frequencies in the VHF

    band and higher, the dominant mode of electromagnetic propagation is line-of-sight (LOS)

    propagation. For terrestrial communication systems, this means that the transmitter and

    receiver antennas must be in direct LOS with relatively little or no obstruction. For this

    reason, television stations transmitting in the VHF and UHF frequency bands mount their

    antennas on high towers to achieve a broad coverage area.

In general, the coverage area for LOS propagation is limited by the curvature of the earth. If the transmitting antenna is mounted at a height h feet above the surface of the earth, the distance to the radio horizon, assuming no physical obstructions such as mountains, is approximately d = √(2h) miles. For example, a TV antenna mounted on a tower of 1000 ft. in

    height provides coverage of approximately 50 miles. As another example, microwave radio

    relay systems used extensively for telephone and video transmission at frequencies above 1

    GHz have antennas mounted on tall towers or on the top of tall buildings.

    The dominant noise limiting the performance of communication systems in the VHF and

    UHF frequency ranges is thermal noise generated in the receiver front end and cosmic noise

picked up by the antenna. At frequencies in the SHF band above 10 GHz, atmospheric

conditions play a major role in signal propagation. Figure 1.7 illustrates the signal attenuation in dB/mile due to precipitation for frequencies in the range of 10 to 100 GHz. We observe

    that heavy rain introduces extremely high propagation losses that can result in service outages

    (total breakdown in the communication system).

    At frequencies above the EHF band, we have the infrared and visible light regions of the

    electromagnetic spectrum, which can be used to provide LOS optical communication in free


    space. To date, these frequency bands have been used in experimental communication

    systems, such as satellite-to-satellite links.

Figure 1.7. Signal attenuation due to precipitation (attenuation in dB/mi versus frequency in GHz, with curves for fog at 0.01 in/h, light rain at 0.05 in/h, medium rain at 0.1 in/h, and heavy rain at 1 in/h).

    Underwater Acoustic Channels

    Over the past few decades, ocean exploration activity has been steadily increasing. Coupled

    with this increase is the need to transmit data collected by sensors placed under water to the

    surface of the ocean. From there it is possible to relay the data via a satellite to a data

collection center.

Electromagnetic waves do not propagate over long distances under water except at extremely low frequencies. However, the transmission of signals at such low frequencies is prohibitively expensive because of the large and powerful transmitters required. The attenuation of electromagnetic waves in water can be expressed in terms of the skin depth, which is the distance over which a signal is attenuated by a factor of 1/e. For sea water, the skin depth is

δ = 250/√f

where f is expressed in Hz and δ is in meters. For example, at 10 kHz, the skin depth is 2.5 meters. In contrast, acoustic signals propagate over distances of tens and even hundreds of kilometres.
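The skin-depth relation above is easy to evaluate numerically. A minimal sketch (the function name is ours; the 250/√f constant is the sea-water value quoted above):

```python
import math

def skin_depth_seawater(f_hz):
    """Skin depth in metres for sea water: delta = 250 / sqrt(f), f in Hz."""
    return 250.0 / math.sqrt(f_hz)

# At 10 kHz a signal is attenuated by 1/e every 2.5 m, which is why
# long-range electromagnetic transmission under water is impractical.
print(skin_depth_seawater(10e3))   # 2.5
```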

    An underwater acoustic channel is characterized as a multipath channel due to signal

    reflections from the surface and the bottom of the sea. Because of wave motion, the signal

    multipath components undergo time-varying propagation delays, which result in signal

    fading. In addition, there is frequency-dependent attenuation, which is approximately

    proportional to the square of the signal frequency.



    Ambient ocean acoustic noise is caused by shrimp, fish, and various mammals. Near

    harbours, there is also man-made acoustic noise in addition to the ambient noise. In spite of

    this hostile environment, it is possible to design and implement efficient and highly reliable

    underwater acoustic communication systems for transmitting digital signals over large

    distances.

    Storage Channels

    Information storage and retrieval systems constitute a very significant part of data-handling

    activities on a daily basis. Magnetic tape, including digital audio tape and video tape,

    magnetic disks used for storing large amounts of computer data, optical disks used for

    computer data storage and compact disks are examples of data storage systems that can be

characterized as communication channels. The process of storing data on a magnetic tape or a magnetic or optical disk is equivalent to transmitting a signal over a telephone or a radio channel. The readback process and the signal processing used in storage systems to recover the stored information are equivalent to the functions performed by a receiver in a telephone or radio communication system to recover the transmitted information.

    Additive noise generated by the electronic components and interference from adjacent tracks

    is generally present in the read back signal of a storage system, just as is the case in a

    telephone or a radio communication system.

    The amount of data that can be stored is generally limited by the size of the disk or tape and

    the density (number of bits stored per square inch) that can be achieved by the write/read

electronic systems and heads. For example, a packing density of 10⁹ bits per square inch has been recently demonstrated in an experimental magnetic disk storage system. (Current commercial magnetic storage products achieve a much lower density.) The speed at which

    data can be written on a disk or tape and the speed at which it can be read back is also limited

    by the associated mechanical and electrical subsystems that constitute an information storage

    system.

    Channel coding and modulation are essential components of a well-designed digital magnetic

    or optical storage system. In the read back process, the signal is demodulated and the added

    redundancy introduced by the channel encoder is used to correct errors in the read back

    signal.

    Mathematical Models for Communication Channels

    In the design of communication systems for transmitting information through physical

    channels, we find it convenient to construct mathematical models that reflect the most

    important characteristics of the transmission medium. Then, the mathematical model for the


    channel is used in the design of the channel encoder and modulator at the transmitter and

    the demodulator and channel decoder at the receiver. Below, we provide a brief description of

    the channel models that are frequently used to characterize many of the physical channels that

    we encounter in practice.

    The Additive Noise Channel

The simplest mathematical model for a communication channel is the additive noise channel, illustrated in Figure 1.8. In this model, the transmitted signal s(t) is corrupted by an additive

    random noise process n(t). Physically, the additive noise process may arise from electronic

    components and amplifiers at the receiver of the communication system, or from interference

    encountered in transmission as in the case of radio signal transmission.

    If the noise is introduced primarily by electronic components and amplifiers at the receiver, it

    may be characterised as thermal noise. This type of noise is characterised statistically as a

Gaussian noise process. Hence, the resulting mathematical model for the channel is usually called the additive Gaussian noise channel. Because this model applies to a broad class of physical communication channels, and because of its mathematical tractability, it is the predominant channel model used in communication system analysis and design.

    Figure 1.8. The additive noise channel.

    The Linear Filter Channel

    In some physical channels such as wireline telephone channels, filters are used to ensure that

    the transmitted signals do not exceed specified bandwidth limitations and thus do not interfere

with one another. Such channels are generally characterized mathematically as linear filter channels with additive noise, as shown in Figure 1.9. Hence, if the channel input is the signal s(t), the channel output is the signal

r(t) = s(t) ⋆ h(t) + n(t) = ∫ h(τ) s(t − τ) dτ + n(t)

where h(τ) is the impulse response of the linear filter and ⋆ denotes convolution.



    Figure 1.9. The linear filter channel with additive noise.

When the signal undergoes attenuation in transmission through the channel, the received signal is

r(t) = α s(t) + n(t)

where α represents the attenuation factor.

    The Linear Time-Variant Filter Channel

    Physical channels such as underwater acoustic channels and ionospheric radio channels which

    result in time-variant multipath propagation of the transmitted signal may be characterized

mathematically as time-variant linear filters. Such linear filters are characterized by a time-variant channel impulse response h(τ; t), where h(τ; t) is the response of the channel at time t due to an impulse applied at time t − τ. Thus, τ represents the delay (elapsed-time) variable. The linear time-variant filter channel with additive noise is illustrated in Figure 1.10. For an input signal s(t), the channel output signal is

r(t) = ∫ h(τ; t) s(t − τ) dτ + n(t)     (1)

Figure 1.10. Linear time-variant filter channel with additive noise.




    A good model for multipath signal propagation through physical channels, such as

    the ionosphere (at frequencies below 30 MHz) and mobile cellular radio channels, is a

special case of Eq. (1) in which the time-variant impulse response has the form

h(τ; t) = Σ_{k=1}^{L} a_k(t) δ(τ − τ_k)     (2)

where the {a_k(t)} represent the possibly time-variant attenuation factors for the L multipath propagation paths. If Eq. (2) is substituted into Eq. (1), the received signal has the form

r(t) = Σ_{k=1}^{L} a_k(t) s(t − τ_k) + n(t)     (3)

Hence, the received signal consists of L multipath components, where each component is attenuated by a_k(t) and delayed by τ_k.
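Eq. (3) can be sketched in a few lines of code. The two-path channel below is a time-invariant special case; the attenuations a_k, delays τ_k, the tone frequency, and the sample rate are all arbitrary illustrative choices, and the noise term n(t) is omitted:

```python
import math

fs = 10_000                            # sample rate in Hz (illustrative)
N = 100
s = [math.cos(2 * math.pi * 440 * n / fs) for n in range(N)]   # transmitted s(t)

# Two propagation paths: (attenuation a_k, delay tau_k in seconds)
paths = [(1.0, 0.0), (0.4, 0.001)]

r = [0.0] * N                          # received signal, Eq. (3) without n(t)
for a_k, tau_k in paths:
    d = round(tau_k * fs)              # delay expressed in samples
    for n in range(d, N):
        r[n] += a_k * s[n - d]         # a_k * s(t - tau_k)
```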

    The three mathematical models described above adequately characterize a large

    majority of physical channels encountered in practice. These three channel models are

    used in this text for the analysis and design of communication systems.

    Modelling of Information Sources

    The information source produces outputs that are of interest to the receiver of

    information, who does not know these outputs in advance. The role of a communication

    system designer is to make sure that this information is transmitted correctly.

Since the output of the information source is a time-varying, unpredictable function, it can be modelled as a random process.

    In communication channels the existence of noise causes stochastic dependence

    between the input and output of the channel. Therefore, a communication system

    designer designs a system that transmits the output of a random process (information

    source), to a destination via a random medium (channel) and ensures low distortion.

    The properties of the random process depend on the nature of the source. For example,

    when modelling speech signals, the resulting random process has all its power in a

    frequency band of approximately 300-4000 Hz.

    Therefore, the power-spectral density of the

    speech signal also occupies this band of frequencies.

Video signals are obtained from a still or a moving image and, therefore, the bandwidth depends on the required resolution. For TV transmission, depending on the system


[Figures: a mathematical model for a discrete information source emitting the sequence …, x−2, x−1, x0, x1, x2, …; the PMF of the Bernoulli random variable (values 0 and 1 with probabilities 1 − p and p); the PMF of the binomial random variable.]

    employed (NTSC, PAL or SECAM), this band is typically between 0-4.5 MHz and

    0-6.5 MHz. For telemetry data the bandwidth depends on the rate of change of data.

These processes are band-limited processes and, therefore, can be sampled at the Nyquist rate or faster and reconstructed from the sampled values. The information source can then be modelled by a discrete-time random process {X_i}.

    The alphabet over which the random variables Xi are defined can be either discrete (in

    transmission of binary data) or continuous (e.g. sampled speech).

    The simplest model for information source is the discrete memoryless source (DMS). A

    DMS is a discrete-time, discrete-amplitude random process in which all Xis are

    generated independently and with the same distribution.

Let the set A = {a_1, a_2, …, a_N} denote the set in which the random variable X takes its values, and let the probability mass function for the discrete random variable X be denoted by P_i = p(X = a_i) for all i = 1, 2, …, N. A full description of the DMS is given by the set A, called the alphabet, and the probabilities {P_i}_{i=1}^{N}.

    Important Random Variables

    The most commonly used random variables in communications are:

- Bernoulli Random Variable
This is a discrete random variable taking the two values one and zero with probabilities p and 1 − p, respectively. A Bernoulli random variable is a good model for a binary data generator. We can also model a channel error by modulo-2 addition of a 1 to the input bit, thus changing a 0 into a 1 and a 1 into a 0. Therefore, a Bernoulli random variable can be employed to model channel errors.
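A minimal sketch of this error model (the error probability p, the block length, and the seed are arbitrary illustrative choices):

```python
import random

random.seed(1)                # fixed seed so the run is repeatable
p = 0.1                       # channel bit-error probability (illustrative)
n = 1000

bits = [random.randint(0, 1) for _ in range(n)]               # Bernoulli(1/2) source
errors = [1 if random.random() < p else 0 for _ in range(n)]  # Bernoulli(p) errors
received = [b ^ e for b, e in zip(bits, errors)]              # modulo-2 addition

# A received bit differs from the transmitted bit exactly where the
# error variable is 1; the empirical error rate approaches p for large n.
error_rate = sum(errors) / n
```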

- Binomial Random Variable
This is a discrete random variable giving the number of 1's in a sequence of n independent Bernoulli trials. The PMF is given by

p(X = k) = C(n, k) p^k (1 − p)^(n−k)   for 0 ≤ k ≤ n,   and 0 otherwise

where C(n, k) = n!/(k!(n − k)!) is the binomial coefficient.


    change the information delivered by that output by a large amount. In other words, the

    information measure should be a decreasing and continuous function of the probability of that

    source.

Now let us assume that the information about the output a_j can be broken into two independent parts, say a_j1 and a_j2; i.e., X = (X^(1), X^(2)), a_j = {a_j1, a_j2}, and p(X = a_j) = p(X^(1) = a_j1) p(X^(2) = a_j2).

    If the components are independent, revealing information about one component does not

provide any information about the other component; therefore, the amount of information provided by revealing a_j is the sum of the information obtained by revealing a_j1 and a_j2.

Consequently, the amount of information revealed about an output a_j with probability p_j must satisfy the following conditions:

1. The information content of output a_j depends only on the probability of a_j and not on the value of a_j. We denote this function by I(p_j) and call it self-information.
2. Self-information is a continuous function of p_j; i.e., I(·) is a continuous function.
3. Self-information is a decreasing function of its argument; i.e., the least probable outputs convey the most information.
4. If p_j = p_(j1) p_(j2), then I(p_j) = I(p_(j1)) + I(p_(j2)).

The only function that satisfies all the above properties is the logarithmic function; i.e., I(x) = −log(x). The base of the logarithm determines the units used for the information measure. For units of bits, base 2 is used. If the natural logarithm is used, the units are nats, and for base-10 logarithms, the unit is the Hartley. (From now on we will assume all logarithms are in base 2 and all information is given in bits.)

The information content of each source output a_i is its self-information, given by −log(p_i). We can define the information content of the source as the weighted average of the self-information of all source outputs; this is justified by the fact that the various source outputs appear with their corresponding probabilities. Therefore, the information revealed by an unidentified source output is the weighted average of the self-information of the various source outputs; i.e.,

  • 8/2/2019 EE 346-LN-CH1-II

    24/61

    EE 346-CH I-II 07/08 Fall 24

Σ_{i=1}^{N} p_i I(p_i) = −Σ_{i=1}^{N} p_i log p_i

The information content of the information source is known as the entropy of the source and is denoted by H(X).

    Definition 1

The entropy of a discrete random variable X is a function of its PMF and is defined by

H(X) = −Σ_{i=1}^{N} p_i log p_i = Σ_{i=1}^{N} p_i log(1/p_i)
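As a sanity check, the definition is easy to evaluate numerically (the helper name is ours; terms with p_i = 0 are conventionally taken as 0):

```python
import math

def entropy(pmf):
    """H(X) = -sum_i p_i * log2(p_i), in bits; zero-probability terms contribute 0."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair binary source
print(entropy([0.9, 0.1]))   # < 1 bit: a biased source conveys less information
print(entropy([1.0]))        # 0 bits: a deterministic source conveys none
```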

    Sampling Theorem

Nyquist Rate: Nyquist set out to determine the optimum pulse shape that was bandlimited to W Hz and maximized the bit rate 1/T under the constraint that the pulse caused no intersymbol interference at the sampling times k/T, k = 0, ±1, ±2, ….

The maximum pulse rate 1/T is 2W pulses/sec. This rate is called the Nyquist rate:

Nyquist rate (sampling rate) = 2W

This pulse rate can be achieved by using the pulses

g(t) = sin(2πWt)/(2πWt)

where g(t) represents a basic pulse shape.

    where g(t) represents a basic pulse shape.

The sampling theorem states that a signal of bandwidth W can be reconstructed from samples taken at the Nyquist rate of 2W samples per second using the interpolation formula (any physical waveform may be represented over the interval −∞ < t < ∞).


    Quantisation

Quantizing is the process of rounding off the flat-top samples to certain predetermined levels. The PCM signal is generated by carrying out three basic operations: sampling, quantizing and encoding. The sampling operation generates a flat-top PAM signal.

The quantizing operation is illustrated in the following figure for the case of M = 8 levels. This quantiser is said to be uniform because all of the steps are of equal size. Error is introduced into the recovered output analog signal because of the quantizing effect. Even if we sample at the Nyquist rate (2W) or faster and there is negligible channel noise, there will still be noise, called quantizing noise, on the recovered analog waveform due to this error. The quantiser output is a quantised PAM signal.
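A uniform M = 8 level quantiser can be sketched as follows. The input range [−1, 1] and the mid-step reconstruction levels are our illustrative choices; note that the quantizing error is bounded by half a step:

```python
def uniform_quantize(x, m_levels=8, vmin=-1.0, vmax=1.0):
    """Round a flat-top sample to the nearest of m_levels equally spaced levels."""
    step = (vmax - vmin) / m_levels
    k = int((x - vmin) / step)            # index of the step containing x
    k = min(m_levels - 1, max(0, k))      # clamp samples at the range edges
    return vmin + (k + 0.5) * step        # reconstruct at the step centre

samples = [0.0, 0.3, -0.82, 0.99]
quantized = [uniform_quantize(x) for x in samples]
noise = [q - x for q, x in zip(quantized, samples)]   # quantizing noise
```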


    CHAPTER II

    Modulation

    (Analogue Signal Transmission)

    A large number of information sources are analogue sources. An analogue source can be

    modulated and transmitted directly or can be converted to digital data and transmitted using

    digital modulation techniques.

    Speech, image, and video are examples of analogue sources of information. Each of these

    sources is characterised by its bandwidth, dynamic range, and the nature of the signal. (For

    instance, in case of audio and black & white video, the signal has just one component

    measuring the intensity, but in case of colour video, the signal has four components

    measuring red, blue, green colour components, and the intensity).

The analogue signal to be transmitted is denoted by m(t), which is assumed to be a lowpass signal of bandwidth W; i.e., M(f) = 0 for |f| > W. We also assume that the signal is a power-type signal whose power is denoted by Pm, where

Pm = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} m²(t) dt

    Modulation is a process by which some parameter of a carrier signal is varied in accordance

    with a message signal. The analogue signal is transmitted by impressing it on the amplitude,

    the phase or the frequency of a sinusoidal carrier. The message signal is called a modulating

signal. Modulation converts the message signal m(t) from lowpass to bandpass, in the neighborhood of the carrier (centre) frequency fc.

    Modulation of the carrier c(t) by the message signal m(t) is performed in order to achieve one

    or more of the following objectives:

1- The lowpass signal is translated in frequency to the passband of the channel so that the spectrum of the transmitted bandpass signal will match the passband characteristics of the channel;
2- To accommodate the simultaneous transmission of signals from several message sources, by means of FDM; and


3- To expand the bandwidth of the transmitted signal in order to increase its noise immunity in transmission over noisy channels.

    Definitions

A bandpass signal is represented by

c(t) = Ac cos θc(t)     (2.1)

where Ac is the carrier amplitude (envelope) and θc(t) = 2πfc t + φc; φc is called the instantaneous phase deviation of c(t) and fc is the carrier frequency. For amplitude modulation, we can write

c(t) = Ac cos(2πfc t + φc)     (2.2)

Ac is linearly related to the modulating signal m(t) and is called the instantaneous amplitude of c(t). Amplitude modulation is also referred to as linear modulation. Depending on the relationship between m(t) and Ac, we have the following types of amplitude modulation schemes: normal (conventional) amplitude modulation (AM), double-sideband suppressed carrier amplitude modulation (DSB-SC), single-sideband amplitude modulation (SSB), and vestigial-sideband amplitude modulation (VSB).

    Normal (Conventional) Amplitude Modulation (AM)

A normal amplitude-modulated signal is given by

s(t) = [Ac + m(t)] cos 2πfc t     (2.3)

s(t) = Ac cos 2πfc t + m(t) cos 2πfc t     (2.4)
          (carrier)       (sidebands)

where m(t) is the modulating (message) signal. It is also very common to define a normal amplitude-modulated signal as

s(t) = Ac[1 + m(t)] cos 2πfc t     (2.5)

The modulation index m is defined as

m = |min m(t)| / Ac     (2.6)


Figure 2.1 shows a normal AM signal. Clearly, the envelope of the modulated signal has the same shape as m(t).

Figure 2.1. Amplitude modulation: the modulating signal (audio), the carrier, and the AM signal with envelope Ac[1 + m(t)].

As long as |m(t)| ≤ 1, the amplitude Ac[1 + m(t)] is always positive. This is the desired condition for normal AM, as it makes the signal easy to demodulate. On the other hand, when m > 1, the carrier is said to be overmodulated; the envelope is distorted and demodulation becomes more complex.


Figure 2.2. Normal AM signal for various values of the modulation index (including an overmodulated signal).

    Definition

The percentage of positive modulation on a normal AM signal is

% positive modulation = (Amax − Ac)/Ac × 100 = max m(t) × 100     (2.7)


and the percentage of negative modulation is

% negative modulation = (Ac − Amin)/Ac × 100 = −min m(t) × 100     (2.8)

The overall modulation percentage is

% modulation = (Amax − Amin)/(2Ac) × 100 = [max m(t) − min m(t)]/2 × 100     (2.9)
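Equations (2.7)-(2.9) translate directly into code. The helper below (its name is ours) assumes the s(t) = Ac[1 + m(t)] cos 2πfc t form, in which max m(t) and min m(t) equal (Amax − Ac)/Ac and (Amin − Ac)/Ac respectively:

```python
def modulation_percentages(m_max, m_min):
    """Return (% positive, % negative, % overall) from the extremes of m(t)."""
    positive = m_max * 100                    # Eq. (2.7)
    negative = -m_min * 100                   # Eq. (2.8)
    overall = (m_max - m_min) / 2 * 100       # Eq. (2.9)
    return positive, negative, overall

# A symmetric tone at 50% modulation: envelope swings from 0.5*Ac to 1.5*Ac
print(modulation_percentages(0.5, -0.5))      # (50.0, 50.0, 50.0)
```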

In practice, m(t) is scaled so that its magnitude is always less than unity. Sometimes it is convenient to express m(t) as

m(t) = a mn(t)

where mn(t) is normalised such that its minimum value is −1. This can be done, for example, by defining

mn(t) = m(t) / max|m(t)|

The scale factor a is called the modulation index. Then the modulated signal can be expressed as

s(t) = u(t) = Ac[1 + a mn(t)] cos 2πfc t     (2.10)

In normal AM signaling, only the sideband components convey information, so the modulation efficiency is

E = ⟨m²(t)⟩ / [1 + ⟨m²(t)⟩] × 100 %     (2.11)

where ⟨·⟩ denotes the time average. The highest efficiency that can be attained for a 100% AM signal is 50%, for the case where square-wave modulation is used.

Also, the efficiency, denoted η, of a normal AM signal is defined as

η = Ps/Pt × 100 %

where Ps is the power carried by the sidebands and Pt is the total power of the normal AM signal,

Pt = Ac²/2 + (Ac²/2) Pm

and Pm is the power of the message signal.

The normalised peak envelope power (PEP) of the AM signal is

PEP = (Ac²/2)[1 + max m(t)]²     (2.12)
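For a message normalised so that |m(t)| ≤ 1 with power Pm, equations (2.11)-(2.12) give the familiar numbers below (the function names are ours):

```python
def am_efficiency(Pm):
    """Sideband power / total power for normal AM: Pm / (1 + Pm), in percent."""
    return Pm / (1 + Pm) * 100

def am_pep(Ac, m_max):
    """Normalised peak envelope power: (Ac**2 / 2) * (1 + max m(t))**2."""
    return (Ac ** 2 / 2) * (1 + m_max) ** 2

# 100% sinusoidal modulation: Pm = 1/2, so the efficiency is 33.3%
print(am_efficiency(0.5))
# 100% square-wave modulation: Pm = 1, so the efficiency is 50%, the AM maximum
print(am_efficiency(1.0))
```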



    Spectrum of Normal AM Signal

For normal amplitude modulation,

s(t) = [Ac + m(t)] cos 2πfc t = Ac cos 2πfc t + m(t) cos 2πfc t

The Fourier transform of s(t) is

S(f) = (Ac/2)[δ(f − fc) + δ(f + fc)] + (1/2)[M(f − fc) + M(f + fc)]     (2.13)

Figure 2.3 shows the spectrum of a normal AM signal. Normal amplitude modulation simply shifts the spectrum of m(t) to the carrier frequency fc. Obviously, the spectrum of a normal AM signal occupies twice the bandwidth of the message signal; the bandwidth of the modulated signal is 2fm Hz, where fm is the bandwidth of the modulating signal m(t).

    Figure 2.3. Spectrum of normal AM signal

    Generation of Normal AM Signals

    Since the process of modulation involves the generation of new frequency components,

    modulators are generally characterised as nonlinear or time-variant systems.

    Power-Law Modulation

Consider the use of a nonlinear device such as a P-N diode, which has a voltage-current characteristic as shown in the following figure. Suppose that the voltage input to such a device is the sum of the message signal m(t) and the carrier c(t). The nonlinearity will generate a product of the message signal with the carrier,


plus additional terms. The desired modulated signal can be filtered out by passing the output of the nonlinear device through a bandpass filter.

The process of generating a normal AM signal in this way is shown in Figure 2.4. This type of modulation can be achieved by using a nonlinear device, such as the diode shown in Figure 2.5.

    Figure 2.4. Block diagram of power-law AM modulator

    Figure 2.5. Amplitude modulator using a diode

Suppose that the nonlinear device has an input-output (square-law) characteristic of the form

vo(t) = a vi(t) + b vi²(t)     (2.14)

where the parameters a and b are constants, and

vi(t) = Ac cos 2πfc t + m(t)     (2.15)

Substituting equation (2.15) into (2.14), we have

vo(t) = a[Ac cos 2πfc t + m(t)] + b[Ac cos 2πfc t + m(t)]²
      = a m(t) + b m²(t) + b Ac² cos² 2πfc t + a Ac cos 2πfc t + 2b Ac m(t) cos 2πfc t     (2.16)

The output signal vo′(t) of a bandpass filter with bandwidth 2W centred at fc is

vo′(t) = a Ac cos 2πfc t + 2b Ac m(t) cos 2πfc t

or

s(t) = vo′(t) = a Ac [1 + (2b/a) m(t)] cos 2πfc t     (2.17)

which is a normal AM signal with carrier amplitude Ac′ = a Ac, provided that 2b|m(t)|/a < 1.



    Switching Modulator

    A normal amplitude-modulated signal can also be obtained by multiplying m(t) by a periodic

    digital signal s(t). The modulator is called a switching modulator. Figure 2.6 shows the

    periodic rectangular waveform and its line spectrum.

Figure 2.6. (a) Switching modulator; (b) diode input-output characteristic; (c) the periodic switching signal s(t).

Such a modulator can be implemented by the system illustrated in Figure 2.6(a). The sum of the message signal and the carrier, i.e., vi(t) given in equation (2.15) above, is applied to a diode that has the input-output characteristic shown in Figure 2.6(b), where Ac >> |m(t)|. The output across the load resistor is simply

vo(t) = { vi(t),  c(t) > 0
        { 0,      c(t) ≤ 0     (2.18)

This switching operation may be viewed mathematically as a multiplication of the input vi(t) with the switching function s(t):

vo(t) = [Ac cos 2πfc t + m(t)] s(t)     (2.19)

where s(t) is shown in Figure 2.6(c).

Since s(t) is a periodic function, it can be represented by the Fourier series

s(t) = 1/2 + (2/π) Σ_{n=1}^{∞} [(−1)^(n−1)/(2n − 1)] cos[2πfc t (2n − 1)]     (2.20)

Hence,

vo(t) = [Ac cos 2πfc t + m(t)] s(t) = (Ac/2)[1 + (4/(πAc)) m(t)] cos 2πfc t + other terms

The desired AM modulated signal is obtained by passing vo(t) through a BPF with centre frequency f = fc and bandwidth 2W. At its output, we have the desired normal (conventional) DSB AM signal:


s(t) = (Ac/2)[1 + (4/(πAc)) m(t)] cos 2πfc t     (2.21)
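The 1/2 and 2/π coefficients in Eq. (2.20), and hence the 4/(πAc) factor in Eq. (2.21), can be checked by numerically projecting the unipolar square wave onto a DC term and the fundamental (fc and the number of samples are arbitrary choices):

```python
import math

fc = 1.0          # one period of the switching waveform is [0, 1)
N = 100_000       # number of samples in the Riemann sums

def s(t):
    """Unipolar switching function: 1 when the carrier is positive, else 0."""
    return 1.0 if math.cos(2 * math.pi * fc * t) > 0 else 0.0

# Fourier coefficients over one period: mean a0 and fundamental a1
a0 = sum(s(n / N) for n in range(N)) / N
a1 = 2 * sum(s(n / N) * math.cos(2 * math.pi * fc * n / N) for n in range(N)) / N

print(a0)   # approaches 1/2
print(a1)   # approaches 2/pi = 0.6366...
```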

Demodulation of Normal AM Signals

The process of recovering the message signal from the modulated signal is called demodulation or detection.

    Envelope Detection

Normal (conventional) DSB AM signals are easily demodulated by means of an envelope detector. An envelope detector consists of a diode and an RC circuit, which is basically a lowpass filter. This is shown in Figure 2.7.

Figure 2.7. Envelope detector

    During the positive half-cycle of the input (modulated) signal, the diode is forward biased and

    the capacitor charges up to the peak value of the modulated signal. When the modulated

signal falls below the maximum voltage held on the capacitor, the diode becomes reverse-biased and the input is disconnected from the output. During this period, the capacitor discharges slowly through the load resistor R. On the next cycle of the carrier, the diode conducts again when the modulated signal exceeds the voltage across the capacitor. The capacitor charges up again to the peak value of the modulated signal and the process is repeated.

To follow the variations in the envelope of the carrier signal, the discharge time constant RC must be selected properly:

1/fc << RC << 1/W

In such a case, the capacitor discharges slowly through the resistor, and thus the output of the envelope detector closely follows the message signal.
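The double inequality leaves a wide design range. A common rule of thumb, used here as our own illustrative choice rather than a prescription from the notes, is to place RC at the geometric mean of the two bounds:

```python
fc = 1.0e6         # carrier frequency in Hz (illustrative)
W = 4.0e3          # message bandwidth in Hz (illustrative)

# The geometric mean of the bounds 1/fc and 1/W satisfies 1/fc << RC << 1/W
RC = (1 / (fc * W)) ** 0.5

print(1 / fc, RC, 1 / W)   # 1e-06 < ~1.58e-05 < 0.00025
```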



    Double-Sideband Suppressed Carrier

A Double-Sideband Suppressed Carrier (DSB-SC) AM signal is obtained by multiplying the message signal m(t) with the carrier signal c(t). Thus, we have the amplitude-modulated signal

s(t) = u(t) = m(t) c(t) = Ac m(t) cos 2πfc t     (2.22)

The Fourier transform of s(t) is

S(f) = U(f) = (Ac/2)[M(f − fc) + M(f + fc)]     (2.23)

Figure 2.7 shows the waveforms and spectra associated with a DSB signal. Clearly, the envelope of the modulated signal does not have the same shape as m(t). As with AM, DSB-SC modulation shifts the spectrum of m(t) to the carrier frequency fc. The bandwidth of the modulated signal is 2W (= 2fm) Hz, where W (fm) is the bandwidth of the modulating signal m(t). Therefore, the channel bandwidth required to transmit the modulated signal s(t) is Bc = 2W.

The frequency content of the modulated signal s(t) in the frequency band |f| > fc is called the upper sideband of S(f), and the frequency content in the band |f| < fc is called the lower sideband of S(f).


Figure 2.8. Generation of a DSB-SC signal: the outputs Ac[1 + m(t)] cos 2πfc t and Ac[1 − m(t)] cos 2πfc t of two AM modulators are subtracted to give s(t) = 2Ac m(t) cos 2πfc t.

    Balanced Modulator

A relatively simple method to generate DSB-SC AM signals is to use two normal AM modulators arranged in the configuration shown in Figure 2.9, called a single balanced modulator (or simply a balanced modulator). (For example, we may use two square-law AM modulators as described before.)

Figure 2.9. A balanced modulator

Let the input-output characteristic of a diode be approximated by the power series

vo(t) = a vi(t) + b vi²(t)     (2.24)

where the parameters a and b are constants. The input voltage to diode D1 is

vi,1(t) = Ac cos 2πfc t + m(t)     (2.25)

and let the output voltage of this diode be vo,1(t). Substituting equation (2.25) into (2.24), we have

vo,1(t) = a m(t) + b m²(t) + b Ac² cos² 2πfc t + a Ac cos 2πfc t + 2b Ac m(t) cos 2πfc t     (2.26)




    Now, consider diode D2, the input voltage to the diode D2 is

vi,2(t) = Ac cos 2πfc t − m(t) ..........(2.27)

    and let the output voltage of the diode be vo,2(t) = vo(t). Substituting equation (2.27) into

    (2.24), we have

vo,2(t) = a[Ac cos 2πfc t − m(t)] + b[Ac cos 2πfc t − m(t)]²

        = a Ac cos 2πfc t − a m(t) + b m²(t) + b Ac² cos² 2πfc t − 2b Ac m(t) cos 2πfc t ..........(2.28)

Subtracting equation (2.28) from (2.26), we get

vo(t) = vo,1(t) − vo,2(t) = 2a m(t) + 4b Ac m(t) cos 2πfc t ..........(2.29)

The output of a bandpass filter with bandwidth 2W and centred at fc is then

v'o(t) = s(t) = 4b Ac m(t) cos 2πfc t ..........(2.30)

    That is, we generate a DSB-SC AM signal.
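The square-law derivation above can be checked numerically. The sketch below uses illustrative values (a = 1, b = 0.5, Ac = 1, a single-tone message) and an ideal brick-wall BPF implemented as an FFT mask — an idealization of the filter, not a practical design:

```python
import numpy as np

fs = 100_000                      # sample rate (Hz), illustrative
fc, W = 10_000, 1_000             # carrier frequency and message bandwidth (Hz)
a, b, Ac = 1.0, 0.5, 1.0          # assumed square-law and carrier parameters
t = np.arange(0, 0.05, 1 / fs)

m = np.cos(2 * np.pi * 500 * t)               # single-tone message within W
carrier = Ac * np.cos(2 * np.pi * fc * t)

def diode(v):
    """Square-law device of equation (2.24): vo = a*vi + b*vi^2."""
    return a * v + b * v ** 2

# balanced modulator: difference of the two diode branch outputs, eq. (2.29)
vo = diode(carrier + m) - diode(carrier - m)  # = 2a m + 4b Ac m cos(2*pi*fc*t)

# ideal (brick-wall) BPF of bandwidth 2W centred at fc, via an FFT mask
F = np.fft.rfftfreq(len(t), 1 / fs)
mask = np.abs(F - fc) <= W
s = np.fft.irfft(np.where(mask, np.fft.rfft(vo), 0), n=len(t))

expected = 4 * b * Ac * m * np.cos(2 * np.pi * fc * t)   # equation (2.30)
```

The baseband term 2a m(t) and the squared terms at dc and 2fc fall outside the passband and are removed, leaving only the DSB-SC component of equation (2.30).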

    Ring Modulator

    It is possible to design a balanced modulator such that the input to the bandpass filter does not

    contain the message signal m(t) or the carrier signal c(t). A circuit balanced with respect to

    both input signals is called a double-balanced modulator, known as a ring modulator. Figure

    2.10 shows a ring modulator. The switching of the diodes is controlled by a square wave of

frequency fc, denoted as c(t), which is applied to the centre taps of the two transformers.

    Figure 2.10. Ring modulator generating DSB-SC AM signal

When c(t) > 0, the top and bottom diodes conduct, while the two diodes in the cross arms are

off; in this case, the message signal m(t) is multiplied by +1. When c(t) < 0, the diodes in the

cross arms conduct while the top and bottom diodes are off, so that m(t) is multiplied by −1.

Hence, the multiplier output is vo(t) = m(t)c(t).


The square-wave carrier c(t), which takes the values ±1, has the Fourier series expansion

c(t) = (4/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n−1)] cos[2πfc(2n−1)t] ..........(2.31)

    Hence, the desired DSB-SC AM signal s(t) is obtained by passing vo(t) through a BPF with

centre frequency fc and bandwidth 2W.
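The switching behaviour can be verified numerically. In the sketch below (illustrative frequencies; sampling instants are offset by half a sample so that no sample falls exactly on a switching instant), the multiplier output m(t)c(t) is bandpass filtered and compared with the n = 1 term of equation (2.31), (4/π)m(t) cos 2πfc t:

```python
import numpy as np

fs, fc, fm = 200_000, 10_000, 500     # sample rate, carrier, message tone (Hz)
N = 8_000                             # 0.04 s of signal
t = (np.arange(N) + 0.5) / fs         # half-sample offset avoids switching ties
m = np.cos(2 * np.pi * fm * t)

c = np.where(np.cos(2 * np.pi * fc * t) >= 0, 1.0, -1.0)  # ±1 square wave
vo = m * c                            # ideal ring-modulator output m(t)c(t)

# brick-wall BPF around fc keeps only the n = 1 term of eq. (2.31)
F = np.fft.rfftfreq(N, 1 / fs)
mask = np.abs(F - fc) <= 2 * fm
s = np.fft.irfft(np.where(mask, np.fft.rfft(vo), 0), n=N)

expected = (4 / np.pi) * m * np.cos(2 * np.pi * fc * t)
```

Because the square wave is sampled, its discrete fundamental differs very slightly from the continuous-time value 4/π, so the comparison is made with a small tolerance rather than exactly.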

The balanced modulator and the ring modulator systems multiply the message signal m(t)

with the carrier to produce a DSB-SC AM signal. The multiplication of m(t) with Ac cos 2πfc t

is called a mixing operation. Hence, a mixer is basically a balanced modulator.

    Demodulation of DSB-SC Signals

    Since the envelope of the modulated signal does not have the same shape as m(t), an envelope

    detector cannot be used to recover the message signal. Therefore, demodulation of a DSB-SC

    AM signal requires a synchronous detector (demodulator). That is, the demodulator must use

    a coherent phase reference, see Figure 2.11, which is usually generated by means of a phase-

    locked loop (PLL) to demodulate the received signal.

    Figure 2.11. Demodulation of DSB-SC AM signal

    A PLL is used to generate a phase-coherent carrier signal that is mixed with the received

    signal in a balanced modulator. The output of the balanced modulator is passed through a LPF

    of bandwidth W that passes the desired signal and rejects all signal and noise components

above W Hz.
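A sketch of this synchronous detector, with the PLL idealized as a perfectly phase-coherent local carrier and the LPF implemented as an ideal frequency-domain mask (both assumptions for illustration only):

```python
import numpy as np

fs, fc, W = 200_000, 10_000, 1_000    # sample rate, carrier, bandwidth (Hz)
t = np.arange(0, 0.04, 1 / fs)
m = np.cos(2 * np.pi * 400 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)

s = m * np.cos(2 * np.pi * fc * t)    # received DSB-SC signal (Ac = 1)

# mix with the phase-coherent carrier: 2 s cos = m + m cos(4*pi*fc*t)
v = 2 * s * np.cos(2 * np.pi * fc * t)

# ideal LPF of bandwidth W rejects the double-frequency component at 2fc
F = np.fft.rfftfreq(len(t), 1 / fs)
m_hat = np.fft.irfft(np.where(F <= W, np.fft.rfft(v), 0), n=len(t))
```

With a perfect phase reference the message is recovered exactly; a phase error in the reference would scale the output by the cosine of that error.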

    Single-Sideband and Vestigial-Sideband Modulations

    We have seen that both normal AM and DSB signals require a transmission bandwidth equal

    to twice the bandwidth of the message signal m(t). Since either the upper sideband (USB) or

    the lower sideband (LSB) contains the complete information of the message signal, we can

    conserve bandwidth by transmitting only one sideband. The modulation is called single-

    sideband (SSB) modulation. It is widely used by the military and by radio amateurs in high-

frequency (HF) communication systems. It is popular because the bandwidth is the same as that of the modulating signal m(t) (half the bandwidth of an AM or DSB-SC signal).

    Balanced

    modulator LPF

    PLL


    Single-Sideband Signals

An upper single sideband (USSB) signal has a zero-valued spectrum for |f| ≤ fc, and a lower

single sideband (LSSB) signal has a zero-valued spectrum for |f| ≥ fc.



    Figures 2.13 and 2.14 show the generation of a SSB and the spectra associated with it.

    Figure 2.13. Generation of a SSB-AM signal using phasing method

- The USSB AM signal is: su(t) = Ac[m(t) cos 2πfc t − m̂(t) sin 2πfc t] ..........(2.33)

- The LSSB AM signal is: sl(t) = Ac[m(t) cos 2πfc t + m̂(t) sin 2πfc t] ..........(2.34)

where m̂(t) denotes the Hilbert transform of m(t).

The spectra of the USSB and LSSB signals are given as follows:

Su(f) = Ac M(f − fc) for f ≥ fc,  Ac M(f + fc) for f ≤ −fc,  and 0 for |f| < fc ..........(2.35)

Sl(f) = Ac M(f − fc) for 0 ≤ f ≤ fc,  Ac M(f + fc) for −fc ≤ f ≤ 0,  and 0 for |f| > fc ..........(2.35a)

Figure 2.14. Spectra for (a) USSB and (b) LSSB




Advantages of the phasing method:

- It does not employ a bandpass filter.

- It is suitable for message signals with frequency content down to dc.

Disadvantage:

- Practical realization of a wideband 90° phase-shift circuit is difficult.
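For a single-tone message the phasing method can be verified directly, since the Hilbert transform of cos 2πfm t is sin 2πfm t. The sketch below (Ac = 1 and illustrative frequencies) confirms that equations (2.33) and (2.34) produce pure tones at fc + fm and fc − fm, respectively:

```python
import numpy as np

fs, fc, fm = 100_000, 10_000, 1_000
t = np.arange(0, 0.01, 1 / fs)

m = np.cos(2 * np.pi * fm * t)        # message tone
mh = np.sin(2 * np.pi * fm * t)       # its Hilbert transform (90 deg shift)

su = m * np.cos(2 * np.pi * fc * t) - mh * np.sin(2 * np.pi * fc * t)  # (2.33)
sl = m * np.cos(2 * np.pi * fc * t) + mh * np.sin(2 * np.pi * fc * t)  # (2.34)

upper = np.cos(2 * np.pi * (fc + fm) * t)   # single tone above the carrier
lower = np.cos(2 * np.pi * (fc - fm) * t)   # single tone below the carrier
```

The equalities follow from the sum and difference identities for cosine, which is exactly how the phasing method cancels one sideband.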

    Filter Method

    In this method, a balanced modulator is used to generate a DSB signal, and the desired

    sideband signal is then selected by a bandpass filter, which selects either upper sideband or

    lower sideband of the SSB AM signal. Figures 2.15 and 2.16 show the generation of a SSB

    signal using the filter method, and the spectra associated with it.

    Figure 2.15. Generation of SSB signal using filter method

    Figure 2.16. Spectra associated with SSB AM signal using filter method

The technique is suitable for message signals that have very little frequency content near dc,

since the resulting gap between the sidebands means that sharp filter cut-off characteristics are not required.




    Demodulation of SSB Signals

    Demodulation of SSB signals also requires the use of a phase coherent reference. In the case

of signals such as speech, which have relatively little or no power content at dc, it is straightforward to generate the SSB signal, as in the case of the filtering method, and then to insert a

    small carrier component that is transmitted along with the message signal. In such a case a

    balanced modulator is used for the purpose of frequency conversion of the bandpass signal to

    lowpass or baseband. This is shown in Figure 2.17.

    Figure 2.17. Demodulation of SSB AM signal with a carrier component

    Vestigial Sideband (VSB) Modulation

    Vestigial sideband modulation is a compromise between DSB and SSB modulations. It is

often chosen when DSB takes too much bandwidth and SSB is too complicated for a particular

application. Typically, the bandwidth of a VSB modulated signal is about 1.25 times that of

    the corresponding SSB modulated signal. It is commonly used for transmission of video

    signals in commercial television broadcasting. To generate a VSB one of the sidebands of the

DSB is partially suppressed. In other words, the DSB-SC AM signal is passed through a

sideband filter with frequency response H(f) and impulse response h(t).

    Figures 2.18 and 2.19 show the generation of a VSB signal, and the spectra associated with it.


    Figure 2.18. Generation of VSB signal




    Figure 2.19. Spectra associated with VSB signal.

In the time domain, the VSB signal may be expressed as

sVSB(t) = s(t) ∗ h(t) = [Ac m(t) cos 2πfc t] ∗ h(t) ..........(2.36)

where s(t) is a DSB signal, h(t) is the impulse response of the VSB filter, and ∗ denotes convolution.

To recover an undistorted message signal m(t), the VSB filter characteristic must satisfy the

condition

H(f − fc) + H(f + fc) = constant,   |f| ≤ W

    Figure 2.20. Demodulation of VSB signal

In the frequency domain, the corresponding expression is

S(f) = (Ac/2)[M(f − fc) + M(f + fc)]H(f) ..........(2.36)

    To determine the frequency-response characteristics of the filter, consider the demodulation

of the VSB signal s(t). We multiply s(t) by the carrier component cos 2πfc t and pass the result

through an ideal LPF.




Thus, the product signal is

v(t) = s(t) cos 2πfc t

or, equivalently, in the frequency domain,

V(f) = (1/2)[S(f − fc) + S(f + fc)]

Substituting for S(f), we obtain

V(f) = (Ac/4)[M(f − 2fc) + M(f)]H(f − fc) + (Ac/4)[M(f) + M(f + 2fc)]H(f + fc)

The LPF rejects the double-frequency terms and passes only the components in the frequency

range |f| ≤ W. Hence, the signal spectrum at the output of the ideal LPF is

Vl(f) = (Ac/4) M(f)[H(f − fc) + H(f + fc)] ..........(2.37)


H(f) selects the upper sideband and a vestige of the lower sideband. It has odd symmetry about

the carrier frequency fc in the frequency range fc − fa ≤ |f| ≤ fc + fa, where fa is a conveniently

selected frequency that is a small fraction of W; i.e., fa << W.
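The complementary-symmetry condition H(f − fc) + H(f + fc) = constant can be checked for a concrete filter. The sketch below uses a hypothetical piecewise-linear vestigial filter with fc = 100 Hz, W = 20 Hz, and fa = 5 Hz (all values purely illustrative):

```python
import numpy as np

fc, W, fa = 100.0, 20.0, 5.0      # illustrative frequencies, fa << W

def H(f):
    """Hypothetical VSB filter: linear 0-to-1 roll-off over [fc-fa, fc+fa]
    (odd symmetry about fc), flat out to fc+W, symmetric in |f|."""
    f = np.abs(f)
    return np.clip((f - (fc - fa)) / (2 * fa), 0.0, 1.0) * (f <= fc + W)

f = np.linspace(-W, W, 2001)
comb = H(f - fc) + H(f + fc)      # should equal 1 for every |f| <= W
```

Any odd-symmetric roll-off across fc (linear, raised-cosine, etc.) satisfies the condition; the linear ramp is just the simplest to write down.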


    Demodulation of VSB Signals

    In VSB a carrier component is generally transmitted along with the message sidebands. The

    existence of the carrier component makes it possible to extract a phase-coherent reference for

    demodulation in a balanced modulator, as shown in Figure 2.20.

    In some applications such as TV broadcast, a large carrier component is transmitted along

    with the message in the VSB signal. In such a case, it is possible to recover the message by

    passing the received VSB signal through an envelope detector.

    Signal Multiplexing

    Frequency Division Multiplexing (FDM)

    The process of combining a number of separate message signals into a composite signal for

    transmission over a common channel is called multiplexing. One of the basic problems in

    communication engineering is the design of a system which allows many individual signals

    from users to be transmitted simultaneously over a single communication channel. The most

    commonly used methods are: time-division multiplexing (TDM), and frequency-division

    multiplexing (FDM). TDM is usually used in the transmission of digital information. FDM

    however, may be used with either analogue or digital signal transmission.

    FDM, involves dividing the multiplexed channel into a number of unique frequencies, each

    one assigned to a pair of communicating entities. FDM can be achieved only if the available

    bandwidth on the multiplexed channel exceeds the bandwidth needs of all the communicating

    entities.

    Whenever a multiplexer receives data for transmission, the data is transmitted by it on the

    frequency allocated to the transmitting entity. The receiving multiplexer forwards the

    information received on a specific frequency to the destination associated with that frequency.

    The following example illustrates how a frequency division multiplexer connects DTEs 1, 2,

    and 3 with ports E, L, and S, respectively, on a central host. The frequency allocation is given in

    the following Table.

    The 1000-Hz separation between the channels is known as the guard band and is used to

ensure that one set of signals does not interfere with another. Diagrammatically, the connections and their frequencies are shown in Figure 2.22.

    DTE-Port Pair Frequency (Hz)

    1 and E 10,000-14,000

    2 and L 5,000-9,000

    3 and S 0-4,000
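The frequency plan in the table can be captured in a small sketch; the `route` helper below is hypothetical, purely to illustrate how a receiving multiplexer maps a carrier frequency back to a host port:

```python
# frequency plan from the table above (Hz); adjacent channels are
# separated by a 1,000-Hz guard band
plan = {
    ("DTE 1", "E"): (10_000, 14_000),
    ("DTE 2", "L"): (5_000, 9_000),
    ("DTE 3", "S"): (0, 4_000),
}

bands = sorted(plan.values())
widths = [hi - lo for lo, hi in bands]
guards = [nxt - hi for (_, hi), (nxt, _) in zip(bands, bands[1:])]

def route(freq_hz):
    """Hypothetical demux: return the host port whose band contains freq_hz."""
    for (_dte, port), (lo, hi) in plan.items():
        if lo <= freq_hz <= hi:
            return port
    return None   # frequency falls in a guard band
```

For example, a signal received at 7,000 Hz is forwarded to port L, while 4,500 Hz falls in the guard band and is routed nowhere.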



Figure 2.22. Frequency-division multiplexing of the three DTE-port connections

    The advantage of FDM is that each DTE is assigned a unique frequency that can be treated as

    an unshared channel. An everyday example of FDM is cable television, in which many signals

    are "stacked up" and transmitted simultaneously over the cable. The user selects a viewing

    channel by tuning to that channel's frequency.


Figure 2.23. Frequency-division multiplexing of k signals

    In FDM, the message signals are separated in frequency as described above. A typical

    configuration of an FDM system is shown in Figure 2.23. This figure illustrates the

    frequency-division multiplexing of k message signals at the transmitter and their

    demodulation at the receiver. The LPFs at the transmitter are used to ensure that the

bandwidth of the message signals is limited to W Hz. Each signal modulates a separate carrier; hence, k modulators are required. Then, the signals from the k modulators are summed



    and transmitted over the channel. For SSB and VSB modulation, the modulator outputs are

    filtered prior to summing the modulated signals.

At the receiver of an FDM system, the signals are usually separated by passing the received signal

through a parallel bank of BPFs, where each filter is tuned to one of the carrier frequencies and has a

    bandwidth that is sufficiently wide to pass the desired signal. The output of each BPF is

    demodulated and each demodulated signal is fed to a LPF that passes the baseband message

    signal and eliminates the double frequency components.

    FDM is widely used in radio and telephone communications. For example, in telephone

    communications, each voice-message signal occupies a nominal bandwidth of 3 kHz. The

    message signal is SSB modulated for bandwidth efficient transmission. In the first level of

    multiplexing, 12 signals are stacked in frequency, with a frequency separation of 4 kHz

    between adjacent carriers. Thus, a composite 48-kHz channel, called a group channel, is used

    to transmit 12 voice-band signals simultaneously. In the next level of FDM, a number of

    group channels (typically five or six) are stacked together in frequency to form a super-group

    channel, and the composite signal is transmitted over the channel. Higher-order multiplexing

    is obtained by combining several super-group channels. Thus, an FDM hierarchy is employed

    in telephone communication systems.

    Quadrature-Carrier Multiplexing

    A totally different type of multiplexing allows us to transmit two message signals (for

    example m1(t) and m2(t)) on the same carrier frequency, using two quadrature carriers

    tfA cc 2cos and tfA cc 2sin . The signal m1(t) amplitude modulates the carrier tfA cc 2cos

    and m2(t) amplitude modulates the carrier tfA cc 2sin . The two signals are added and

    transmitted over the channel. Hence, the transmitted signal is

    tftmAtftmAts cccc 2sin)(2cos)()( 21 ..(2.38)

    Therefore, each message signal is transmitted by DSB-SC AM. This type of signal

    multiplexing is called quadrature-carrier multiplexing. Figure 2.24 illustrates the modulation

    and demodulation of the quadrature-carrier multiplexed signals. As shown, a synchronous

    demodulator is required at the receiver to separate and recover the quadrature-carrier

    modulated signal.



    Figure 2.24. Quadrature-carrier multiplexing

    Quadrature-carrier multiplexing results in a bandwidth-efficient communication system that is

    comparable in bandwidth efficiency to SSB AM.
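A sketch of quadrature-carrier multiplexing and its synchronous demodulation, assuming a perfectly phase-coherent receiver (in practice the reference is supplied by the PLL of Figure 2.24) and ideal frequency-domain lowpass filters:

```python
import numpy as np

fs, fc, W = 200_000, 10_000, 1_000
t = np.arange(0, 0.04, 1 / fs)
m1 = np.cos(2 * np.pi * 400 * t)
m2 = np.sin(2 * np.pi * 600 * t)

# equation (2.38) with Ac = 1: two DSB-SC signals on quadrature carriers
s = m1 * np.cos(2 * np.pi * fc * t) + m2 * np.sin(2 * np.pi * fc * t)

def lowpass(x, cutoff):
    """Ideal LPF via an FFT mask (brick-wall idealization)."""
    F = np.fft.rfftfreq(len(x), 1 / fs)
    return np.fft.irfft(np.where(F <= cutoff, np.fft.rfft(x), 0), n=len(x))

# synchronous detection with the two phase-coherent references
m1_hat = lowpass(2 * s * np.cos(2 * np.pi * fc * t), W)
m2_hat = lowpass(2 * s * np.sin(2 * np.pi * fc * t), W)
```

The orthogonality of cos and sin over the lowpass band is what keeps the two messages separable; a carrier phase error at the receiver would cause crosstalk between them.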

    Angle Modulation

    Amplitude modulation methods are also called linear-modulation methods. Frequency and

    phase modulation are jointly referred to as angle-modulation methods, and they are non-

    linear.

    Frequency and phase modulation systems generally expand bandwidth such that the effective

    bandwidth of the modulated signal is usually many times the bandwidth of the message

signal. Although angle modulation is more complex to implement and occupies more

bandwidth, the major benefit of these systems is their high degree of noise immunity. This is

the reason that FM systems are widely used in high-fidelity music broadcasting and in point-to-

point communication systems where the transmitter power is quite limited.

    FM and PM Signals

An angle-modulated signal can be written in general as

s(t) = Ac cos(φ(t))

where φ(t) is the phase of the signal, and its instantaneous frequency fi(t) is given by

fi(t) = (1/2π) dφ(t)/dt




Since s(t) is a bandpass signal, it can be represented as

s(t) = Ac cos(2πfc t + φ(t))

with instantaneous frequency

fi(t) = fc + (1/2π) dφ(t)/dt

If m(t) is the message signal, then in a PM system we have

φ(t) = kp m(t) ..........(2.39)

and in an FM system we have

fi(t) = fc + kf m(t) = fc + (1/2π) dφ(t)/dt ..........(2.40)

where kp and kf are the phase- and frequency-deviation constants.

Combining the two cases,

φ(t) = kp m(t) for PM, and φ(t) = 2π kf ∫_{−∞}^{t} m(τ) dτ for FM ..........(2.41)

    Equation 2.41 shows the close relation between FM and PM systems. This relationship makes

    it possible to analyse these systems in parallel.

- If we phase modulate the carrier with the integral of the message, the result is equivalent to FM of the carrier with the message.

- If we frequency modulate the carrier with the derivative of the message, the result is equivalent to PM of the carrier with the message.

Differentiating the phase,

dφ(t)/dt = kp dm(t)/dt for PM, and dφ(t)/dt = 2π kf m(t) for FM ..........(2.42)

    Figure 2.25. A comparison of FM and PM modulators
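The equivalence expressed by equations (2.41)-(2.42) and Figure 2.25 can be checked numerically: phase modulating with the message m(t) gives the same signal as frequency modulating with its derivative, when kf = kp/2π. The sketch below uses illustrative values and simple discrete approximations (np.gradient and a running sum), so the two signals agree only to within the discretization error:

```python
import numpy as np

fs, fc, kp = 100_000, 5_000, 0.8      # sample rate, carrier, phase constant
t = np.arange(0, 0.02, 1 / fs)
m = np.cos(2 * np.pi * 250 * t)

# PM of the message m(t): phase deviation = kp * m(t)
s_pm = np.cos(2 * np.pi * fc * t + kp * m)

# FM of dm/dt with kf = kp / (2*pi): phase = 2*pi*kf * integral of dm/dt,
# plus kp*m[0] to account for the integration constant m(0)
kf = kp / (2 * np.pi)
dm = np.gradient(m, 1 / fs)                 # approximate derivative of m(t)
integ = np.cumsum(dm) / fs                  # approximate running integral
s_fm = np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * integ + kp * m[0])
```

With exact calculus the two signals would be identical; the residual difference here is purely numerical.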



    Figure 2.26. FM & PM square and sawtooth waves

The demodulation of an FM signal involves finding the instantaneous frequency of the

modulated signal and then subtracting the carrier frequency from it. In PM, the demodulation

process is done by finding the phase of the signal and then recovering m(t). The maximum

phase deviation in a PM system is given by

Δφmax = kp max|m(t)|

and the maximum frequency deviation in an FM system is given by

Δfmax = kf max|m(t)|

    Narrowband Angle Modulation

If max|φ(t)| << 1 for all t, the modulated signal can be approximated by s(t) ≈ Ac cos 2πfc t − Ac φ(t) sin 2πfc t. Such a signal is called a narrowband angle-modulated signal; its form is similar to that of an AM signal, except that the modulation term is in quadrature with the carrier.



    Figure 2.27. Vector representation of (a) AM and (b) FM phasor

    As shown in Figure 2.28, the narrowband signal can be generated by using a balanced

    modulator (multiplier).

Figure 2.28. Generation of NBFM using a balanced modulator
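The narrowband approximation s(t) ≈ Ac cos 2πfc t − Ac φ(t) sin 2πfc t can be checked numerically for a small modulation index; a sketch with β = 0.1 and illustrative frequencies:

```python
import numpy as np

fs, fc, fm, beta = 100_000, 5_000, 200, 0.1   # beta: peak phase deviation (rad)
Ac = 1.0
t = np.arange(0, 0.02, 1 / fs)
phi = beta * np.sin(2 * np.pi * fm * t)       # small phase deviation

s_exact = Ac * np.cos(2 * np.pi * fc * t + phi)
s_nb = Ac * np.cos(2 * np.pi * fc * t) - Ac * phi * np.sin(2 * np.pi * fc * t)
```

The pointwise error is bounded by (1 − cos β) + (β − sin β), about 0.005 for β = 0.1, which is why the narrowband signal is generated with an ordinary balanced modulator as in Figure 2.28.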

    Angle Modulation by a Sinusoidal Signal

For a sinusoidal message, the angle-modulated signal is

s(t) = Ac cos(2πfc t + β sin 2πfm t) ..........(2.45)

where β is the modulation index, which can be either βp or βf.

The modulated signal can be written as

s(t) = Re[Ac e^{jβ sin 2πfm t} e^{j2πfc t}] ..........(2.46)

Since β sin 2πfm t is periodic with period Tm = 1/fm, the same is true for the complex exponential signal e^{jβ sin 2πfm t}.

Therefore, it can be expanded in a Fourier series representation. The Fourier series

coefficients are obtained from the integral

Cn = fm ∫_0^{1/fm} e^{jβ sin 2πfm t} e^{−j2πn fm t} dt = (1/2π) ∫_0^{2π} e^{j(β sin u − nu)} du

This integral is known as the Bessel function of the first kind of order n and is denoted by

Jn(β). Therefore, the Fourier series for the complex exponential is



e^{jβ sin 2πfm t} = Σ_{n=−∞}^{∞} Jn(β) e^{j2πn fm t}

Substituting this expansion into equation (2.46), we obtain

s(t) = Re[ Σ_{n=−∞}^{∞} Jn(β) Ac e^{j2πn fm t} e^{j2πfc t} ] = Σ_{n=−∞}^{∞} Ac Jn(β) cos(2π(fc + n fm)t)

The angle-modulated signal contains all frequencies of the form fc + n fm for n = 0, ±1, ±2, ....

Therefore, the bandwidth of the modulated signal is, in theory, infinite. However, since the

sideband amplitudes decay for large |n|, we can define a finite effective bandwidth for the

modulated signal. A series expansion for the Bessel function is

given by

Jn(β) = Σ_{k=0}^{∞} (−1)^k (β/2)^{n+2k} / [k!(k+n)!] ..........(2.47)

For small β, we can use the approximation

Jn(β) ≈ (1/n!)(β/2)^n ..........(2.48)

Thus, for a small modulation index β, only the first sideband, corresponding to n = 1, is of

importance.

The symmetry properties of the Bessel functions are

J−n(β) = Jn(β),  n even

J−n(β) = −Jn(β), n odd

Some useful properties of Jn(β):

- Jn(β) = J−n(β) for n even.

- Jn(β) = −J−n(β) for n odd.

- For β << 1, J0(β) ≈ 1 − β²/4, so that J0(0) = 1.

- For β << 1 and n > 0, Jn(β) ≈ (1/n!)(β/2)^n, so that Jn(0) = 0 for n ≠ 0.

Ex.

Let the carrier be given by c(t) = 10 cos(2πfc t), and let the message signal be m(t) = cos(20πt).

Assume that the message is used to frequency modulate the carrier with kf = 50. Find the

expression for the modulated signal, and determine how many harmonics should be selected to

contain 99% of the modulated signal power.


Solution

The power content of the carrier signal is given by

Pc = Ac²/2 = 100/2 = 50

The modulated signal is represented by

s(t) = 10 cos(2πfc t + 2πkf ∫_0^t cos(20πτ) dτ)

     = 10 cos(2πfc t + (2π·50/(20π)) sin(20πt))

     = 10 cos(2πfc t + 5 sin(20πt))

The modulation index is given by

β = kf max|m(t)| / fm = 50/10 = 5

Using the Fourier series expansion of the FM signal, s(t) may therefore be written as

s(t) = Σ_{n=−∞}^{∞} 10 Jn(5) cos(2π(fc + 10n)t)
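The harmonic count asked for in the example can be verified numerically using the series expansion (2.47) for Jn(β). The sketch below sums the power in the carrier line plus the first k sideband pairs until 99% of the total power Ac²/2 = 50 is reached:

```python
import math

def Jn(n, beta, terms=40):
    """Bessel function of the first kind via the series of equation (2.47)."""
    return sum((-1) ** k * (beta / 2) ** (n + 2 * k)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

Ac, beta = 10.0, 5.0
total = Ac ** 2 / 2                       # total FM signal power (= 50)

def partial_power(kmax):
    """Power in the carrier line plus the first kmax sideband pairs."""
    return (Ac ** 2 / 2) * (Jn(0, beta) ** 2
                            + 2 * sum(Jn(n, beta) ** 2
                                      for n in range(1, kmax + 1)))

k = 0
while partial_power(k) < 0.99 * total:
    k += 1
# k is now the number of sideband pairs needed for 99% of the power
```

Under these assumptions the loop stops at k = 6: the carrier and the sidebands up to n = ±6 (i.e., frequencies out to fc ± 60 Hz for fm = 10 Hz) carry at least 99% of the modulated signal power.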