Line Coding

ADV. COMMUNICATION LAB 6TH SEM E&C

LINE CODING

Line coding consists of representing the digital signal to be transported by an

amplitude- and time-discrete signal that is optimally tuned for the specific

properties of the physical channel (and of the receiving equipment). The

waveform pattern of voltage or current used to represent the 1s and 0s of a

digital data on a transmission link is called line encoding. The common types

of line encoding are unipolar, polar, bipolar and Manchester encoding.

For reliable clock recovery at the receiver, one usually imposes a maximum

run length constraint on the generated channel sequence, i.e. the maximum

number of consecutive ones or zeros is bounded to a reasonable number. A

clock period is recovered by observing transitions in the received sequence,

so that a maximum run length guarantees such clock recovery, while

sequences without such a constraint could seriously hamper the detection

quality.
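As a rough illustration (ours, not part of the lab manual), the quantity a run-length constraint bounds can be computed in a few lines of Python; the function name and the sample sequence are our own:

```python
def max_run_length(bits):
    """Length of the longest run of identical consecutive symbols."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1   # extend or restart the run
        longest = max(longest, run)
    return longest

# A channel sequence containing four consecutive zeros has max run length 4.
print(max_run_length([1, 1, 0, 0, 0, 0, 1, 0]))  # 4
```

A run-length-limited line code guarantees this value never exceeds some bound, so the receiver always observes transitions often enough to recover the clock.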

After line coding, the signal is put through a "physical channel", either a

"transmission medium" or "data storage medium". Sometimes the

characteristics of two very different-seeming channels are similar enough

that the same line code is used for them. The most common physical channels

are:

- The line-coded signal can be put directly on a transmission line, in the form of variations of the voltage or current (often using differential signaling).

- The line-coded signal (the "baseband signal") undergoes further pulse shaping (to reduce its frequency bandwidth) and is then modulated (to shift its frequency bandwidth) to create the "RF signal" that can be sent through free space.

- The line-coded signal can be used to turn a light on and off in free-space optics, most commonly infrared remote control.

- The line-coded signal can be printed on paper to create a bar code.

- The line-coded signal can be converted to magnetized spots on a hard drive or tape drive.

- The line-coded signal can be converted to pits on an optical disc.

Unfortunately, most long-distance communication channels cannot

transport a DC component. The DC component is also called the

disparity, the bias, or the DC coefficient. The simplest possible line

code, called unipolar because it has an unbounded DC component,

gives too many errors on such systems.

Most line codes eliminate the DC component; such codes are called DC-balanced, zero-DC, zero-bias, or DC-equalized. There are two

ways of eliminating the DC component:

Use a constant-weight code. In other words, design each transmitted

code word such that every code word that contains some positive or

negative levels also contains enough of the opposite levels, such that

the average level over each code word is zero. For example,

Manchester code and Interleaved 2 of 5.

Use a paired disparity code. In other words, design the receiver such

that every code word that averages to a negative level is paired with

another code word that averages to a positive level. Design the

receiver so that either code word of the pair decodes to the same data

bits. Design the transmitter to keep track of the running DC buildup,

and always pick the code word that pushes the DC level back towards

zero. For example, AMI, 8B10B, 4B3T, etc.
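The paired-disparity idea can be sketched for AMI, the simplest of the examples named above. In this illustrative Python sketch (ours, not from the manual), successive 1 bits alternate between the +1 and -1 levels, so the running DC buildup of the marks never drifts:

```python
def ami_encode(bits):
    """Alternate Mark Inversion: a 0 is sent as 0 V, and successive 1s
    alternate between +1 and -1, keeping the running DC buildup bounded."""
    level, out = 1, []
    for b in bits:
        if b:
            out.append(level)
            level = -level          # flip polarity for the next mark
        else:
            out.append(0)
    return out

symbols = ami_encode([1, 0, 1, 1, 0, 1])
print(symbols)       # [1, 0, -1, 1, 0, -1]
print(sum(symbols))  # 0: the sequence averages to zero, i.e. no DC component
```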

Line coding should make it possible for the receiver to synchronize itself to

the phase of the received signal. If the synchronization is not ideal, then the

signal to be decoded will not have optimal differences (in amplitude)

between the various digits or symbols used in the line code. This will

increase the error probability in the received data.

It is also preferred for the line code to have a structure that will enable error

detection.

Note that the line-coded signal and a signal produced at a terminal may

differ, thus requiring translation.

A line code will typically reflect technical requirements of the transmission

medium, such as optical fiber or shielded twisted pair. These requirements

are unique for each medium, because each one has different behavior related

to interference, distortion, capacitance and loss of amplitude.

COMMON LINE CODES

AMI

Modified AMI codes: B8ZS, B6ZS, B3ZS, HDB3

2B1Q

4B5B

4B3T

6b/8b encoding

Hamming Code

8b/10b encoding

64b/66b encoding

128b/130b encoding

Coded mark inversion (CMI)

Conditioned Diphase

Eight-to-Fourteen Modulation (EFM) used in Compact Disc

EFMPlus used in DVD

RZ — Return-to-zero

NRZ — Non-return-to-zero

NRZI — Non-return-to-zero, inverted

Manchester code (also variants Differential Manchester & Biphase

mark code)

Miller encoding (also known as Delay encoding or Modified

Frequency Modulation, and has variant Modified Miller encoding)

MLT-3 Encoding

Hybrid Ternary Codes

Surround by complement (SBC)

TC-PAM

Optical line codes:

Carrier-Suppressed Return-to-Zero

Alternate-Phase Return-to-Zero

NON-RETURN-TO-ZERO

(Figure: a binary signal encoded using rectangular pulse amplitude modulation with polar non-return-to-zero code.)

In telecommunication, a non-return-to-zero (NRZ) line code is a binary

code in which 1's are represented by one significant condition (usually a

positive voltage) and 0's are represented by some other significant condition

(usually a negative voltage), with no other neutral or rest condition. The

pulses have more energy than a RZ code. Unlike RZ, NRZ does not have a rest

state. NRZ is not inherently a self-synchronizing code, so some additional

synchronization technique (for example a run length limited constraint, or a

parallel synchronization signal) must be used to avoid bit slip.

For a given data signaling rate, i.e., bit rate, the NRZ code requires only half

the bandwidth required by the Manchester code.

When used to represent data in an asynchronous communication scheme, the

absence of a neutral state requires other mechanisms for bit synchronization

when a separate clock signal is not available.

NRZ-Level itself is not a synchronous system but rather an encoding that can

be used in either a synchronous or asynchronous transmission environment,

that is, with or without an explicit clock signal involved. Because of this, it is

not strictly necessary to discuss how the NRZ-Level encoding acts "on a clock

edge" or "during a clock cycle" since all transitions happen in the given

amount of time representing the actual or implied integral clock cycle. The

real question is that of sampling: the high or low state will be received

correctly provided the transmission line has stabilized for that bit when the

physical line level is sampled at the receiving end.

However, it is helpful to see NRZ transitions as happening on the trailing

(falling) clock edge in order to compare NRZ-Level to other encoding

methods, such as the mentioned Manchester code, which requires clock edge

information (is the XOR of the clock and NRZ, actually) and to see the

difference between NRZ-Mark and NRZ-Inverted.

UNIPOLAR NON-RETURN-TO-ZERO LEVEL

Main article: On-off keying

"One" is represented by one physical level (such as a DC bias on the

transmission line).

"Zero" is represented by the absence of that level (usually zero volts).

In clock language, "one" transitions or remains high on the trailing clock edge

of the previous bit and "zero" transitions or remains low on the trailing clock

edge of the previous bit, or just the opposite. This allows for long series

without change, which makes synchronization difficult. One solution is to not

send bytes without transitions. The disadvantages of on-off keying are the power wasted in transmitting the DC level and the fact that the power spectrum of the transmitted signal does not approach zero at zero frequency. See RLL.

BIPOLAR NON-RETURN-TO-ZERO LEVEL

"One" is represented by one physical level (usually a negative voltage).

"Zero" is represented by another level (usually a positive voltage).

In clock language, in bipolar NRZ-Level the voltage "swings" from positive to

negative on the trailing edge of the previous bit clock cycle.

An example of this is RS-232, where "one" is −5 V to −12 V and "zero" is +5 V to +12 V.

NON-RETURN-TO-ZERO SPACE

"One" is represented by no change in physical level.

"Zero" is represented by a change in physical level.

In clock language, the level transitions on the trailing clock edge of the

previous bit to represent a "zero."

This "change-on-zero" is used by High-Level Data Link Control and USB. They

both avoid long periods of no transitions (even when the data contains long

sequences of 1 bits) by using zero-bit insertion. HDLC transmitters insert a 0

bit after five contiguous 1 bits (except when transmitting the frame delimiter

'01111110'). USB transmitters insert a 0 bit after six consecutive 1 bits. The

receiver at the far end uses every transition — both from 0 bits in the data

and these extra non-data 0 bits — to maintain clock synchronization. The

receiver otherwise ignores these non-data 0 bits.
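The zero-bit insertion described above can be sketched in Python (our illustration; the `run` parameter is 5 for HDLC and 6 for USB):

```python
def stuff_bits(bits, run=5):
    """Insert a non-data 0 after each run of `run` consecutive 1 bits,
    guaranteeing the receiver a transition for clock synchronization."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == run:
            out.append(0)   # extra non-data 0 bit; the receiver discards it
            ones = 0
    return out

print(stuff_bits([1] * 7))         # HDLC: [1, 1, 1, 1, 1, 0, 1, 1]
print(stuff_bits([1] * 7, run=6))  # USB:  [1, 1, 1, 1, 1, 1, 0, 1]
```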

NON-RETURN-TO-ZERO INVERTED (NRZI)

(Figure: example NRZI encoding, in the variant where a transition occurs for a zero.)

Non return to zero, inverted (NRZI) is a method of mapping a binary signal

to a physical signal for transmission over some transmission media. The two

level NRZI signal has a transition at a clock boundary if the bit being

transmitted is a logical 1, and does not have a transition if the bit being

transmitted is a logical 0.

"One" is represented by a transition of the physical level.

"Zero" has no transition.

Also, NRZI might take the opposite convention, as in Universal Serial Bus

(USB) signalling, when in Mode 1 (transition when signalling zero and steady

level when signalling one). The transition occurs on the leading edge of the

clock for the given bit. This distinguishes NRZI from NRZ-Mark.

However, even NRZI can have long series of zeros (or ones if transitioning on

"zero"), so clock recovery can be difficult unless some form of run length

limited (RLL) coding is used on top. Magnetic disk and tape storage devices

generally use fixed-rate RLL codes, while USB uses bit stuffing, which is

efficient, but results in a variable data rate: it takes slightly longer to send a

long string of 1 bits over USB than it does to send a long string of 0 bits. (USB

inserts an additional 0 bit after 6 consecutive 1 bits.)
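The transition-on-one rule is mechanical enough to sketch in a few lines of Python (our illustration; the starting level is arbitrary):

```python
def nrzi_encode(bits, level=0):
    """NRZI: a 1 toggles the physical level, a 0 leaves it unchanged.
    Note that a long string of 0s produces no transitions at all."""
    out = []
    for b in bits:
        if b:
            level ^= 1       # "one" is represented by a transition
        out.append(level)
    return out

print(nrzi_encode([1, 0, 1, 1, 0, 0]))  # [1, 1, 0, 1, 1, 1]
print(nrzi_encode([0, 0, 0, 0]))        # [0, 0, 0, 0] -- why RLL is needed
```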

MANCHESTER CODE

In telecommunication, Manchester code (also known as Phase Encoding, or

PE) is a line code in which the encoding of each data bit has at least one

transition and occupies the same time. It therefore has no DC component, and

is self-clocking, which means that it may be inductively or capacitively

coupled, and that a clock signal can be recovered from the encoded data.

Manchester code is widely used (e.g. in Ethernet; see also RFID). There are

more complex codes, such as 8B/10B encoding, that use less bandwidth to

achieve the same data rate but may be less tolerant of frequency errors and

jitter in the transmitter and receiver reference clocks.

FEATURES

Manchester code ensures frequent line voltage transitions, directly

proportional to the clock rate. This helps clock recovery.

The DC component of the encoded signal is not dependent on the data and

therefore carries no information, allowing the signal to be conveyed

conveniently by media (e.g. Ethernet) which usually do not convey a DC

component.

DESCRIPTION

(Figure: an example of Manchester encoding showing both conventions.)

Extracting the original data from the received encoded bit (from Manchester

as per 802.3):

original data XOR clock = Manchester value:

  data  clock  Manchester
   0      0        0
   0      1        1
   1      0        1
   1      1        0

Summary:

Each bit is transmitted in a fixed time (the "period").

A 0 is expressed by a low-to-high transition, a 1 by a high-to-low transition (according to G. E. Thomas' convention; in the IEEE 802.3 convention, the reverse is true).

The transitions which signify 0 or 1 occur at the midpoint of a period.

Transitions at the start of a period are overhead and don't signify

data.

Manchester code always has a transition at the middle of each bit period and

may (depending on the information to be transmitted) have a transition at

the start of the period also. The direction of the mid-bit transition indicates

the data. Transitions at the period boundaries do not carry information. They

exist only to place the signal in the correct state to allow the mid-bit

transition. The existence of guaranteed transitions allows the signal to be

self-clocking, and also allows the receiver to align correctly; the receiver can

identify if it is misaligned by half a bit period, as there will no longer always

be a transition during each bit period. The price of these benefits is a

doubling of the bandwidth requirement compared to simpler NRZ coding

schemes (or see also NRZI).

In the Thomas convention, the result is that the first half of a bit period

matches the information bit and the second half is its complement.
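The data-XOR-clock relation above makes the encoder a one-liner per bit. A minimal Python sketch (ours), using the IEEE 802.3 convention in which a 0 bit is sent high-low and a 1 bit low-high:

```python
def manchester_encode(bits):
    """IEEE 802.3 Manchester: XOR each bit with the two clock half-periods
    (taken here as 1 then 0), giving two line levels per data bit:
    0 -> high-low, 1 -> low-high, with a guaranteed mid-bit transition."""
    out = []
    for b in bits:
        out.extend([b ^ 1, b ^ 0])
    return out

print(manchester_encode([0, 1]))  # [1, 0, 0, 1]: high-low then low-high
```

The guaranteed mid-bit transition is what makes the code self-clocking, at the cost of two line symbols per data bit, i.e. the doubled bandwidth requirement noted above.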

MANCHESTER ENCODING AS PHASE-SHIFT KEYING

Manchester encoding is a special case of binary phase-shift keying (BPSK),

where the data controls the phase of a square wave carrier whose frequency

is the data rate. Such a signal is easy to generate.

CONVENTIONS FOR REPRESENTATION OF DATA

(Figure: encoding of 11011000100 in Manchester code, as per G. E. Thomas.)

There are two opposing conventions for the representations of data.

The first of these was first published by G. E. Thomas in 1949 and is followed

by numerous authors (e.g., Tanenbaum). It specifies that for a 0 bit the signal

levels will be Low-High (assuming an amplitude physical encoding of the data), with a low level in the first half of the bit period and a high level in the

second half. For a 1 bit the signal levels will be High-Low.

The second convention is also followed by numerous authors (e.g., Stallings)

as well as by IEEE 802.4 (token bus) and lower speed versions of IEEE 802.3

(Ethernet) standards. It states that a logic 0 is represented by a High-Low

signal sequence and a logic 1 is represented by a Low-High signal sequence.

If a Manchester encoded signal is inverted in communication, it is

transformed from one convention to the other. This ambiguity can be

overcome by using differential Manchester encoding.

UNIVERSAL ASYNCHRONOUS RECEIVER/TRANSMITTER

A universal asynchronous receiver/transmitter (usually abbreviated

UART and pronounced /ˈjuːɑːrt/) is a type of "asynchronous

receiver/transmitter", a piece of computer hardware that translates data

between parallel and serial forms. UARTs are commonly used in conjunction

with communication standards such as EIA RS-232, RS-422 or RS-485. The

universal designation indicates that the data format and transmission speeds

are configurable and that the actual electric signaling levels and methods

(such as differential signaling) typically are handled by a special driver

circuit external to the UART.

A UART is usually an individual (or part of an) integrated circuit used for

serial communications over a computer or peripheral device serial port.

UARTs are now commonly included in microcontrollers. A dual UART, or

DUART, combines two UARTs into a single chip. Many modern ICs now come

with a UART that can also communicate synchronously; these devices are

called USARTs (universal synchronous/asynchronous receiver/transmitter).

TRANSMITTING AND RECEIVING SERIAL DATA

See also: Asynchronous serial communication

The Universal Asynchronous Receiver/Transmitter (UART) takes bytes of

data and transmits the individual bits in a sequential fashion. At the

destination, a second UART re-assembles the bits into complete bytes. Each

UART contains a shift register which is the fundamental method of

conversion between serial and parallel forms. Serial transmission of digital

information (bits) through a single wire or other medium is much more cost

effective than parallel transmission through multiple wires.

The UART usually does not directly generate or receive the external signals

used between different items of equipment. Separate interface devices are

used to convert the logic level signals of the UART to and from the external

signaling levels. External signals may be of many different forms. Examples of

standards for voltage signaling are RS-232, RS-422 and RS-485 from the EIA.

Historically, the presence or absence of current (in current loops) was used

in telegraph circuits. Some signaling schemes do not use electrical wires.

Examples of such are optical fiber, IrDA (infrared), and (wireless) Bluetooth

in its Serial Port Profile (SPP). Some signaling schemes use modulation of a

carrier signal (with or without wires). Examples are modulation of audio

signals with phone line modems, RF modulation with data radios, and the DC-

LIN for power line communication.

Communication may be "full duplex" (both send and receive at the same

time) or "half duplex" (devices take turns transmitting and receiving).

CHARACTER FRAMING

Each character is sent as a logic low start bit, a configurable number of data

bits (usually 7 or 8, sometimes 5), an optional parity bit, and one or more

logic high stop bits. The start bit signals the receiver that a new character is

coming. The next five to eight bits, depending on the code set employed,

represent the character. Following the data bits may be a parity bit. The next

one or two bits are always in the mark (logic high, i.e., '1') condition and

called the stop bit(s). They signal the receiver that the character is

completed. Since the start bit is logic low (0) and the stop bit is logic high (1)

then there is always a clear demarcation between the previous character and

the next one.
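Putting the framing rules together, a character frame can be sketched in Python (our illustration; 8 data bits, optional even parity, one stop bit):

```python
def uart_frame(byte, parity=None):
    """Frame one 8-bit character: start bit (0), data bits LSB first,
    optional even parity bit, then a single stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]   # LSB is transmitted first
    frame = [0] + data                           # start bit marks the frame
    if parity == "even":
        frame.append(sum(data) % 2)              # makes the count of 1s even
    frame.append(1)                              # stop bit: mark condition
    return frame

print(uart_frame(0x41))  # 'A', 8N1: [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

With the common 8N1 configuration each character occupies 10 bit times, which is why, as noted later in the application section, the character rate equals the bit rate divided by 10.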

RECEIVER

All operations of the UART hardware are controlled by a clock signal which

runs at a multiple (say, 16) of the data rate: each data bit is as long as 16

clock pulses. The receiver tests the state of the incoming signal on each clock

pulse, looking for the beginning of the start bit. If the apparent start bit lasts

at least one-half of the bit time, it is valid and signals the start of a new

character. If not, the spurious pulse is ignored. After waiting a further bit

time, the state of the line is again sampled and the resulting level clocked into

a shift register. After the required number of bit periods for the character

length (5 to 8 bits, typically) have elapsed, the contents of the shift register is

made available (in parallel fashion) to the receiving system. The UART will

set a flag indicating new data is available, and may also generate a processor

interrupt to request that the host processor transfers the received data. In

some common types of UART, a small first-in, first-out FIFO buffer memory is

inserted between the receiver shift register and the host system interface.

This allows the host processor more time to handle an interrupt from the

UART and prevents loss of received data at high rates.
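The 16x-oversampling receiver described above can be sketched as follows (a simplified Python illustration, ours; a real UART validates the start bit over half a bit time rather than with a single midpoint sample):

```python
def receive_character(samples, oversample=16, nbits=8):
    """Find the falling edge of the start bit, check its midpoint, then
    sample each data bit at the middle of its bit period (LSB first)."""
    edge = samples.index(0)                  # first low sample: start-bit edge
    mid = edge + oversample // 2
    if samples[mid] != 0:                    # spurious pulse, not a start bit
        return None
    return [samples[mid + oversample * (i + 1)] for i in range(nbits)]

# Idle-high line followed by an 8N1 frame for 0x41 ('A'), 16 samples per bit.
frame = [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
samples = [1] * 8 + [s for bit in frame for s in [bit] * 16]
print(receive_character(samples))  # [1, 0, 0, 0, 0, 0, 1, 0]: 0x41, LSB first
```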

TRANSMITTER

Transmission operation is simpler since it is under the control of the

transmitting system. As soon as data is deposited in the shift register after

completion of the previous character, the UART hardware generates a start

bit, shifts the required number of data bits out to the line, generates and

appends the parity bit (if used), and appends the stop bits. Since transmission

of a single character may take a long time relative to CPU speeds, the UART

will maintain a flag showing busy status so that the host system does not

deposit a new character for transmission until the previous one has been

completed; this may also be done with an interrupt. Since full-duplex

operation requires characters to be sent and received at the same time,

practical UARTs use two different shift registers for transmitted characters

and received characters.

APPLICATION

Transmitting and receiving UARTs must be set for the same bit speed,

character length, parity, and stop bits for proper operation. The receiving

UART may detect some mismatched settings and set a "framing error" flag bit

for the host system; in exceptional cases the receiving UART will produce an

erratic stream of mutilated characters and transfer them to the host system.

Typical serial ports used with personal computers connected to modems use

eight data bits, no parity, and one stop bit; for this configuration the number

of ASCII characters per second equals the bit rate divided by 10.

Some very low-cost home computers or embedded systems dispensed with a

UART and used the CPU to sample the state of an input port or directly

manipulate an output port for data transmission. While very CPU-intensive,

since the CPU timing was critical, these schemes avoided the purchase of a

costly UART chip. The technique was known as a bit-banging serial port.

SYNCHRONOUS TRANSMISSION

USART chips have both synchronous and asynchronous modes. In

synchronous transmission, the clock data is recovered separately from the

data stream and no start/stop bits are used. This improves the efficiency of

transmission on suitable channels since more of the bits sent are usable data

and not character framing. An asynchronous transmission sends no

characters over the interconnection when the transmitting device has

nothing to send; but a synchronous interface must send "pad" characters to

maintain synchronization between the receiver and transmitter. The usual

filler is the ASCII "SYN" character. This may be done automatically by the

transmitting device.

HISTORY

Some early telegraph schemes used variable-length pulses (as in Morse code)

and rotating clockwork mechanisms to transmit alphabetic characters. The

first UART-like devices (with fixed-length pulses) were rotating mechanical

switches (commutators). These sent 5-bit Baudot codes for mechanical

teletypewriters, and replaced Morse code. Later, ASCII required a seven-bit

code. When IBM built computers in the early 1960s with 8-bit characters, it

became customary to store the ASCII code in 8 bits.

Gordon Bell designed the UART for the PDP series of computers. Western

Digital made the first single-chip UART, the WD1402A, around 1971; this was an

early example of a medium scale integrated circuit.

An example of an early 1980s UART was the National Semiconductor 8250.

In the 1990s, newer UARTs were developed with on-chip buffers. This

allowed higher transmission speed without data loss and without requiring

such frequent attention from the computer. For example, the popular

National Semiconductor 16550 has a 16 byte FIFO, and spawned many

variants, including the 16C550, 16C650, 16C750, and 16C850.

Depending on the manufacturer, different terms are used to identify devices

that perform the UART functions. Intel called their 8251 device a

"Programmable Communication Interface". MOS Technology 6551 was

known under the name "Asynchronous Communications Interface Adapter"

(ACIA). The term "Serial Communications Interface" (SCI) was first used at

Motorola around 1975 to refer to their start-stop asynchronous serial

interface device, which others were calling a UART.

STRUCTURE

A UART usually contains the following components:

a clock generator, usually a multiple of the bit rate to allow sampling

in the middle of a bit period.

input and output shift registers

transmit/receive control

read/write control logic

transmit/receive buffers (optional)

parallel data bus buffer (optional)

First-in, first-out (FIFO) buffer memory (optional)

SPECIAL RECEIVER CONDITIONS

OVERRUN ERROR

An "overrun error" occurs when the receiver cannot process the character

that just came in before the next one arrives. Various devices have different

amounts of buffer space to hold received characters. The CPU must service

the UART in order to remove characters from the input buffer. If the CPU

does not service the UART quickly enough and the buffer becomes full, an

Overrun Error will occur.

UNDERRUN ERROR

An "underrun error" occurs when the UART transmitter has completed

sending a character and the transmit buffer is empty. In asynchronous modes

this is treated as an indication that no data remains to be transmitted, rather

than an error, since additional stop bits can be appended. This error

indication is commonly found in USARTs, since an underrun is more serious

in synchronous systems.

FRAMING ERROR

A "framing error" occurs when the designated "start" and "stop" bits are not

valid. As the "start" bit is used to identify the beginning of an incoming

character, it acts as a reference for the remaining bits. If the data line is not in

the expected idle state when the "stop" bit is expected, a Framing Error will

occur.

PARITY ERROR

A "parity error" occurs when the parity of an incoming data character does not match the expected value, i.e. when the number of "active" bits does not agree with the specified parity configuration of the USART. Because the parity bit is optional, this error cannot occur if parity has been disabled.

BREAK CONDITION

A "break condition" occurs when the receiver input is at the "space" level for

longer than some duration, typically more than one character time.

This is not necessarily an error, but appears to the receiver as a character of

all zero bits with a framing error.

Some equipment will deliberately transmit the "break" level for longer than a

character as an out-of-band signal. When signaling rates are mismatched, no

meaningful characters can be sent, but a long "break" signal can be a useful

way to get the attention of a mismatched receiver to do something (such as

resetting itself). Unix-like systems can use the long "break" level as a request

to change the signaling rate, to support dial-in access at multiple signaling

rates.

Phase-shift keying (PSK) is a digital modulation scheme that conveys data

by changing, or modulating, the phase of a reference signal (the carrier

wave).

Any digital modulation scheme uses a finite number of distinct signals to

represent digital data. PSK uses a finite number of phases, each assigned a

unique pattern of binary digits. Usually, each phase encodes an equal number

of bits. Each pattern of bits forms the symbol that is represented by the

particular phase. The demodulator, which is designed specifically for the

symbol-set used by the modulator, determines the phase of the received

signal and maps it back to the symbol it represents, thus recovering the

original data. This requires the receiver to be able to compare the phase of

the received signal to a reference signal — such a system is termed coherent

(and referred to as CPSK).

Alternatively, instead of using the bit patterns to set the phase of the wave, they can be used to change it by a specified amount. The demodulator then

determines the changes in the phase of the received signal rather than the

phase itself. Since this scheme depends on the difference between successive

phases, it is termed differential phase-shift keying (DPSK). DPSK can be

significantly simpler to implement than ordinary PSK since there is no need

for the demodulator to have a copy of the reference signal to determine the

exact phase of the received signal (it is a non-coherent scheme). In exchange,

it produces more erroneous demodulations. The exact requirements of the

particular scenario under consideration determine which scheme is used.

INTRODUCTION

There are three major classes of digital modulation techniques used for

transmission of digitally represented data:

Amplitude-shift keying (ASK)

Frequency-shift keying (FSK)

Phase-shift keying (PSK)

All convey data by changing some aspect of a base signal, the carrier wave

(usually a sinusoid), in response to a data signal. In the case of PSK, the phase

is changed to represent the data signal. There are two fundamental ways of

utilizing the phase of a signal in this way:

By viewing the phase itself as conveying the information, in which

case the demodulator must have a reference signal to compare the

received signal's phase against; or

By viewing the change in the phase as conveying information —

differential schemes, some of which do not need a reference carrier

(to a certain extent).

A convenient way to represent PSK schemes is on a constellation diagram.

This shows the points in the Argand plane where, in this context, the real and

imaginary axes are termed the in-phase and quadrature axes respectively

due to their 90° separation. Such a representation on perpendicular axes

lends itself to straightforward implementation. The amplitude of each point

along the in-phase axis is used to modulate a cosine (or sine) wave and the

amplitude along the quadrature axis to modulate a sine (or cosine) wave.

In PSK, the constellation points chosen are usually positioned with uniform

angular spacing around a circle. This gives maximum phase-separation

between adjacent points and thus the best immunity to corruption. They are

positioned on a circle so that they can all be transmitted with the same

energy. In this way, the moduli of the complex numbers they represent will

be the same and thus so will the amplitudes needed for the cosine and sine

waves. Two common examples are "binary phase-shift keying" (BPSK) which

uses two phases, and "quadrature phase-shift keying" (QPSK) which uses

four phases, although any number of phases may be used. Since the data to be

conveyed are usually binary, the PSK scheme is usually designed with the

number of constellation points being a power of 2.

DEFINITIONS

For determining error-rates mathematically, some definitions will be needed:

Eb = Energy-per-bit

Es = Energy-per-symbol = nEb with n bits per symbol

Tb = Bit duration

Ts = Symbol duration

N0 / 2 = Noise power spectral density (W/Hz)

Pb = Probability of bit-error

Ps = Probability of symbol-error

Q(x) will give the probability that a single sample taken from a random

process with zero-mean and unit-variance Gaussian probability density

function will be greater than or equal to x. It is a scaled form of the

complementary Gaussian error function:


Q(x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt = (1/2) erfc(x/√2).
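The Q-function can be evaluated directly from the standard library's complementary error function; a minimal sketch (the helper name is ours, not standard):

```python
from math import erfc, sqrt

def q_function(x: float) -> float:
    """Tail probability of a zero-mean, unit-variance Gaussian:
    Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

# Q(0) is exactly 1/2: half the probability mass of a zero-mean
# Gaussian lies above its mean, and Q decreases as x grows.
```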

The error-rates quoted here are those in additive white Gaussian noise

(AWGN). These error rates are lower than those computed in fading

channels and hence serve as a good theoretical benchmark for comparison.

APPLICATIONS

Owing to PSK's simplicity, particularly when compared with its competitor

quadrature amplitude modulation, it is widely used in existing technologies.

The wireless LAN standard, IEEE 802.11b-1999 [1] [2] , uses a variety of different

PSKs depending on the data-rate required. At the basic-rate of 1 Mbit/s, it

uses DBPSK (differential BPSK). To provide the extended-rate of 2 Mbit/s,

DQPSK is used. In reaching 5.5 Mbit/s and the full-rate of 11 Mbit/s, QPSK is

employed, but has to be coupled with complementary code keying. The

higher-speed wireless LAN standard, IEEE 802.11g-2003 [1] [3] has eight data

rates: 6, 9, 12, 18, 24, 36, 48 and 54 Mbit/s. The 6 and 9 Mbit/s modes use

OFDM modulation where each sub-carrier is BPSK modulated. The 12 and 18

Mbit/s modes use OFDM with QPSK. The fastest four modes use OFDM with

forms of quadrature amplitude modulation.

Because of its simplicity BPSK is appropriate for low-cost passive

transmitters, and is used in RFID standards such as ISO/IEC 14443 which has

been adopted for biometric passports, credit cards such as American

Express's ExpressPay, and many other applications[4].

Bluetooth 2 will use π / 4-DQPSK at its lower rate (2 Mbit/s) and 8-DPSK at

its higher rate (3 Mbit/s) when the link between the two devices is

sufficiently robust. Bluetooth 1 modulates with Gaussian minimum-shift

keying, a binary scheme, so either modulation choice in version 2 will yield a

higher data-rate. A similar technology, IEEE 802.15.4 (the wireless standard

used by ZigBee) also relies on PSK. IEEE 802.15.4 allows the use of two

frequency bands: 868–915 MHz using BPSK and 2.4 GHz using OQPSK.

Notably absent from these various schemes is 8-PSK. This is because its

error-rate performance is close to that of 16-QAM — it is only about 0.5 dB

better — but its data rate is only three-quarters that of 16-QAM.

Thus 8-PSK is often omitted from standards and, as seen above, schemes tend

to 'jump' from QPSK to 16-QAM (8-QAM is possible but difficult to

implement).

BINARY PHASE-SHIFT KEYING (BPSK)

Constellation diagram example for BPSK.

BPSK (also sometimes called PRK, Phase Reversal Keying, or 2PSK) is the

simplest form of phase shift keying (PSK). It uses two phases which are

separated by 180° and so can also be termed 2-PSK. It does not particularly

matter exactly where the constellation points are positioned, and in this

figure they are shown on the real axis, at 0° and 180°. This modulation is the

most robust of all the PSKs since it takes the highest level of noise or

distortion to make the demodulator reach an incorrect decision. It is,

however, only able to modulate at 1 bit/symbol (as seen in the figure) and so

is unsuitable for high data-rate applications when bandwidth is limited.


In the presence of an arbitrary phase-shift introduced by the

communications channel, the demodulator is unable to tell which

constellation point is which. As a result, the data is often differentially

encoded prior to modulation.

IMPLEMENTATION

The general form for BPSK follows the equation:

s_n(t) = √(2E_b/T_b) cos(2πf_c t + π(1 − n)),  n = 0, 1.

This yields two phases, 0 and π. In the specific form, binary data is often
conveyed with the following signals:

s_0(t) = √(2E_b/T_b) cos(2πf_c t + π) = −√(2E_b/T_b) cos(2πf_c t)  for binary "0"

s_1(t) = √(2E_b/T_b) cos(2πf_c t)  for binary "1"

where f_c is the frequency of the carrier-wave.

Hence, the signal-space can be represented by the single basis function

φ(t) = √(2/T_b) cos(2πf_c t),

where 1 is represented by √E_b · φ(t) and 0 is represented by −√E_b · φ(t).
This assignment is, of course, arbitrary.

The use of this basis function is shown at the end of the next section in a

signal timing diagram. The topmost signal is a BPSK-modulated cosine wave

that the BPSK modulator would produce. The bit-stream that causes this

output is shown above the signal (the other parts of this figure are relevant

only to QPSK).

BIT ERROR RATE

The bit error rate (BER) of BPSK in AWGN can be calculated as[5]:

P_b = Q(√(2E_b/N_0))

or

P_b = (1/2) erfc(√(E_b/N_0)).

Since there is only one bit per symbol, this is also the symbol error rate.
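This formula can be checked with a short Monte-Carlo simulation. The sketch below (function names are ours) normalises E_b to 1, so the per-sample noise standard deviation is √(N_0/2):

```python
import random
from math import erfc, sqrt

def bpsk_ber_sim(eb_n0_db: float, n_bits: int = 200_000, seed: int = 1) -> float:
    """Estimate BPSK bit-error rate over an AWGN channel by simulation.
    Bits map to antipodal symbols +1/-1 (Eb normalised to 1)."""
    rng = random.Random(seed)
    eb_n0 = 10 ** (eb_n0_db / 10)
    sigma = sqrt(1.0 / (2.0 * eb_n0))  # noise std dev: N0/2 per dimension
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit == 1 else -1.0
        rx = tx + rng.gauss(0.0, sigma)
        if (rx > 0.0) != (bit == 1):   # sign slicer disagrees with sent bit
            errors += 1
    return errors / n_bits

def bpsk_ber_theory(eb_n0_db: float) -> float:
    """P_b = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(10 ** (eb_n0_db / 10)))
```

At 4 dB the simulated rate lands within Monte-Carlo noise of the closed form.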

QUADRATURE PHASE-SHIFT KEYING (QPSK)

Constellation diagram for QPSK with Gray coding. Each adjacent symbol only

differs by one bit.

Sometimes this is known as quaternary PSK, quadriphase PSK, 4-PSK, or
4-QAM. (Although the root concepts of QPSK and 4-QAM are different, the
resulting modulated radio waves are exactly the same.) QPSK uses four

points on the constellation diagram, equispaced around a circle. With four

phases, QPSK can encode two bits per symbol, shown in the diagram with


Gray coding to minimize the bit error rate (BER) — sometimes misperceived

as twice the BER of BPSK.

The mathematical analysis shows that QPSK can be used either to double the

data rate compared with a BPSK system while maintaining the same

bandwidth of the signal, or to maintain the data-rate of BPSK but halving the

bandwidth needed. In this latter case, the BER of QPSK is exactly the same as

the BER of BPSK, and believing otherwise is a common confusion when

considering or describing QPSK.

Given that radio communication channels are allocated by agencies such as

the Federal Communications Commission with a prescribed (maximum)

bandwidth, the advantage of QPSK over BPSK becomes evident: QPSK

transmits twice the data rate in a given bandwidth compared to BPSK - at the

same BER. The engineering penalty that is paid is that QPSK transmitters and

receivers are more complicated than the ones for BPSK. However, with

modern electronics technology, the penalty in cost is very moderate.

As with BPSK, there are phase ambiguity problems at the receiving end, and

differentially encoded QPSK is often used in practice.

IMPLEMENTATION

The implementation of QPSK is more general than that of BPSK and also

indicates the implementation of higher-order PSK. Writing the symbols in the

constellation diagram in terms of the sine and cosine waves used to transmit

them:

s_n(t) = √(2E_s/T_s) cos(2πf_c t + (2n − 1)π/4),  n = 1, 2, 3, 4.

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed.

This results in a two-dimensional signal space with unit basis functions

φ_1(t) = √(2/T_s) cos(2πf_c t),  φ_2(t) = √(2/T_s) sin(2πf_c t).

The first basis function is used as the in-phase component of the signal and

the second as the quadrature component of the signal.

Hence, the signal constellation consists of the four signal-space points

(± √(E_s/2), ± √(E_s/2)).

The factors of 1/2 indicate that the total power is split equally between the

two carriers.

Comparing these basis functions with that for BPSK shows clearly how QPSK

can be viewed as two independent BPSK signals. Note that the signal-space

points for BPSK do not need to split the symbol (bit) energy over the two

carriers in the scheme shown in the BPSK constellation diagram.
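The view of QPSK as two independent BPSK signals can be sketched as a Gray-coded modulator and a pair of per-quadrature sign slicers (the particular bit-to-quadrant assignment below is illustrative, not the only valid one):

```python
from math import sqrt

# Gray mapping: adjacent constellation points differ in exactly one bit.
GRAY_MAP = {(0, 0): (+1, +1), (0, 1): (-1, +1),
            (1, 1): (-1, -1), (1, 0): (+1, -1)}
INV_MAP = {v: k for k, v in GRAY_MAP.items()}

def qpsk_modulate(bits):
    """Map bit pairs to unit-energy complex symbols (I + jQ)/sqrt(2)."""
    return [complex(*GRAY_MAP[(bits[i], bits[i + 1])]) / sqrt(2)
            for i in range(0, len(bits), 2)]

def qpsk_demodulate(symbols):
    """Two independent BPSK decisions: slice I and Q by their signs."""
    bits = []
    for s in symbols:
        signs = (1 if s.real > 0 else -1, 1 if s.imag > 0 else -1)
        bits.extend(INV_MAP[signs])
    return bits
```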

QPSK systems can be implemented in a number of ways. An illustration of the

major components of the transmitter and receiver structure are shown

below.


Conceptual transmitter structure for QPSK. The binary data stream is split

into the in-phase and quadrature-phase components. These are then

separately modulated onto two orthogonal basis functions. In this

implementation, two sinusoids are used. Afterwards, the two signals are

superimposed, and the resulting signal is the QPSK signal. Note the use of

polar non-return-to-zero encoding. These encoders can be placed before the
binary data source, but have been placed after to illustrate the conceptual

difference between digital and analog signals involved with digital

modulation.

Receiver structure for QPSK. The matched filters can be replaced with

correlators. Each detection device uses a reference threshold value to

determine whether a 1 or 0 is detected.

BIT ERROR RATE

Although QPSK can be viewed as a quaternary modulation, it is easier to see

it as two independently modulated quadrature carriers. With this

interpretation, the even (or odd) bits are used to modulate the in-phase

component of the carrier, while the odd (or even) bits are used to modulate

the quadrature-phase component of the carrier. BPSK is used on both

carriers and they can be independently demodulated.

As a result, the probability of bit-error for QPSK is the same as for BPSK:

P_b = Q(√(2E_b/N_0)).

However, in order to achieve the same bit-error probability as BPSK, QPSK

uses twice the power (since two bits are transmitted simultaneously).

The symbol error rate is given by:

P_s = 1 − (1 − P_b)² = 2Q(√(E_s/N_0)) − [Q(√(E_s/N_0))]².

If the signal-to-noise ratio is high (as is necessary for practical QPSK systems)

the probability of symbol error may be approximated:

P_s ≈ 2Q(√(E_s/N_0)).

QPSK SIGNAL IN THE TIME DOMAIN

The modulated signal is shown below for a short segment of a random binary

data-stream. The two carrier waves are a cosine wave and a sine wave, as

indicated by the signal-space analysis above. Here, the odd-numbered bits

have been assigned to the in-phase component and the even-numbered bits

to the quadrature component (taking the first bit as number 1). The total

signal — the sum of the two components — is shown at the bottom. Jumps in

phase can be seen as the PSK changes the phase on each component at the

start of each bit-period. The topmost waveform alone matches the

description given for BPSK above.


Timing diagram for QPSK. The binary data stream is shown beneath the time

axis. The two signal components with their bit assignments are shown at the
top and the total, combined signal at the bottom. Note the abrupt changes in

phase at some of the bit-period boundaries.

The binary data that is conveyed by this waveform is: 1 1 0 0 0 1 1 0.

The odd bits, highlighted here, contribute to the in-phase component:

1 1 0 0 0 1 1 0

The even bits, highlighted here, contribute to the quadrature-phase

component: 1 1 0 0 0 1 1 0

VARIANTS

OFFSET QPSK (OQPSK)

Signal doesn't cross zero, because only one bit of the symbol is changed at a

time

Offset quadrature phase-shift keying (OQPSK) is a variant of phase-shift keying

modulation using 4 different values of the phase to transmit. It is sometimes

called Staggered quadrature phase-shift keying (SQPSK).

Difference of the phase between QPSK and OQPSK

Taking four values of the phase (two bits) at a time to construct a QPSK

symbol can allow the phase of the signal to jump by as much as 180° at a

time. When the signal is low-pass filtered (as is typical in a transmitter),

these phase-shifts result in large amplitude fluctuations, an undesirable

quality in communication systems. By offsetting the timing of the odd and

even bits by one bit-period, or half a symbol-period, the in-phase and

quadrature components will never change at the same time. In the

constellation diagram shown on the right, it can be seen that this will limit

the phase-shift to no more than 90° at a time. This yields much lower

amplitude fluctuations than non-offset QPSK and is sometimes preferred in

practice.
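The bound on the phase-step can be demonstrated numerically. The sketch below (helper names are ours) builds a plain-QPSK path, where both components may flip together, and an OQPSK path, where they alternate:

```python
import cmath
import random

def max_phase_jump_deg(path):
    """Largest absolute phase change, in degrees, along a symbol path."""
    return max(abs(cmath.phase(b / a)) * 180.0 / cmath.pi
               for a, b in zip(path, path[1:]))

rng = random.Random(7)
i_bits = [rng.choice([-1, 1]) for _ in range(64)]
q_bits = [rng.choice([-1, 1]) for _ in range(64)]

# Plain QPSK: both components may flip at once -> jumps of up to 180 deg.
qpsk_path = [complex(i, q) for i, q in zip(i_bits, q_bits)]

# OQPSK: Q is delayed half a symbol, so only one component changes per
# transition, bounding every jump to 90 deg.
oqpsk_path = [complex(i_bits[0], q_bits[0])]
for k in range(1, 64):
    oqpsk_path.append(complex(i_bits[k], q_bits[k - 1]))  # I updates first
    oqpsk_path.append(complex(i_bits[k], q_bits[k]))      # then Q, half a period later
```

Because each OQPSK transition changes exactly one of the two coordinates, every jump is 0° or 90°, never more.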

The picture on the right shows the difference in the behavior of the phase

between ordinary QPSK and OQPSK. It can be seen that in the first plot the


phase can change by 180° at once, while in OQPSK the changes are never

greater than 90°.

The modulated signal is shown below for a short segment of a random binary

data-stream. Note the half symbol-period offset between the two component

waves. The sudden phase-shifts occur about twice as often as for QPSK (since

the signals no longer change together), but they are less severe. In other

words, the magnitude of jumps is smaller in OQPSK when compared to QPSK.

Timing diagram for offset-QPSK. The binary data stream is shown beneath

the time axis. The two signal components with their bit assignments are
shown at the top and the total, combined signal at the bottom. Note the
half-period offset between the two signal components.

Π/4–QPSK

Dual constellation diagram for π/4-QPSK. This shows the two separate
constellations with identical Gray coding but rotated by 45° with respect to

each other.

This final variant of QPSK uses two identical constellations which are rotated

by 45° (π / 4 radians, hence the name) with respect to one another. Usually,

either the even or odd symbols are used to select points from one of the

constellations and the other symbols select points from the other

constellation. This also reduces the phase-shifts from a maximum of 180°, but

only to a maximum of 135° and so the amplitude fluctuations of π / 4–QPSK

are between OQPSK and non-offset QPSK.

One property this modulation scheme possesses is that if the modulated

signal is represented in the complex domain, it does not have any paths

through the origin. In other words, the signal does not pass through the

origin. This lowers the dynamic range of fluctuations in the signal, which is

desirable when engineering communications signals.

On the other hand, π / 4–QPSK lends itself to easy demodulation and has

been adopted for use in, for example, TDMA cellular telephone systems.

The modulated signal is shown below for a short segment of a random binary

data-stream. The construction is the same as above for ordinary QPSK.

Successive symbols are taken from the two constellations shown in the

diagram. Thus, the first symbol (1 1) is taken from the 'blue' constellation

and the second symbol (0 0) is taken from the 'green' constellation. Note that

magnitudes of the two component waves change as they switch between

constellations, but the total signal's magnitude remains constant. The phase-

shifts are between those of the two previous timing-diagrams.


Timing diagram for π/4-QPSK. The binary data stream is shown beneath the
time axis. The two signal components with their bit assignments are shown at
the top and the total, combined signal at the bottom. Note that successive

symbols are taken alternately from the two constellations, starting with the

'blue' one.

SOQPSK

The license-free shaped-offset QPSK (SOQPSK) is interoperable with Feher-

patented QPSK (FQPSK), in the sense that an integrate-and-dump offset

QPSK detector produces the same output no matter which kind of

transmitter is used[1].

These modulations carefully shape the I and Q waveforms such that they

change very smoothly, and the signal stays constant-amplitude even during

signal transitions. (Rather than traveling instantly from one symbol to

another, or even linearly, it travels smoothly around the constant-amplitude

circle from one symbol to the next.)

The standard description of SOQPSK-TG involves ternary symbols.

DPQPSK

Dual-polarization quadrature phase shift keying (DPQPSK) or dual-

polarization QPSK - involves the polarization multiplexing of two different

QPSK signals, thus improving the spectral efficiency by a factor of 2. This is a

cost-effective alternative to using 16-PSK instead of QPSK to double the
spectral efficiency.

HIGHER-ORDER PSK

Constellation diagram for 8-PSK with Gray coding.

Any number of phases may be used to construct a PSK constellation but 8-

PSK is usually the highest order PSK constellation deployed. With more than

8 phases, the error-rate becomes too high and there are better, though more

complex, modulations available such as quadrature amplitude modulation

(QAM). Although any number of phases may be used, the fact that the

constellation must usually deal with binary data means that the number of

symbols is usually a power of 2 — this allows an equal number of bits-per-

symbol.

BIT ERROR RATE

For the general M-PSK there is no simple expression for the symbol-error

probability if M > 4. It can only be obtained by integrating the probability
density function of the received phase θ_r:

P_s = 1 − ∫_{−π/M}^{π/M} p_{θ_r}(θ_r) dθ_r,

where θ_r is the phase of the received signal r = √E_s e^{jθ} + n and the
in-phase and quadrature components of the noise n are jointly Gaussian
random variables.

Bit-error rate curves for BPSK, QPSK, 8-PSK and 16-PSK, AWGN channel.

This may be approximated for high M and high Eb / N0 by:

P_s ≈ 2Q(√(2E_s/N_0) · sin(π/M)).

The bit-error probability for M-PSK can only be determined exactly once the

bit-mapping is known. However, when Gray coding is used, the most

probable error from one symbol to the next produces only a single bit-error

and

P_b ≈ P_s / k,  where k = log2(M) is the number of bits per symbol.

(Using Gray coding allows us to approximate the Lee distance of the errors as

the Hamming distance of the errors in the decoded bitstream, which is easier

to implement in hardware.)
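Both approximations can be combined into a small helper (function names are ours; these are high-SNR approximations, not exact values):

```python
from math import erfc, log2, pi, sin, sqrt

def q(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

def mpsk_symbol_error(m: int, es_n0: float) -> float:
    """High-SNR approximation: Ps ~= 2 Q(sqrt(2 Es/N0) * sin(pi/M))."""
    return 2.0 * q(sqrt(2.0 * es_n0) * sin(pi / m))

def mpsk_bit_error(m: int, es_n0: float) -> float:
    """With Gray coding the dominant symbol errors cost one bit:
    Pb ~= Ps / log2(M)."""
    return mpsk_symbol_error(m, es_n0) / log2(m)
```

At a fixed Es/N0, raising M shrinks the angular separation sin(π/M) and so raises the error rate, matching the curves described below.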

The graph on the left compares the bit-error rates of BPSK, QPSK (which are

the same, as noted above), 8-PSK and 16-PSK. It is seen that higher-order

modulations exhibit higher error-rates; in exchange however they deliver a

higher raw data-rate.

Bounds on the error rates of various digital modulation schemes can be

computed with application of the union bound to the signal constellation.

DIFFERENTIAL PHASE-SHIFT KEYING (DPSK)


DIFFERENTIAL ENCODING


Differential phase shift keying (DPSK) is a common form of phase modulation

that conveys data by changing the phase of the carrier wave. As mentioned

for BPSK and QPSK there is an ambiguity of phase if the constellation is

rotated by some effect in the communications channel through which the


signal passes. This problem can be overcome by using the data to change

rather than set the phase.

For example, in differentially-encoded BPSK a binary '1' may be transmitted

by adding 180° to the current phase and a binary '0' by adding 0° to the

current phase. Another variant of DPSK is Symmetric Differential Phase Shift
Keying (SDPSK), where the encoding is +90° for a '1' and −90° for a '0'.

In differentially-encoded QPSK (DQPSK), the phase-shifts are 0°, 90°, 180°, -

90° corresponding to data '00', '01', '11', '10'. This kind of encoding may be

demodulated in the same way as for non-differential PSK but the phase

ambiguities can be ignored. Thus, each received symbol is demodulated to

one of the M points in the constellation and a comparator then computes the

difference in phase between this received signal and the preceding one. The

difference encodes the data as described above. Symmetric Differential

Quadrature Phase Shift Keying (SDQPSK) is like DQPSK, but encoding is

symmetric, using phase shift values of -135°, -45°, +45° and +135°.
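The DQPSK mapping described above can be sketched as a phase accumulator at the transmitter and a phase-difference decoder at the receiver (helper names are ours):

```python
import cmath

# Dibit -> phase increment (degrees), matching the DQPSK map in the text.
PHASE_STEP = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): -90}
STEP_TO_DIBIT = {0: (0, 0), 90: (0, 1), 180: (1, 1), 270: (1, 0)}

def dqpsk_encode(bits):
    """Each dibit rotates the carrier phase by its step (cumulative)."""
    phase, symbols = 0, []
    for i in range(0, len(bits), 2):
        phase = (phase + PHASE_STEP[(bits[i], bits[i + 1])]) % 360
        symbols.append(cmath.exp(1j * cmath.pi * phase / 180.0))
    return symbols

def dqpsk_decode(symbols):
    """Recover dibits from successive phase differences; only the agreed
    starting phase (0 degrees here) is needed, no carrier recovery."""
    bits, prev = [], 1 + 0j
    for s in symbols:
        step = round(cmath.phase(s / prev) * 180.0 / cmath.pi) % 360
        bits.extend(STEP_TO_DIBIT[step])
        prev = s
    return bits
```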

The modulated signal is shown below for both DBPSK and DQPSK as

described above. In the figure, it is assumed that the signal starts with zero

phase, and so there is a phase shift in both signals at t = 0.

Timing diagram for DBPSK and DQPSK. The binary data stream is above the

DBPSK signal. The individual bits of the DBPSK signal are grouped into pairs

for the DQPSK signal, which only changes every Ts = 2Tb.

Analysis shows that differential encoding approximately doubles the error

rate compared to ordinary M-PSK but this may be overcome by only a small

increase in Eb / N0. Furthermore, this analysis (and the graphical results

below) are based on a system in which the only corruption is additive white

Gaussian noise(AWGN). However, there will also be a physical channel

between the transmitter and receiver in the communication system. This

channel will, in general, introduce an unknown phase-shift to the PSK signal;

in these cases the differential schemes can yield a better error-rate than the

ordinary schemes which rely on precise phase information.

DEMODULATION

BER comparison between DBPSK, DQPSK and their non-differential forms

using Gray coding and operating in white noise.

For a signal that has been differentially encoded, there is an obvious

alternative method of demodulation. Instead of demodulating as usual and

ignoring carrier-phase ambiguity, the phase between two successive received

symbols is compared and used to determine what the data must have been.

When differential encoding is used in this manner, the scheme is known as

differential phase-shift keying (DPSK). Note that this is subtly different to just

differentially-encoded PSK since, upon reception, the received symbols are

not decoded one-by-one to constellation points but are instead compared

directly to one another.


Call the received symbol in the kth timeslot rk and let it have phase φk. Assume without loss of generality that the phase of the carrier wave is zero.

Denote the AWGN term as nk. Then

r_k = √E_s e^{jφ_k} + n_k.

The decision variable for the k − 1th symbol and the kth symbol is the phase

difference between rk and rk − 1. That is, if rk is projected onto rk − 1, the

decision is taken on the phase of the resultant complex number:

r_k r*_{k−1} = E_s e^{j(φ_k − φ_{k−1})} + √E_s e^{jφ_k} n*_{k−1} + √E_s e^{−jφ_{k−1}} n_k + n_k n*_{k−1},

where superscript * denotes complex conjugation. In the absence of noise,
the phase of this is φ_k − φ_{k−1}, the phase-shift between the two received
signals, which can be used to determine the data transmitted.

The probability of error for DPSK is difficult to calculate in general, but, in the

case of DBPSK it is:

P_b = (1/2) e^{−E_b/N_0},

which, when numerically evaluated, is only slightly worse than ordinary

BPSK, particularly at higher Eb / N0 values.
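The two expressions can be compared numerically; the sketch below (function names are ours) confirms that DBPSK is the worse of the two at any given Eb/N0:

```python
from math import erfc, exp, sqrt

def bpsk_pb(eb_n0: float) -> float:
    """Coherent BPSK: Pb = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(eb_n0))

def dbpsk_pb(eb_n0: float) -> float:
    """Non-coherent DBPSK: Pb = 0.5 * exp(-Eb/N0)."""
    return 0.5 * exp(-eb_n0)
```

Since Q(x) <= 0.5 exp(-x^2/2), DBPSK's error rate upper-bounds BPSK's at every Eb/N0, with the gap shrinking as Eb/N0 grows.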

Using DPSK avoids the need for possibly complex carrier-recovery schemes

to provide an accurate phase estimate and can be an attractive alternative to

ordinary PSK.

In optical communications, the data can be modulated onto the phase of a

laser in a differential way. The modulator consists of a laser which emits a
continuous wave and a Mach-Zehnder modulator which receives electrical

binary data. For the case of BPSK for example, the laser transmits the field

unchanged for binary '1', and with reverse polarity for '0'. The demodulator

consists of a delay line interferometer which delays one bit, so two bits can

be compared at one time. In further processing, a photo diode is used to

transform the optical field into an electric current, so the information is

changed back into its original state.

The bit-error rates of DBPSK and DQPSK are compared to their non-

differential counterparts in the graph to the right. The loss for using DBPSK is

small enough compared to the complexity reduction that it is often used in

communications systems that would otherwise use BPSK. For DQPSK though,

the loss in performance compared to ordinary QPSK is larger and the system

designer must balance this against the reduction in complexity.

EXAMPLE: DIFFERENTIALLY-ENCODED BPSK

Differential encoding/decoding system diagram.

At the kth time-slot call the bit to be modulated bk, the differentially-encoded

bit ek and the resulting modulated signal mk(t). Assume that the

constellation diagram positions the symbols at ±1 (which is BPSK). The

differential encoder produces:

e_k = e_{k−1} ⊕ b_k,

where ⊕ indicates binary or modulo-2 addition.


BER comparison between BPSK and differentially-encoded BPSK with Gray
coding operating in white noise.

So ek only changes state (from binary '0' to binary '1' or from binary '1' to

binary '0') if bk is a binary '1'. Otherwise it remains in its previous state. This

is the description of differentially-encoded BPSK given above.

The received signal is demodulated to yield ek = ±1 and then the differential

decoder reverses the encoding procedure and produces:

b_k = e_k ⊕ e_{k−1},

since binary subtraction is the same as binary addition.

Therefore, bk = 1 if ek and ek − 1 differ and bk = 0 if they are the same.

Hence, if both ek and ek − 1 are inverted, bk will still be decoded correctly.

Thus, the 180° phase ambiguity does not matter.
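The encoder and decoder of this example reduce to two XOR one-liners; the sketch below (names are ours) also checks the ambiguity-immunity claim by inverting every encoded symbol:

```python
def diff_encode(bits, e0=0):
    """e_k = e_{k-1} XOR b_k; e0 is the encoder's initial state."""
    encoded = [e0]
    for b in bits:
        encoded.append(encoded[-1] ^ b)
    return encoded

def diff_decode(encoded):
    """b_k = e_k XOR e_{k-1}: each transition decodes to a 1,
    each repetition to a 0."""
    return [a ^ b for a, b in zip(encoded[1:], encoded[:-1])]
```

Inverting all e_k (the effect of a 180° channel rotation) leaves every XOR of adjacent symbols unchanged, so the decoded bits are identical.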

Differential schemes for other PSK modulations may be devised along similar

lines. The waveforms for DPSK are the same as for differentially-encoded PSK

given above since the only change between the two schemes is at the

receiver.

The BER curve for this example is compared to ordinary BPSK on the right.

As mentioned above, whilst the error-rate is approximately doubled, the

increase needed in Eb / N0 to overcome this is small. The increase in Eb / N0 required to overcome differential modulation in coded systems, however,

is larger, typically about 3 dB. The performance degradation is a result of
noncoherent transmission; in this case it refers to the fact that tracking of

the phase is completely ignored.

CHANNEL CAPACITY

Given a fixed bandwidth, channel capacity vs. SNR for some common
modulation schemes.

Like all M-ary modulation schemes with M = 2^b symbols, when given

exclusive access to a fixed bandwidth, the channel capacity of any phase shift

keying modulation scheme rises to a maximum of b bits per symbol as the

SNR increases.

TIME-DIVISION MULTIPLEXING



Time-division multiplexing (TDM) is a type of digital or (rarely) analog

multiplexing in which two or more signals or bit streams are transferred

apparently simultaneously as sub-channels in one communication channel,

but are physically taking turns on the channel. The time domain is divided

into several recurrent timeslots of fixed length, one for each sub-channel. A

sample byte or data block of sub-channel 1 is transmitted during timeslot 1,

sub-channel 2 during timeslot 2, etc. One TDM frame consists of one timeslot

per sub-channel plus a synchronization channel and sometimes error

correction channel before the synchronization. After the last sub-channel,

error correction, and synchronization, the cycle starts all over again with a

new frame, starting with the second sample, byte or data block from sub-

channel 1, etc.
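The frame structure described above can be sketched as a round-robin interleaver (the sync word and helper names are placeholders, not any standard's framing):

```python
SYNC = "S"  # placeholder sync word; real framing words are standard-specific

def tdm_mux(subchannels):
    """Build frames: one sample per sub-channel, then a sync slot."""
    return [list(samples) + [SYNC] for samples in zip(*subchannels)]

def tdm_demux(frames, n):
    """Reverse the interleaving using each slot's fixed frame position."""
    out = [[] for _ in range(n)]
    for frame in frames:
        assert frame[-1] == SYNC  # frame-alignment check
        for i in range(n):
            out[i].append(frame[i])
    return out
```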

CONTENTS

1 Application examples

2 TDM versus packet mode communication

3 History

o 3.1 Transmission using Time Division Multiplexing (TDM)

4 Synchronous time division multiplexing (Sync TDM)

5 Synchronous digital hierarchy (SDH)

6 Statistical time-division multiplexing (Stat TDM)

7 See also

8 Notes

9 References

APPLICATION EXAMPLES

The plesiochronous digital hierarchy (PDH) system, also known as

the PCM system, for digital transmission of several telephone calls

over the same four-wire copper cable (T-carrier or E-carrier) or fiber

cable in the circuit switched digital telephone network

The SDH and synchronous optical networking (SONET) network

transmission standards, that have surpassed PDH.

The RIFF (WAV) audio standard interleaves left and right stereo

signals on a per-sample basis

The left-right channel splitting in use for stereoscopic liquid crystal

shutter glasses

TDM can be further extended into the time division multiple access (TDMA)

scheme, where several stations connected to the same physical medium, for

example sharing the same frequency channel, can communicate. Application

examples include:

The GSM telephone system

The Tactical Data Links Link 16 and Link 22

TDM VERSUS PACKET MODE COMMUNICATION

In its primary form, TDM is used for circuit mode communication with a fixed

number of channels and constant bandwidth per channel.

Bandwidth Reservation distinguishes time-division multiplexing from

statistical multiplexing such as packet mode communication (also known as

statistical time-domain multiplexing, see below); i.e., the time-slots are

recurrent in a fixed order and pre-allocated to the channels, rather than

scheduled on a packet-by-packet basis. Statistical time-domain multiplexing

resembles, but should not be considered the same as, time-division
multiplexing.


In dynamic TDMA, a scheduling algorithm dynamically reserves a variable

number of timeslots in each frame to variable bit-rate data streams, based on

the traffic demand of each data stream. Dynamic TDMA is used in

HIPERLAN/2;

Dynamic synchronous Transfer Mode;

IEEE 802.16a.

HISTORY

Time-division multiplexing was first developed in telegraphy; see

multiplexing in telegraphy: Émile Baudot developed a time-multiplexing

system of multiple Hughes machines in the 1870s.

For the SIGSALY encryptor of 1943, see PCM.

In 1962, engineers from Bell Labs developed the first D1 Channel Banks,

which combined 24 digitised voice calls over a 4-wire copper trunk between

Bell central office analogue switches. A channel bank sliced a 1.544 Mbit/s

digital signal into 8,000 separate frames, each composed of 24 contiguous

bytes. Each byte represented a single telephone call encoded into a constant

bit rate signal of 64 kbit/s. Channel banks used a byte's fixed position

(temporal alignment) in the frame to determine which call it belonged to.[1]

TRANSMISSION USING TIME DIVISION MULTIPLEXING (TDM)

In circuit switched networks such as the public switched telephone network

(PSTN) there exists the need to transmit multiple subscribers’ calls along the

same transmission medium.[2] To accomplish this, network designers make

use of TDM. TDM allows switches to create channels, also known as

tributaries, within a transmission stream.[2] A standard DS0 voice signal has a

data bit rate of 64 kbit/s, determined using Nyquist’s sampling criterion.[2][3]

TDM takes frames of the voice signals and multiplexes them into a TDM

frame which runs at a higher bandwidth. So if the TDM frame consists of n

voice frames, the bandwidth will be n × 64 kbit/s.[2]

Each voice sample timeslot in the TDM frame is called a channel .[2] In

European systems, TDM frames contain 30 digital voice channels, and in

American systems, they contain 24 channels.[2] Both standards also contain

extra bits (or bit timeslots) for signalling (see Signaling System 7) and

synchronisation bits.[2]

Multiplexing more than 24 or 30 digital voice channels is called higher order

multiplexing.[2] Higher order multiplexing is accomplished by multiplexing the

standard TDM frames.[2] For example, a European 120 channel TDM frame is

formed by multiplexing four standard 30 channel TDM frames.[2] At each

higher order multiplex, four TDM frames from the immediate lower order are

combined, creating multiplexes with a bandwidth of n × 64 kbit/s, where n =

120, 480, 1920, etc.[2]
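The ×4 hierarchy described above is easy to check numerically. The sketch below (purely illustrative; framing and signalling bits are ignored) reproduces the n = 120, 480, 1920 progression starting from the standard 30-channel European frame:

```python
# Sketch: bandwidth of the European TDM multiplexing hierarchy.
# Each higher order combines four frames of the order below; every
# voice channel is a 64 kbit/s DS0. Framing and signalling overhead
# are ignored in this simplification.

DS0_KBITS = 64  # one digitised voice channel

def order_channels(levels):
    """Voice channels after `levels` stages of x4 multiplexing
    on top of the standard 30-channel European frame."""
    channels = 30
    for _ in range(levels):
        channels *= 4
    return channels

for levels in range(4):
    n = order_channels(levels)
    print(f"{n:5d} channels -> {n * DS0_KBITS / 1000:.2f} Mbit/s payload")
```

Running this gives the 30, 120, 480, 1920 channel counts quoted above.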

SYNCHRONOUS TIME DIVISION MULTIPLEXING (SYNC TDM)

There are three types of synchronous TDM: T1, SONET/SDH (see below), and
ISDN.[4]

SYNCHRONOUS DIGITAL HIERARCHY (SDH)

Plesiochronous digital hierarchy (PDH) was developed as a standard for

multiplexing higher order frames.[2][3] PDH created larger numbers of

channels by multiplexing the standard European 30-channel TDM frames.[2]

This solution worked for a while; however PDH suffered from several

inherent drawbacks which ultimately resulted in the development of the


Synchronous Digital Hierarchy (SDH). The requirements which drove the

development of SDH were these:[2][3]

Be synchronous – All clocks in the system must align with a reference

clock.

Be service-oriented – SDH must route traffic from End Exchange to

End Exchange without worrying about exchanges in between, where

the bandwidth can be reserved at a fixed level for a fixed period of

time.

Allow frames of any size to be removed or inserted into an SDH frame

of any size.

Easily manageable with the capability of transferring management

data across links.

Provide high levels of recovery from faults.

Provide high data rates by multiplexing any size frame, limited only

by technology.

Give reduced bit rate errors.

SDH has become the primary transmission protocol in most PSTN networks.[2][3] It was developed to allow streams of 1.544 Mbit/s and above to be

multiplexed, in order to create larger SDH frames known as Synchronous

Transport Modules (STM).[2] The STM-1 frame consists of smaller streams

that are multiplexed to create a 155.52 Mbit/s frame.[2][3] SDH can also

multiplex packet based frames e.g. Ethernet, PPP and ATM.[2]

While SDH is considered to be a transmission protocol (Layer 1 in the OSI

Reference Model), it also performs some switching functions, as stated in the

third bullet point requirement listed above.[2] The most common SDH

Networking functions are these:

SDH Crossconnect – The SDH Crossconnect is the SDH version of a

Time-Space-Time crosspoint switch. It connects any channel on any

of its inputs to any channel on any of its outputs. The SDH

Crossconnect is used in Transit Exchanges, where all inputs and

outputs are connected to other exchanges.[2]

SDH Add-Drop Multiplexer – The SDH Add-Drop Multiplexer (ADM)

can add or remove any multiplexed frame down to 1.544 Mbit/s. Below

this level, standard TDM can be performed. SDH ADMs can also

perform the task of an SDH Crossconnect and are used in End

Exchanges where the channels from subscribers are connected to the

core PSTN network.[2]

SDH network functions are connected using high-speed optic fibre. Optic

fibre uses light pulses to transmit data and is therefore extremely fast.[2]

Modern optic fibre transmission makes use of Wavelength Division

Multiplexing (WDM) where signals transmitted across the fibre are

transmitted at different wavelengths, creating additional channels for

transmission.[2][3] This increases the speed and capacity of the link, which in

turn reduces both unit and total costs.[2]

STATISTICAL TIME-DIVISION MULTIPLEXING (STAT TDM)

STDM is an advanced version of TDM in which both the address of the

terminal and the data itself are transmitted together for better routing. Using

STDM allows bandwidth to be shared over a single line. Many college and corporate

campuses use this type of TDM to logically distribute bandwidth.

If there is one 10 Mbit/s line coming into the building, STDM can be used to

provide 178 terminals with a dedicated 56 kbit/s connection (178 × 56 kbit/s ≈

9.97 Mbit/s). A more common use, however, is to grant bandwidth only when

it is needed. STDM does not reserve a time slot for each terminal;


rather, it assigns a slot only when a terminal requires data to be sent or
received.
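The slot-on-demand behaviour described above can be sketched as follows. The function and terminal names are hypothetical, and real statistical multiplexers add framing formats and flow control that are omitted here:

```python
# Sketch: statistical TDM slot assignment. Unlike fixed TDM, a slot
# is granted only to terminals that currently have data queued, and
# each payload carries the terminal's address so the far end can
# demultiplex. All names here are illustrative.

def stdm_frame(queues, slots_per_frame):
    """Build one frame as (address, payload) pairs for active terminals."""
    frame = []
    # Serve only terminals that actually have traffic waiting.
    active = [t for t, q in queues.items() if q]
    for terminal in active[:slots_per_frame]:
        frame.append((terminal, queues[terminal].pop(0)))
    return frame

queues = {"A": [b"a1", b"a2"], "B": [], "C": [b"c1"]}
print(stdm_frame(queues, slots_per_frame=4))  # only A and C get slots
```

The idle terminal B consumes no slot at all, which is exactly the bandwidth saving STDM provides over fixed-assignment TDM.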

This is also called asynchronous time-division multiplexing[4](ATDM), in an

alternative nomenclature in which "STDM" or "synchronous time division

multiplexing" designates the older method that uses fixed time slots.

PULSE-CODE MODULATION


Pulse-code modulation (PCM) is a method used to digitally represent

sampled analog signals, which was invented by Alec Reeves in 1937. It is the

standard form for digital audio in computers and various Blu-ray, Compact

Disc and DVD formats, as well as other uses such as digital telephone

systems. A PCM stream is a digital representation of an analog signal, in

which the magnitude of the analogue signal is sampled regularly at uniform

intervals, with each sample being quantized to the nearest value within a

range of digital steps.

PCM streams have two basic properties that determine their fidelity to the

original analog signal: the sampling rate, which is the number of times per

second that samples are taken; and the bit depth, which determines the

number of possible digital values that each sample can take.


MODULATION

Sampling and quantization of a signal (red) for 4-bit PCM

In the diagram, a sine wave (red curve) is sampled and quantized for pulse

code modulation. The sine wave is sampled at regular intervals, shown as

ticks on the x-axis. For each sample, one of the available values (ticks on the

y-axis) is chosen by some algorithm. This produces a fully discrete

representation of the input signal (shaded area) that can be easily encoded as

digital data for storage or manipulation. For the sine wave example at right,


we can verify that the quantized values at the sampling moments are 7, 9, 11,

12, 13, 14, 14, 15, 15, 15, 14, etc. Encoding these values as binary numbers

would result in the following set of nibbles: 0111

(2³×0 + 2²×1 + 2¹×1 + 2⁰×1 = 0+4+2+1 = 7), 1001, 1011, 1100, 1101, 1110, 1110,

1111, 1111, 1111, 1110, etc. These digital values could then be further

processed or analyzed by a purpose-specific digital signal processor or

general purpose DSP. Several Pulse Code Modulation streams could also be

multiplexed into a larger aggregate data stream, generally for transmission of

multiple streams over a single physical link. One technique is called time-

division multiplexing, or TDM, and is widely used, notably in the modern

public telephone system. Another technique is called Frequency-division

multiplexing, where the signal is assigned a frequency in a spectrum, and

transmitted along with other signals inside that spectrum. Currently, TDM is

much more widely used than FDM because of its natural compatibility with

digital communication, and generally lower bandwidth requirements.

There are many ways to implement a real device that performs this task. In

real systems, such a device is commonly implemented on a single integrated

circuit that lacks only the clock necessary for sampling, and is generally

referred to as an ADC (Analog-to-Digital converter). These devices will

produce on their output a binary representation of the input whenever they

are triggered by a clock signal, which would then be read by a processor of

some sort.
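The sampling-and-quantization procedure described above can be sketched in a few lines. The amplitude, phase and sample count below are arbitrary, so the codes produced will not match the figure's values exactly; only the method is the same:

```python
import math

# Sketch: 4-bit PCM of a sine wave, mirroring the figure's idea.
# The exact sample values depend on amplitude, offset and sampling
# phase, so this reproduces the method rather than the figure.

BITS = 4
LEVELS = 2 ** BITS  # 16 quantization levels, values 0..15

def quantize(x):
    """Map x in [-1, 1] to the nearest of 16 levels."""
    level = round((x + 1) / 2 * (LEVELS - 1))
    return max(0, min(LEVELS - 1, level))

samples = [math.sin(2 * math.pi * n / 20) for n in range(10)]
codes = [quantize(s) for s in samples]
print([f"{c:04b}" for c in codes])  # each sample becomes one nibble
```

Each printed 4-bit string is one nibble of the PCM stream, just like the 0111, 1001, 1011, … sequence read off the figure.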

DEMODULATION

To produce output from the sampled data, the procedure of modulation is

applied in reverse. After each sampling period has passed, the next value is

read and a signal is shifted to the new value. As a result of these transitions,

the signal will have a significant amount of high-frequency energy. To smooth

out the signal and remove these undesirable aliasing frequencies, the signal

would be passed through analog filters that suppress energy outside the

expected frequency range (that is, greater than the Nyquist frequency fs / 2).

Some systems use digital filtering to remove some of the aliasing, converting

the signal from digital to analog at a higher sample rate such that the analog

filter required for anti-aliasing is much simpler. In some systems, no explicit

filtering is done at all; as it's impossible for any system to reproduce a signal

with infinite bandwidth, inherent losses in the system compensate for the

artifacts — or the system simply does not require much precision. The

sampling theorem suggests that practical PCM devices, provided a sampling

frequency that is sufficiently greater than that of the input signal, can operate

without introducing significant distortions within their designed frequency

bands.

The electronics involved in producing an accurate analog signal from the

discrete data are similar to those used for generating the digital signal. These

devices are DACs (digital-to-analog converters), and operate similarly to

ADCs. They produce on their output a voltage or current (depending on type)

that represents the value presented on their inputs. This output would then

generally be filtered and amplified for use.

LIMITATIONS

There are two sources of impairment implicit in any PCM system:

Choosing a discrete value near the analog signal for each sample leads

to quantization error, which swings between -q/2 and q/2. In the

ideal case (with a fully linear ADC) it is uniformly distributed over

this interval, with zero mean and variance of q²/12.

Between samples no measurement of the signal is made; the

sampling theorem guarantees non-ambiguous representation and

recovery of the signal only if it has no energy at frequency fs/2 or

higher (one half the sampling frequency, known as the Nyquist


frequency); higher frequencies will generally not be correctly

represented or recovered.
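The q²/12 claim in the first bullet can be checked numerically. This sketch applies a mid-tread uniform quantizer (an assumption; the quantizer type only changes edge behaviour) to random inputs and measures the error statistics:

```python
import random

# Sketch: check numerically that uniform quantization error has
# (approximately) zero mean and variance q^2/12, as stated above.

q = 0.1  # quantization step
random.seed(0)
errors = []
for _ in range(200_000):
    x = random.uniform(-1, 1)
    xq = round(x / q) * q          # mid-tread uniform quantizer
    errors.append(x - xq)          # error lies in [-q/2, q/2]

mean = sum(errors) / len(errors)
var = sum(e * e for e in errors) / len(errors)
print(f"mean ~ {mean:.2e}, var ~ {var:.2e}, q^2/12 = {q*q/12:.2e}")
```

The measured variance comes out within a fraction of a percent of q²/12, confirming the ideal-ADC result quoted above.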

As samples are dependent on time, an accurate clock is required for accurate

reproduction. If either the encoding or decoding clock is not stable, its

frequency drift will directly affect the output quality of the device. A slight

difference between the encoding and decoding clock frequencies is not

generally a major concern; a small constant error is not noticeable. Clock

error does become a major issue if the clock is not stable, however. A drifting

clock, even with a relatively small error, will cause very obvious distortions

in audio and video signals, for example.

Extra information: PCM data from a master with a clock frequency that can

not be influenced requires an exact clock at the decoding side to ensure that

all the data is used in a continuous stream without buffer underrun or buffer

overflow. Any frequency difference will be audible at the output since the

number of samples per time interval can not be correct. The data speed in a

compact disk can be steered by means of a servo that controls the rotation

speed of the disk; here the output clock is the master clock. For all "external

master" systems like DAB the output stream must be decoded with a

regenerated and exact synchronous clock. When the wanted output sample

rate differs from the incoming data stream clock then a sample rate converter

must be inserted in the chain to convert the samples to the new clock

domain.

DIGITIZATION AS PART OF THE PCM PROCESS

In conventional PCM, the analog signal may be processed (e.g., by amplitude

compression) before being digitized. Once the signal is digitized, the PCM

signal is usually subjected to further processing (e.g., digital data

compression).

PCM with linear quantization is known as Linear PCM (LPCM).[1]

Some forms of PCM combine signal processing with coding. Older versions of

these systems applied the processing in the analog domain as part of the A/D

process; newer implementations do so in the digital domain. These simple

techniques have been largely rendered obsolete by modern transform-based

audio compression techniques.

DPCM encodes the PCM values as differences between the current

and the predicted value. An algorithm predicts the next sample based

on the previous samples, and the encoder stores only the difference

between this prediction and the actual value. If the prediction is

reasonable, fewer bits can be used to represent the same information.

For audio, this type of encoding reduces the number of bits required

per sample by about 25% compared to PCM.

Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of

the quantization step, to allow further reduction of the required

bandwidth for a given signal-to-noise ratio.

Delta modulation is a form of DPCM which uses one bit per sample.
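The DPCM idea from the list above can be sketched with the simplest possible predictor, the previous sample. This toy encoder is illustrative only and not any standardised codec; it reuses the sample values 7, 9, 11, … from the earlier sine-wave example:

```python
# Sketch: first-order DPCM, where the "prediction" is simply the
# previous reconstructed sample. Only the (smaller) differences are
# stored; the decoder rebuilds the signal by accumulating them.

def dpcm_encode(samples):
    residuals, prev = [], 0
    for s in samples:
        residuals.append(s - prev)  # difference from prediction
        prev = s
    return residuals

def dpcm_decode(residuals):
    samples, prev = [], 0
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

pcm = [7, 9, 11, 12, 13, 14, 14, 15]
diffs = dpcm_encode(pcm)
print(diffs)                      # [7, 2, 2, 1, 1, 1, 0, 1]
assert dpcm_decode(diffs) == pcm  # lossless round trip
```

After the first sample, the residuals fit in far fewer bits than the raw 4-bit values, which is where DPCM's bit savings come from.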

In telephony, a standard audio signal for a single phone call is encoded as

8,000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital

signal known as DS0. The default signal compression encoding on a DS0 is

either μ-law (mu-law) PCM (North America and Japan) or A-law PCM

(Europe and most of the rest of the world). These are logarithmic

compression systems where a 12 or 13-bit linear PCM sample number is

mapped into an 8-bit value. This system is described by international

standard G.711. An alternative proposal for a floating point representation,

with 5-bit mantissa and 3-bit radix, was abandoned.
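The logarithmic compression described above follows the continuous µ-law curve, which G.711 approximates with piecewise-linear segments. The sketch below shows the companding curve itself, not the exact G.711 8-bit codeword layout:

```python
import math

# Sketch: the continuous mu-law companding curve (mu = 255) that
# G.711 approximates with piecewise-linear segments. This shows the
# compression idea, not the exact bit layout of a G.711 codeword.

MU = 255

def mu_law_compress(x):
    """x in [-1, 1] -> companded value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

for x in (0.01, 0.1, 0.5, 1.0):
    y = mu_law_compress(x)
    print(f"x={x:5.2f} -> y={y:.3f}")  # small inputs get boosted most
```

Quiet signals occupy a disproportionately large share of the output range, which is why an 8-bit companded sample can match the perceived quality of a 12- or 13-bit linear one.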

Where circuit costs are high and loss of voice quality is acceptable, it

sometimes makes sense to compress the voice signal even further. An

ADPCM algorithm is used to map a series of 8-bit µ-law or A-law PCM


samples into a series of 4-bit ADPCM samples. In this way, the capacity of the

line is doubled. The technique is detailed in the G.726 standard.

Later it was found that even further compression was possible and additional

standards were published. Some of these international standards describe

systems and ideas which are covered by privately owned patents and thus

use of these standards requires payments to the patent holders.

Some ADPCM techniques are used in Voice over IP communications.

ENCODING FOR TRANSMISSION

Main article: Line code

Pulse-code modulation can be either return-to-zero (RZ) or non-return-to-

zero (NRZ). For an NRZ system to be synchronized using in-band information,

there must not be long sequences of identical symbols, such as ones or

zeroes. For binary PCM systems, the density of 1-symbols is called ones-

density.[2]

Ones-density is often controlled using precoding techniques such as Run

Length Limited encoding, where the PCM code is expanded into a slightly

longer code with a guaranteed bound on ones-density before modulation into

the channel. In other cases, extra framing bits are added into the stream

which guarantee at least occasional symbol transitions.

Another technique used to control ones-density is the use of a scrambler

polynomial on the raw data which will tend to turn the raw data stream into

a stream that looks pseudo-random, but where the raw stream can be

recovered exactly by reversing the effect of the polynomial. In this case, long

runs of zeroes or ones are still possible on the output, but are considered

unlikely enough to be within normal engineering tolerance.
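A self-synchronizing scrambler of the kind described above can be sketched as follows. The tap positions and register seed are illustrative, not taken from any particular standard:

```python
# Sketch: a self-synchronizing scrambler. Each transmitted bit is
# the data bit XORed with earlier *scrambled* bits, so the receiver
# can undo it using only the received stream. Tap positions and the
# register seed are illustrative.

TAPS = (18, 23)

def _feedback(reg):
    fb = 0
    for t in TAPS:
        fb ^= reg[t - 1]
    return fb

def scramble(bits):
    reg = [1] * max(TAPS)  # nonzero seed so all-zero data still toggles
    out = []
    for b in bits:
        s = b ^ _feedback(reg)
        out.append(s)
        reg = [s] + reg[:-1]  # shift the scrambled bit into the register
    return out

def descramble(bits):
    reg = [1] * max(TAPS)  # receiver starts from the same seed
    out = []
    for b in bits:
        out.append(b ^ _feedback(reg))
        reg = [b] + reg[:-1]  # shift the *received* bit in
    return out

zeros = [0] * 40
line = scramble(zeros)
assert descramble(line) == zeros  # raw stream recovered exactly
print(line)  # the long run of zeros is eventually broken up
```

As the text notes, long runs are still possible on the output; the scrambler only makes them statistically unlikely rather than impossible.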

In other cases, the long term DC value of the modulated signal is important,

as building up a DC offset will tend to bias detector circuits out of their

operating range. In this case special measures are taken to keep a count of

the cumulative DC offset, and to modify the codes if necessary to make the DC

offset always tend back to zero.

Many of these codes are bipolar codes, where the pulses can be positive,

negative or absent. In the typical alternate mark inversion code, non-zero

pulses alternate between being positive and negative. These rules may be

violated to generate special symbols used for framing or other special

purposes.
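The alternate mark inversion rule described above is simple to state in code; this sketch ignores the deliberate violation symbols mentioned at the end of the paragraph:

```python
# Sketch: alternate mark inversion (AMI). Zeros map to no pulse;
# successive ones alternate between +1 and -1 pulses, keeping the
# long-term DC component of the line signal near zero.

def ami_encode(bits):
    out, polarity = [], 1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(polarity)
            polarity = -polarity  # alternate the mark polarity
    return out

line = ami_encode([1, 0, 1, 1, 0, 0, 1])
print(line)       # [1, 0, -1, 1, 0, 0, -1]
print(sum(line))  # 0 -- balanced, so no DC offset builds up
```

The running sum staying near zero is precisely the "DC offset tends back to zero" property the paragraph describes.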

See also: T-carrier and E-carrier

HISTORY

In the history of electrical communications, the earliest reason for sampling a

signal was to interlace samples from different telegraphy sources, and convey

them over a single telegraph cable. Telegraph time-division multiplexing

(TDM) was demonstrated as early as 1853 by the American inventor Moses B.

Farmer. The electrical engineer W. M. Miner, in 1903, used an electro-

mechanical commutator for time-division multiplex of multiple telegraph

signals, and also applied this technology to telephony. He obtained intelligible

speech from channels sampled at a rate above 3500–4300 Hz: below this was

unsatisfactory. This was TDM, but pulse-amplitude modulation (PAM) rather

than PCM.

In 1926, Paul M. Rainey of Western Electric patented a facsimile machine

which transmitted its signal using 5-bit PCM, encoded by an opto-mechanical

analog-to-digital converter.[3] The machine did not go into production. British

engineer Alec Reeves, unaware of previous work, conceived the use of PCM

for voice communication in 1937 while working for International Telephone

and Telegraph in France. He described the theory and advantages, but no


practical use resulted. Reeves filed for a French patent in 1938, and his U.S.

patent was granted in 1943.

The first transmission of speech by digital techniques was the SIGSALY

vocoder encryption equipment used for high-level Allied communications

during World War II from 1943. In 1943, the Bell Labs researchers who

designed the SIGSALY system became aware of the use of PCM binary coding

as already proposed by Alec Reeves. In 1949 for the Canadian Navy's DATAR

system, Ferranti Canada built a working PCM radio system that was able to

transmit digitized radar data over long distances.[4]

PCM in the late 1940s and early 1950s used a cathode-ray coding tube with a

plate electrode having encoding perforations.[5][6] As in an oscilloscope, the

beam was swept horizontally at the sample rate while the vertical deflection

was controlled by the input analog signal, causing the beam to pass through

higher or lower portions of the perforated plate. The plate collected or

passed the beam, producing current variations in binary code, one bit at a

time. Rather than natural binary, the grid of Goodall's later tube was

perforated to produce a glitch-free Gray code, and produced all bits

simultaneously by using a fan beam instead of a scanning beam.

The National Inventors Hall of Fame has honored Bernard M. Oliver [7] and

Claude Shannon [8] as the inventors of PCM,[9] as described in 'Communication

System Employing Pulse Code Modulation,' U.S. Patent 2,801,281 filed in

1946 and 1952, granted in 1956. Another patent by the same title was filed

by John R. Pierce in 1945, and issued in 1948: U.S. Patent 2,437,707. The

three of them published "The Philosophy of PCM" in 1948.[10]

Pulse-code modulation (PCM) was used in Japan by Denon in 1972 for the

mastering and production of analogue phonograph records, using a 2-inch

Quadruplex-format videotape recorder for its transport, but this was not

developed into a consumer product.

NOMENCLATURE

The word pulse in the term Pulse-Code Modulation refers to the "pulses" to be

found in the transmission line. This perhaps is a natural consequence of this

technique having evolved alongside two analog methods, pulse width

modulation and pulse position modulation, in which the information to be

encoded is in fact represented by discrete signal pulses of varying width or

position, respectively. In this respect, PCM bears little resemblance to these

other forms of signal encoding, except that all can be used in time division

multiplexing, and the binary numbers of the PCM codes are represented as

electrical pulses. The device that performs the coding and decoding function

in a telephone circuit is called a codec.

OPTICAL FIBER

An optical fiber or optical fibre is a thin, flexible, transparent fiber that acts

as a waveguide, or "light pipe", to transmit light between the two ends of the

fiber. The field of applied science and engineering concerned with the design

and application of optical fibers is known as fiber optics. Optical fibers are

widely used in fiber-optic communications, which permits transmission over

longer distances and at higher bandwidths (data rates) than other forms of

communication. Fibers are used instead of metal wires because signals travel

along them with less loss and are also immune to electromagnetic

interference. Fibers are also used for illumination, and are wrapped in

bundles so they can be used to carry images, thus allowing viewing in tight

spaces. Specially designed fibers are used for a variety of other applications,

including sensors and fiber lasers.

Optical fiber typically consists of a transparent core surrounded by a

transparent cladding material with a lower index of refraction. Light is kept

in the core by total internal reflection. This causes the fiber to act as a

waveguide. Fibers which support many propagation paths or transverse


modes are called multi-mode fibers (MMF), while those which can only

support a single mode are called single-mode fibers (SMF). Multi-mode fibers

generally have a larger core diameter, and are used for short-distance

communication links and for applications where high power must be

transmitted. Single-mode fibers are used for most communication links

longer than 1,050 meters (3,440 ft).

Joining lengths of optical fiber is more complex than joining electrical wire or cable. The ends of the fibers must be carefully cleaved and then spliced together, either mechanically or by fusing them with heat. Special optical fiber connectors are used to make removable connections.

OPTICAL FIBER COMMUNICATION

Main article: Fiber-optic communication

Optical fiber can be used as a medium for telecommunication and networking

because it is flexible and can be bundled as cables. It is especially

advantageous for long-distance communications, because light propagates

through the fiber with little attenuation compared to electrical cables. This

allows long distances to be spanned with few repeaters. Additionally, the per-

channel light signals propagating in the fiber have been modulated at rates as

high as 111 gigabits per second by NTT,[15][16] although 10 or 40 Gbit/s is

typical in deployed systems.[17][18] Each fiber can carry many independent

channels, each using a different wavelength of light (wavelength-division

multiplexing (WDM)). The net data rate (data rate without overhead bytes)

per fiber is the per-channel data rate reduced by the FEC overhead,

multiplied by the number of channels (usually up to eighty in commercial

dense WDM systems as of 2008). The current laboratory fiber optic data rate

record, held by Bell Labs in Villarceaux, France, is multiplexing 155 channels,

each carrying 100 Gbit/s over a 7000 km fiber.[19] Nippon Telegraph and

Telephone Corporation have also managed 69.1 Tbit/s over a single 240 km

fiber (multiplexing 432 channels, equating to 171 Gbit/s per channel).[20] Bell

Labs also broke a 100 Petabit per second kilometer barrier (15.5 Tbit/s over

a single 7000 km fiber).[21]

For short distance applications, such as creating a network within an office

building, fiber-optic cabling can be used to save space in cable ducts. This is

because a single fiber can often carry much more data than many electrical

cables, such as 4-pair Cat-5 Ethernet cabling. Fiber is also immune to

electrical interference; there is no cross-talk between signals in different

cables and no pickup of environmental noise. Non-armored fiber cables do

not conduct electricity, which makes fiber a good solution for protecting

communications equipment located in high voltage environments such as

power generation facilities, or metal communication structures prone to

lightning strikes. They can also be used in environments where explosive

fumes are present, without danger of ignition. Wiretapping is more difficult

compared to electrical connections, and there are concentric dual core fibers

that are said to be tap-proof.[22]

FIBER OPTIC SENSORS

Main article: Fiber optic sensor

Fibers have many uses in remote sensing. In some applications, the sensor is

itself an optical fiber. In other cases, fiber is used to connect a non-fiberoptic

sensor to a measurement system. Depending on the application, fiber may be

used because of its small size, or the fact that no electrical power is needed at

the remote location, or because many sensors can be multiplexed along the

length of a fiber by using different wavelengths of light for each sensor, or by

sensing the time delay as light passes along the fiber through each sensor.

Time delay can be determined using a device such as an optical time-domain

reflectometer.

Optical fibers can be used as sensors to measure strain, temperature,

pressure and other quantities by modifying a fiber so that the quantity to be


measured modulates the intensity, phase, polarization, wavelength or transit

time of light in the fiber. Sensors that vary the intensity of light are the

simplest, since only a simple source and detector are required. A particularly

useful feature of such fiber optic sensors is that they can, if required, provide

distributed sensing over distances of up to one meter.

Extrinsic fiber optic sensors use an optical fiber cable, normally a multi-mode

one, to transmit modulated light from either a non-fiber optical sensor, or an

electronic sensor connected to an optical transmitter. A major benefit of

extrinsic sensors is their ability to reach places which are otherwise

inaccessible. An example is the measurement of temperature inside aircraft

jet engines by using a fiber to transmit radiation into a radiation pyrometer

located outside the engine. Extrinsic sensors can also be used in the same

way to measure the internal temperature of electrical transformers, where

the extreme electromagnetic fields present make other measurement

techniques impossible. Extrinsic sensors are used to measure vibration,

rotation, displacement, velocity, acceleration, torque, and twisting. A solid

state version of the gyroscope using the interference of light has been

developed. The fiber optic gyroscope (FOG) has no moving parts and exploits

the Sagnac effect to detect mechanical rotation.

A common use for fiber optic sensors is in advanced intrusion detection security systems: light is transmitted along a fiber optic sensor cable placed on a fence, pipeline or communication cabling, and the returned signal is monitored and analysed for disturbances. The return signal is digitally processed to identify disturbances, and if an intrusion has occurred an alarm is triggered by the fiber optic security system.

PRINCIPLE OF OPERATION

An optical fiber is a cylindrical dielectric waveguide (nonconducting waveguide) that transmits light along its axis, by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer, both of which are made of dielectric materials. To confine the optical signal in the core, the refractive index of the core must be greater than that of the

cladding. The boundary between the core and cladding may either be abrupt, in step-index fiber, or gradual, in graded-index fiber.
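The guidance condition above (core index greater than cladding index) fixes the critical angle at the core-cladding boundary and the fiber's numerical aperture. The index values in this sketch are typical silica-fiber numbers chosen for illustration:

```python
import math

# Sketch: guidance parameters of a step-index fiber. The core index
# n1 must exceed the cladding index n2 for total internal reflection;
# the critical angle and numerical aperture follow directly.
# Index values are typical silica-fiber numbers, chosen to illustrate.

n1, n2 = 1.48, 1.46  # core and cladding refractive indices

theta_c = math.degrees(math.asin(n2 / n1))  # critical angle at boundary
na = math.sqrt(n1**2 - n2**2)               # numerical aperture
accept = math.degrees(math.asin(na))        # acceptance-cone half-angle

print(f"critical angle       : {theta_c:.1f} deg")
print(f"numerical aperture   : {na:.3f}")
print(f"acceptance half-angle: {accept:.1f} deg")
```

Only rays striking the boundary at more than the critical angle (equivalently, entering within the acceptance cone) are trapped in the core by total internal reflection.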

WIRELESS


In telecommunications, wireless communication may be used to transfer

information over short distances (a few meters as in television remote

control) or long distances (thousands or millions of kilometers for radio

communications). The term is often shortened to "wireless". It encompasses

various types of fixed, mobile, and portable two-way radios, cellular

telephones, personal digital assistants (PDAs), and wireless networking.

Other examples of wireless technology include GPS units, garage door

openers, wireless computer mice, keyboards and headsets,

satellite television and cordless telephones.


INTRODUCTION

Handheld wireless radios such as maritime VHF radio transceivers use
electromagnetic waves to implement a form of wireless communications
technology.

Wireless operation permits services, such as long-range communications,

that are impossible or impractical to implement with the use of wires. The

term is commonly used in the telecommunications industry to refer to

telecommunications systems (e.g. radio transmitters and receivers, remote

controls, computer networks, network terminals, etc.) which use some form

of energy (e.g. radio frequency (RF), infrared light, laser light, visible light,

acoustic energy, etc.) to transfer information without the use of wires.[1]

Information is transferred in this manner over both short and long distances.

WIRELESS SERVICES

The term "wireless" has become a generic and all-encompassing word used

to describe communications in which electromagnetic waves or RF (rather

than some form of wire) carry a signal over part or the entire communication

path. Common examples of wireless equipment in use today include:

Professional LMR (Land Mobile Radio) and SMR (Specialized Mobile

Radio) typically used by business, industrial and Public Safety

entities.

Consumer two-way radio, including FRS (Family Radio Service), GMRS

(General Mobile Radio Service) and Citizens band ("CB") radios.

The Amateur Radio Service (Ham radio).

Consumer and professional Marine VHF radios.

Cellular telephones and pagers: provide connectivity for portable and

mobile applications, both personal and business.

Global Positioning System (GPS): allows drivers of cars and trucks,

captains of boats and ships, and pilots of aircraft to ascertain their

location anywhere on earth.

Cordless computer peripherals : the cordless mouse is a common

example; keyboards and printers can also be linked to a computer via

wireless.

Cordless telephone sets: these are limited-range devices, not to be

confused with cell phones.

Satellite television: broadcast from satellites in geostationary orbit.

Typical services use digital broadcasting to provide multiple channels

to viewers.

WIRELESS NETWORKS

Wireless networking (i.e. the various types of unlicensed 2.4 GHz WiFi

devices) is used to meet many needs. Perhaps the most common use is to


connect laptop users who travel from location to location. Another common

use is for mobile networks that connect via satellite. A wireless transmission

method is a logical choice to network a LAN segment that must frequently

change locations. The following situations justify the use of wireless

technology:

To span a distance beyond the capabilities of typical cabling,

To provide a backup communications link in case of normal network

failure,

To link portable or temporary workstations,

To overcome situations where normal cabling is difficult or

financially impractical, or

To remotely connect mobile users or networks.

MODES

Wireless communication can be via:

radio frequency communication,

microwave communication, for example long-range line-of-sight via

highly directional antennas, or short-range communication, or

infrared (IR) short-range communication, for example from remote

controls or via Infrared Data Association (IrDA).

Applications may involve point-to-point communication, point-to-multipoint

communication, broadcasting, cellular networks and other wireless

networks.

CORDLESS

The term "wireless" should not be confused with the term "cordless", which

is generally used to refer to powered electrical or electronic devices that are

able to operate from a portable power source (e.g. a battery pack) without

any cable or cord to limit the mobility of the cordless device through a

connection to the mains power supply.

Some cordless devices, such as cordless telephones, are also wireless in the

sense that information is transferred from the cordless telephone to the

telephone's base unit via some type of wireless communications link. This

has caused some disparity in the usage of the term "cordless", for example in

Digital Enhanced Cordless Telecommunications.

HISTORY

PHOTOPHONE

Main article: Photophone

The world's first wireless telephone conversation occurred in 1880, when

Alexander Graham Bell and Charles Sumner Tainter invented and patented

the photophone, a telephone that conducted audio conversations wirelessly

over modulated light beams (which are narrow projections of

electromagnetic waves). In that distant era when utilities did not yet exist to

provide electricity, and lasers had not even been conceived of in science

fiction, there were no practical applications for their invention, which was

highly limited by the availability of both sunlight and good weather. Similar

to free space optical communication, the photophone also required a clear

line of sight between its transmitter and its receiver. It would be several

decades before the photophone's principles found their first practical

applications in military communications and later in fiber-optic

communications.

EARLY WIRELESS WORK


Main article: Wireless telegraphy

David E. Hughes, eight years before Hertz's experiments, transmitted radio

signals over a few hundred yards by means of a clockwork keyed transmitter.

As this was before Maxwell's work was understood, Hughes' contemporaries

dismissed his achievement as mere "Induction". In 1885, T. A. Edison used a

vibrator magnet for induction transmission. In 1888, Edison deployed a

system of signaling on the Lehigh Valley Railroad. In 1891, Edison obtained

the wireless patent for this method using inductance (U.S. Patent 465,971).

In the history of wireless technology, the demonstration of the theory of

electromagnetic waves by Heinrich Hertz in 1888 was important.[2][3] The

theory of electromagnetic waves was predicted from the research of James

Clerk Maxwell and Michael Faraday. Hertz demonstrated that electromagnetic waves could be transmitted, made to travel through space in straight lines, and received by an experimental apparatus.[2][3] Hertz did not follow up on the experiments.

Jagadish Chandra Bose around this time developed an early wireless

detection device and helped increase the knowledge of millimeter length

electromagnetic waves.[4] Practical applications of wireless radio

communication and radio remote control technology were implemented by

later inventors, such as Nikola Tesla.

Further information: Invention of radio

RADIO

Main article: History of radio

The term "wireless" came into public use to refer to a radio receiver or

transceiver (a dual purpose receiver and transmitter device), establishing its

usage in the field of wireless telegraphy early on; now the term is used to

describe modern wireless connections such as in cellular networks and

wireless broadband Internet. It is also used in a general sense to refer to any

type of operation that is implemented without the use of wires, such as

"wireless remote control" or "wireless energy transfer", regardless of the

specific technology (e.g. radio, infrared, ultrasonic) used. Guglielmo Marconi

and Karl Ferdinand Braun were awarded the 1909 Nobel Prize for Physics for

their contribution to wireless telegraphy.

THE ELECTROMAGNETIC SPECTRUM

Light, colors, AM and FM radio, and electronic devices make use of the

electromagnetic spectrum. In the US, the frequencies that are available for

use for communication are treated as a public resource and are regulated by

the Federal Communications Commission, which determines which frequency

ranges can be used for what purpose and by whom. In the absence of such

control or alternative arrangements such as a privatized electromagnetic

spectrum, chaos might result if, for example, airlines didn't have specific

frequencies to work under and an amateur radio operator were interfering

with the pilot's ability to land an airplane. Wireless communication spans the

spectrum from 9 kHz to 300 GHz. (Also see Spectrum management)

APPLICATIONS OF WIRELESS TECHNOLOGY

SECURITY SYSTEMS

Wireless technology may supplement or replace hard wired implementations

in security systems for homes or office buildings.

CELLULAR TELEPHONE (PHONES AND MODEMS)

Perhaps the best known examples of wireless technology are the cellular telephone and modem. These instruments use radio waves to enable the

operator to make phone calls from many locations worldwide. They can be


used anywhere that there is a cellular telephone site to house the equipment

that is required to transmit and receive the signal that is used to transfer

both voice and data to and from these instruments.

WI-FI

Main article: Wi-Fi

Wi-Fi is a wireless local area network that enables portable computing

devices to connect easily to the Internet. Standardized as IEEE 802.11a/b/g/n,

Wi-Fi approaches speeds of some types of wired Ethernet. Wi-Fi hot spots

have been popular over the past few years. Some businesses charge

customers a monthly fee for service, while others have begun offering it for

free in an effort to increase the sales of their goods.[5]

WIRELESS ENERGY TRANSFER

Main article: Wireless energy transfer

Wireless energy transfer is a process whereby electrical energy is

transmitted from a power source to an electrical load that does not have a

built-in power source, without the use of interconnecting wires.

COMPUTER INTERFACE DEVICES

Answering the call of customers frustrated with cord clutter, many

manufacturers of computer peripherals turned to wireless technology to

satisfy their consumer base. Originally these units used bulky, highly limited

transceivers to mediate between a computer and a keyboard and mouse,

however more recent generations have used small, high quality devices,

some even incorporating Bluetooth. These systems have become so

ubiquitous that some users have begun complaining about a lack of wired

peripherals. Wireless devices tend to have a slightly slower response time than their wired counterparts, though the gap is decreasing. Initial

concerns about the security of wireless keyboards have also been addressed

with the maturation of the technology.

CATEGORIES OF WIRELESS IMPLEMENTATIONS, DEVICES AND STANDARDS

Radio communication system

Broadcasting

Amateur radio

Land Mobile Radio or Professional Mobile Radio: TETRA, P25,

OpenSky, EDACS, DMR, dPMR

Communication radio

Cordless telephony: DECT (Digital Enhanced Cordless

Telecommunications)

Cellular networks : 0G, 1G, 2G, 3G, Beyond 3G (4G), Future wireless

List of emerging technologies

Short-range point-to-point communication : Wireless microphones,

Remote controls, IrDA, RFID (Radio Frequency Identification),

Wireless USB, DSRC (Dedicated Short Range Communications),

EnOcean, Near Field Communication

Wireless sensor networks: ZigBee, EnOcean; Personal area networks,

Bluetooth, TransferJet, Ultra-wideband (UWB from WiMedia

Alliance).

Wireless networks: Wireless LAN (WLAN) (IEEE 802.11, branded as

Wi-Fi and HiperLAN), Wireless Metropolitan Area Networks (WMAN)


and Broadband Fixed Access (BWA) (LMDS, WiMAX, AIDAAS and

HiperMAN)

MICROWAVE TRANSMISSION


(Figure: the atmospheric attenuation of microwaves in dry air with a precipitable water vapor level of 0.001 mm; the downward spikes in the graph correspond to frequencies at which microwaves are absorbed more strongly, such as by oxygen molecules.)

Microwave transmission refers to the technology of transmitting information by means of radio waves whose wavelengths are conveniently measured in small numbers of centimeters; these waves are called microwaves. This part of the radio spectrum

ranges across frequencies of roughly 1.0 gigahertz (GHz) to 30 GHz. These

correspond to wavelengths from 30 centimeters down to 1.0 cm.
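These band limits follow from λ = c / f, which is easy to sanity-check; a small Python sketch (function name ours, for illustration):

```python
# Check the quoted microwave band limits with lambda = c / f.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_cm(freq_hz):
    """Free-space wavelength in centimeters."""
    return C / freq_hz * 100.0

for ghz in (1.0, 30.0):
    print(f"{ghz:4.0f} GHz -> {wavelength_cm(ghz * 1e9):.1f} cm")
# 1 GHz gives about 30 cm and 30 GHz about 1 cm, matching the text.
```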

In the microwave frequency band, antennas are usually of convenient sizes

and shapes, and also the use of metal waveguides for carrying the radio

power works well. Furthermore, with solid-state electronics and traveling-wave tube technologies that have been developed since the early 1960s, the electronics needed for microwave radio transmission have become readily available.

Microwave radio transmission is commonly used by communication systems

on the surface of the Earth, in satellite communications, and in deep space

radio communications. Other parts of the microwave radio band are used for

radars, radio navigation systems, sensor systems, and radio astronomy.

The next higher part of the radio electromagnetic spectrum, where the frequencies are above 30 GHz and below 100 GHz, is called "millimeter waves" because their wavelengths are conveniently measured in millimeters,

and their wavelengths range from 10 mm down to 3.0 mm. Radio waves in

this band are usually strongly attenuated by the Earth's atmosphere and particles contained in it, especially during wet weather. Also, in a wide band of frequencies around 60 GHz, the radio waves are strongly attenuated by

molecular oxygen in the atmosphere. The electronic technologies needed in

the millimeter wave band are also much more difficult to utilize than those of

the microwave band.

CONTENTS

1 Properties
2 Uses
3 Parabolic (microwave) antenna
4 Microwave power transmission
    4.1 History
    4.2 Common safety concerns
    4.3 Proposed uses
    4.4 Current status
5 Microwave radio relay
    5.1 How microwave radio relay links are formed
    5.2 Planning considerations
    5.3 Over-horizon microwave radio relay
    5.4 Usage of microwave radio relay systems
    5.5 Microwave link
        5.5.1 Properties of microwave links
        5.5.2 Uses of microwave links
    5.6 Tunable microwave device
6 See also
7 References
8 External links

PROPERTIES

Suitable over line-of-sight transmission links without obstacles

Provides good bandwidth

Affected by rain, vapor, dust, snow, cloud, mist and fog, heavy

moisture, depending on chosen frequency (see rain fade)

USES

Backbone or backhaul carriers in cellular networks. Used to link BTS-

BSC and BSC-MSC.

Communication with satellites

Microwave radio relay links for television and telephone service

providers

PARABOLIC (MICROWAVE) ANTENNA

Main article: Parabolic antenna

A parabolic antenna is a high-gain reflector antenna used for radio,

television and data communications, and also for radiolocation (radar), on

the UHF and SHF parts of the electromagnetic spectrum. The relatively short

wavelength of electromagnetic radiation at these frequencies allows

reasonably sized reflectors to exhibit the desired highly directional response

for both receiving and transmitting.
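The directivity claim can be made concrete with the standard aperture-gain estimate G = η(πD/λ)². The sketch below assumes a typical 55% aperture efficiency, a figure not taken from this text:

```python
from math import pi, log10

C = 299_792_458.0  # speed of light in vacuum, m/s

def parabolic_gain_dbi(diameter_m, freq_hz, efficiency=0.55):
    """Approximate parabolic-reflector gain: G = efficiency * (pi * D / lambda)^2."""
    wavelength_m = C / freq_hz
    gain_linear = efficiency * (pi * diameter_m / wavelength_m) ** 2
    return 10.0 * log10(gain_linear)

# A 4 m dish (a diameter quoted later for relay installations) at 6 GHz:
print(f"{parabolic_gain_dbi(4.0, 6e9):.1f} dBi")  # about 45 dBi
```

The D/λ squared dependence is why "reasonably sized" reflectors only become practical at UHF/SHF wavelengths.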

MICROWAVE POWER TRANSMISSION

Microwave power transmission (MPT) is the use of microwaves to

transmit power through outer space or the atmosphere without the need for

wires. It is a sub-type of the more general wireless energy transfer methods.

HISTORY

Following World War II, which saw the development of high-power

microwave emitters known as cavity magnetrons, the idea of using

microwaves to transmit power was researched. In 1964, William C. Brown

demonstrated a miniature helicopter equipped with a combination antenna


and rectifier device called a rectenna. The rectenna converted microwave

power into electricity, allowing the helicopter to fly.[1] In principle, the

rectenna is capable of very high conversion efficiencies, over 90% in optimal

circumstances.

Most proposed MPT systems now include a phased array microwave transmitter. While these have lower efficiency levels, they have the advantage

of being electrically steered using no moving parts, and are easier to scale to

the necessary levels that a practical MPT system requires.

Using microwave power transmission to deliver electricity to communities

without having to build cable-based infrastructure is being studied at Grand

Bassin on Reunion Island in the Indian Ocean.

COMMON SAFETY CONCERNS

The common reaction to microwave transmission is one of concern, as

microwaves are generally perceived by the public as dangerous forms of radiation, a perception stemming from the fact that they are used in microwave ovens.

While high power microwaves can be painful and dangerous as in the United

States Military's Active Denial System, MPT systems are generally proposed

to have only low intensity at the rectenna.

Though this would be extremely safe as the power levels would be about

equal to the leakage from a microwave oven, and only slightly more than a

cell phone, the relatively diffuse microwave beam necessitates a large

rectenna area for a significant amount of energy to be transmitted.

Research has involved exposing multiple generations of animals to

microwave radiation of this or higher intensity, and no health issues have

been found.[2]

PROPOSED USES

Main article: Solar power satellite

MPT is the most commonly proposed method for transferring energy to the

surface of the Earth from solar power satellites or other in-orbit power

sources. MPT is occasionally proposed as the power supply for beam-powered propulsion of orbital-lift spacecraft. Even though lasers are more

commonly proposed, their low efficiency in light generation and reception

has led some designers to opt for microwave based systems.

CURRENT STATUS

Wireless Power Transmission (using microwaves) is well proven.

Experiments in the tens of kilowatts have been performed at Goldstone in

California in 1975[3][4][5] and more recently (1997) at Grand Bassin on

Reunion Island.[6] In 2008 a long range transmission experiment successfully

transmitted 20 watts 92 miles from a mountain on Maui to the main island of

Hawaii.[7]

MICROWAVE RADIO RELAY


Microwave radio relay is a technology for transmitting digital and analog

signals, such as long-distance telephone calls and the relay of television


programs to transmitters, between two locations on a line of sight radio path.

In microwave radio relay, radio waves are transmitted between the two

locations with directional antennas, forming a fixed radio connection

between the two points. Long daisy-chained series of such links form

transcontinental telephone and/or television communication systems.

HOW MICROWAVE RADIO RELAY LINKS ARE FORMED


Because a line of sight radio link is made, the radio frequencies used occupy

only a narrow path between stations (with the exception of a certain radius

of each station). Antennas used must have a high directive effect; these

antennas are installed in elevated locations such as large radio towers in

order to be able to transmit across long distances. Typical types of antenna

used in radio relay link installations are parabolic reflectors, shell antennas

and horn radiators, which have a diameter of up to 4 meters. Highly directive

antennas permit an economical use of the available frequency spectrum,

despite long transmission distances.


PLANNING CONSIDERATIONS

Because of the high frequencies used, a quasi-optical line of sight between the

stations is generally required. Additionally, in order to form the line of sight

connection between the two stations, the first Fresnel zone must be free from

obstacles so the radio waves can propagate across a nearly uninterrupted

path. Obstacles in the signal field cause unwanted attenuation, and are as a

result only acceptable in exceptional cases. High mountain peak or ridge

positions are often ideal: Europe's highest radio relay station, the

Richtfunkstation Jungfraujoch, is situated atop the Jungfraujoch ridge at an

altitude of 3,705 meters (12,156 ft) above sea level.



Obstacles, the curvature of the Earth, the geography of the area and reception

issues arising from the use of nearby land (such as in manufacturing and

forestry) are important issues to consider when planning radio links. In the

planning process, it is essential that "path profiles" are produced, which

provide information about the terrain and Fresnel zones affecting the

transmission path. The presence of a water surface, such as a lake or river, in

the mid-path region also must be taken into consideration as it can result in a

near-perfect reflection (even modulated by wave or tide motions), creating

multipath distortion as the two received signals ("wanted" and "unwanted")

swing in and out of phase. Multipath fades are usually deep only in a small

spot and a narrow frequency band, so space and frequency diversity schemes

were usually applied in the third quarter of the 20th century.

The effects of atmospheric stratification cause the radio path to bend downward in a typical situation, so greater distances are possible, as the equivalent Earth radius increases from 6370 km to about 8500 km (a 4/3 equivalent-radius effect). Unusual temperature, humidity and pressure profiles versus height may produce large deviations and distortion of the

propagation and affect transmission quality. High intensity rain and snow

must also be considered as an impairment factor, especially at frequencies

above 10 GHz. All previous factors, collectively known as path loss, make it

necessary to compute suitable power margins, in order to maintain the link

operative for a high percentage of time, like the standard 99.99% or 99.999%

used in 'carrier class' services of most telecommunication operators.
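A rough sketch of what the 4/3 effect buys: with equivalent radius kR, the smooth-earth radio horizon of one antenna at height h is d ≈ √(2kRh), and a link can span roughly the sum of both antennas' horizons. This is a simplified model that ignores obstacles and Fresnel-zone clearance:

```python
from math import sqrt

EARTH_RADIUS_KM = 6370.0
K_FACTOR = 4.0 / 3.0  # standard-atmosphere k-factor from the text

def effective_radius_km(k=K_FACTOR):
    """Equivalent Earth radius under refraction: about 8493 km for k = 4/3."""
    return k * EARTH_RADIUS_KM

def radio_horizon_km(antenna_height_m, k=K_FACTOR):
    """Smooth-earth distance to the radio horizon: d = sqrt(2 * k * R * h)."""
    r_m = effective_radius_km(k) * 1000.0
    return sqrt(2.0 * r_m * antenna_height_m) / 1000.0

# Two 50 m towers: the maximum smooth-earth path is the sum of both horizons.
print(f"{radio_horizon_km(50.0) + radio_horizon_km(50.0):.1f} km")  # about 58 km
```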


OVER-HORIZON MICROWAVE RADIO RELAY

In over-horizon, or tropospheric scatter, microwave radio relay, unlike a

standard microwave radio relay link, the sending and receiving antennas do

not use a line of sight transmission path. Instead, the stray signal

transmission, known as "troposcatter" or simply "scatter," from the sent

signal is picked up by the receiving station. Signal clarity obtained by this

method depends on the weather and other factors, and as a result a high level

of technical difficulty is involved in the creation of a reliable over horizon

radio relay link. Over horizon radio relay links are therefore only used where

standard radio relay links are unsuitable (for example, in providing a

microwave link to an island).

USAGE OF MICROWAVE RADIO RELAY SYSTEMS

During the 1950s the AT&T Communications system of microwave radio

grew to carry the majority of US Long Distance telephone traffic, as well as

intercontinental television network signals. The prototype was called TDX

and was tested with a connection between New York City and Murray Hill,


the location of Bell Laboratories in 1946. The TDX system was set up

between New York and Boston in 1947. The TDX was improved to the TD2,

which still used klystrons, and then later to the TD3 that used solid state

electronics. The main motivation in 1946 to use microwave radio instead of

cable was that a large capacity could be installed quickly and at less cost. It

was expected at that time that the annual operating costs for microwave

radio would be greater than for cable. There were two main reasons that a

large capacity had to be introduced suddenly: Pent up demand for long

distance telephone service, because of the hiatus during the war years, and

the new medium of television, which needed more bandwidth than radio.

Similar systems were soon built in many countries, until the 1980s when the

technology lost its share of fixed operation to newer technologies such as

fiber-optic cable and optical radio relay links, both of which offer larger data

capacities at lower cost per bit. Communication satellites, which are also

microwave radio relays, better retained their market share, especially for

television.

At the turn of the 21st century, microwave radio relay systems were being used

increasingly in portable radio applications. The technology is particularly

suited to this application because of lower operating costs, a more efficient

infrastructure, and provision of direct hardware access to the portable radio

operator.

MICROWAVE LINK

A microwave link is a communications system that uses a beam of radio

waves in the microwave frequency range to transmit video, audio, or data

between two locations, which can be from just a few feet or meters to several

miles or kilometers apart. Microwave links are commonly used by television

broadcasters to transmit programmes across a country, for instance, or from

an outside broadcast back to a studio.

Mobile units can be camera mounted, allowing cameras the freedom to move

around without trailing cables. These are often seen on the touchlines of

sports fields on Steadicam systems.

PROPERTIES OF MICROWAVE LINKS

Involve line of sight (LOS) communication technology

Affected greatly by environmental constraints, including rain fade

Have limited penetration capabilities

Sensitive to high pollen count

Signals can be degraded during Solar proton events [8]

USES OF MICROWAVE LINKS

In communications between satellites and base stations

As backbone carriers for cellular systems

In short range indoor communications

TUNABLE MICROWAVE DEVICE

A tunable microwave device is a device that operates in the radio-frequency range and whose response can be dynamically tuned, typically by an applied electric field. The material systems for such devices usually have a multilayer structure; commonly, a magnetic or ferroelectric film on a ferrite or superconducting film is adopted. The magnetic or ferroelectric film serves as the tunable component that controls the working frequency of the whole system. Devices of this type include tunable varactors, tunable microwave filters, tunable phase shifters, and tunable resonators. Their main application is re-configurable microwave networks, for example reconfigurable wireless communication, wireless networks, and reconfigurable phased-array antennas.[9][10]


CODE DIVISION MULTIPLE ACCESS


Code division multiple access (CDMA) is a channel access method used by

various radio communication technologies. It should not be confused with

the mobile phone standards called cdmaOne and CDMA2000 (which are

often referred to as simply CDMA), which use CDMA as an underlying channel

access method.

One of the basic concepts in data communication is the idea of allowing

several transmitters to send information simultaneously over a single

communication channel. This allows several users to share a band of

frequencies (see bandwidth). This concept is called Multiple Access. CDMA

employs spread-spectrum technology and a special coding scheme (where

each transmitter is assigned a code) to allow multiple users to be multiplexed

over the same physical channel. By contrast, time division multiple access

(TDMA) divides access by time, while frequency-division multiple access

(FDMA) divides it by frequency. CDMA is a form of spread-spectrum

signalling, since the modulated coded signal has a much higher data

bandwidth than the data being communicated.

An analogy to the problem of multiple access is a room (channel) in which

people wish to talk to each other simultaneously. To avoid confusion, people

could take turns speaking (time division), speak at different pitches

(frequency division), or speak in different languages (code division). CDMA is

analogous to the last example where people speaking the same language can

understand each other, but other languages are perceived as noise and

rejected. Similarly, in radio CDMA, each group of users is given a shared code.

Many codes occupy the same channel, but only users associated with a

particular code can communicate.

CONTENTS

1 Uses
2 Steps in CDMA Modulation
3 Code division multiplexing (Synchronous CDMA)
    3.1 Example
4 Asynchronous CDMA
    4.1 Advantages of asynchronous CDMA over other techniques
    4.2 Spread-spectrum characteristics of CDMA
5 See also
6 References
7 External links

USES

One of the early applications for code division multiplexing is in GPS. This

predates and is distinct from cdmaOne.

The Qualcomm standard IS-95, marketed as cdmaOne.

The Qualcomm standard IS-2000, known as CDMA2000. This

standard is used by several mobile phone companies, including the

Globalstar satellite phone network.

CDMA has been used in the OmniTRACS satellite system for

transportation logistics.

STEPS IN CDMA MODULATION

CDMA is a spread spectrum multiple access[1] technique. A spread spectrum

technique spreads the bandwidth of the data uniformly for the same

transmitted power. The spreading code is a pseudo-random code that has a


narrow ambiguity function, unlike other narrow pulse codes. In CDMA a

locally generated code runs at a much higher rate than the data to be

transmitted. Data for transmission is combined via bitwise XOR (exclusive OR) with the faster code: the data signal, with pulse duration Tb, is XORed with the code signal, with pulse duration Tc. (Note: bandwidth is proportional to 1 / T

where T = bit time) Therefore, the bandwidth of the data signal is 1 / Tb and

the bandwidth of the spread spectrum signal is 1 / Tc. Since Tc is much

smaller than Tb, the bandwidth of the spread spectrum signal is much larger

than the bandwidth of the original signal. The ratio Tb / Tc is called the spreading factor or processing gain, and determines to a certain extent the

upper limit of the total number of users supported simultaneously by a base

station.[2]
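A minimal sketch of this spreading operation in plain Python (the 8-chip code below is an arbitrary illustrative pattern, not a real PN sequence; SF plays the role of Tb / Tc):

```python
# Direct-sequence spreading sketch: each data bit (duration Tb) is XORed
# with SF chips of a faster code (chip duration Tc), so SF = Tb / Tc.
SF = 8                           # spreading factor (processing gain)
code = [1, 0, 1, 1, 0, 1, 0, 0]  # one code period: SF chips per data bit

def spread(data_bits, code):
    """XOR every data bit with each chip of the code."""
    return [b ^ c for b in data_bits for c in code]

def despread(chips, code):
    """XOR with the code again to strip it, then majority-vote per bit."""
    sf = len(code)
    bits = []
    for i in range(0, len(chips), sf):
        votes = sum(chips[i + j] ^ code[j] for j in range(sf))
        bits.append(1 if votes > sf // 2 else 0)
    return bits

tx = spread([1, 0, 1], code)
print(len(tx))             # 24 chips: bandwidth expanded by SF = 8
print(despread(tx, code))  # [1, 0, 1]
```

The majority vote lets the despreader tolerate a few flipped chips, which is one way the processing gain buys noise resistance.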

Each user in a CDMA system uses a different code to modulate their signal.

Choosing the codes used to modulate the signal is very important in the

performance of CDMA systems. The best performance will occur when there

is good separation between the signal of a desired user and the signals of

other users. The separation of the signals is made by correlating the received

signal with the locally generated code of the desired user. If the signal

matches the desired user's code then the correlation function will be high

and the system can extract that signal. If the desired user's code has nothing

in common with the signal the correlation should be as close to zero as

possible (thus eliminating the signal); this is referred to as cross correlation.

If the code is correlated with the signal at any time offset other than zero, the

correlation should be as close to zero as possible. This is referred to as auto-

correlation and is used to reject multi-path interference.[3]

In general, CDMA belongs to two basic categories: synchronous (orthogonal

codes) and asynchronous (pseudorandom codes).

CODE DIVISION MULTIPLEXING (SYNCHRONOUS CDMA)

Synchronous CDMA exploits mathematical properties of orthogonality

between vectors representing the data strings. For example, binary string

1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, the sum of the products of their respective components. If the dot product is zero, the two vectors are said to be

components. If the dot product is zero, the two vectors are said to be

orthogonal to each other (note: if u = (a, b) and v = (c, d), the dot product u·v

= ac + bd). Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then a·b = 0 and:

    a·(a + b) = a·a + a·b = ||a||²
    a·(−a + b) = −a·a + a·b = −||a||²
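These orthogonality properties can be checked numerically. The sketch below builds mutually orthogonal codes by Hadamard doubling, which is one standard construction of Walsh codes:

```python
def walsh(n):
    """n x n Walsh-Hadamard matrix with +/-1 entries (n a power of two)."""
    h = [[1]]
    while len(h) < n:
        # Hadamard doubling: [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

W = walsh(8)  # 8 codes here; IS-95 uses the same construction with 64
# Distinct codes have zero cross-correlation; each code autocorrelates at n.
assert all(dot(W[i], W[j]) == 0 for i in range(8) for j in range(8) if i != j)
assert all(dot(W[i], W[i]) == 8 for i in range(8))
print("8 mutually orthogonal Walsh codes")
```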

Each user in synchronous CDMA uses a code orthogonal to the others' codes

to modulate their signal. Orthogonal codes have a cross-correlation


equal to zero; in other words, they do not interfere with each other. In the

case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since the 64 Walsh codes are mutually orthogonal, the signals are channelized into 64 orthogonal signals. The following

example demonstrates how each user's signal can be encoded and decoded.

EXAMPLE


Start with a set of vectors that are mutually orthogonal. (Although mutual

orthogonality is the only condition, these vectors are usually constructed for

ease of decoding, for example columns or rows from Walsh matrices.) These

vectors will be assigned to individual users and are called the code, chip code,

or chipping code. In the interest of brevity, the rest of this example uses

codes, v, with only 2 bits.

Each user is associated with a different code, say v. A 1 bit is represented by

transmitting a positive code, v, and a 0 bit is represented by a negative code,

–v. For example, if v = (1, –1) and the data that the user wishes to transmit is

(1, 0, 1, 1), then the transmitted symbols would be (1, –1, 1, 1) ⊗ v = (v0, v1, –

v0, –v1, v0, v1, v0, v1) = (1, –1, –1, 1, 1, –1, 1, –1), where ⊗ is the Kronecker

product. For the purposes of this article, we call this constructed vector the

transmitted vector.

Each sender has a different, unique vector v chosen from that set, but the

construction method of the transmitted vector is identical.

Now, due to physical properties of interference, if two signals at a point are in

phase, they add to give twice the amplitude of each signal, but if they are out

of phase, they subtract and give a signal that is the difference of the

amplitudes. Digitally, this behaviour can be modelled by the addition of the

transmission vectors, component by component.

If sender0 has code (1, –1) and data (1, 0, 1, 1), and sender1 has code (1, 1)

and data (0, 0, 1, 1), and both senders transmit simultaneously, then this

table describes the coding steps:

Step 0: code0 = (1, –1), data0 = (1, 0, 1, 1)
        code1 = (1, 1), data1 = (0, 0, 1, 1)
Step 1: encode0 = 2(1, 0, 1, 1) – (1, 1, 1, 1) = (1, –1, 1, 1)
        encode1 = 2(0, 0, 1, 1) – (1, 1, 1, 1) = (–1, –1, 1, 1)
Step 2: signal0 = encode0 ⊗ code0 = (1, –1, 1, 1) ⊗ (1, –1) = (1, –1, –1, 1, 1, –1, 1, –1)
        signal1 = encode1 ⊗ code1 = (–1, –1, 1, 1) ⊗ (1, 1) = (–1, –1, –1, –1, 1, 1, 1, 1)

Page 47: Line Coding

Because signal0 and signal1 are transmitted at the same time into the air,

they add to produce the raw signal:

(1, –1, –1, 1, 1, –1, 1, –1) + (–1, –1, –1, –1, 1, 1, 1, 1) = (0, –2, –2, 0, 2, 0,

2, 0)

This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the interference pattern with that sender's code. The following table explains how this works and shows that the signals do not interfere with one another:

Step  Decode sender0                                          Decode sender1
0     code0 = (1, –1), signal = (0, –2, –2, 0, 2, 0, 2, 0)    code1 = (1, 1), signal = (0, –2, –2, 0, 2, 0, 2, 0)
1     decode0 = pattern.vector0                               decode1 = pattern.vector1
2     decode0 = ((0, –2), (–2, 0), (2, 0), (2, 0)).(1, –1)    decode1 = ((0, –2), (–2, 0), (2, 0), (2, 0)).(1, 1)
3     decode0 = ((0 + 2), (–2 + 0), (2 + 0), (2 + 0))         decode1 = ((0 – 2), (–2 + 0), (2 + 0), (2 + 0))
4     data0 = (2, –2, 2, 2), meaning (1, 0, 1, 1)             data1 = (–2, –2, 2, 2), meaning (0, 0, 1, 1)
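The encode and decode steps in the tables above can be reproduced end to end in plain Python (an illustrative sketch; the variable names mirror the tables):

```python
# Chip codes and data bits from the worked example.
code0, code1 = [1, -1], [1, 1]
data0, data1 = [1, 0, 1, 1], [0, 0, 1, 1]

def encode(data, code):
    # Map bits {1, 0} to symbols {+1, -1}, then spread each symbol
    # with the code (the Kronecker product of symbols and code).
    signal = []
    for bit in data:
        symbol = 2 * bit - 1
        signal.extend(symbol * chip for chip in code)
    return signal

signal0 = encode(data0, code0)  # [1, -1, -1, 1, 1, -1, 1, -1]
signal1 = encode(data1, code1)  # [-1, -1, -1, -1, 1, 1, 1, 1]

# Simultaneous transmission: the signals add component by component.
raw = [a + b for a, b in zip(signal0, signal1)]  # the interference pattern

def decode(raw, code):
    # Dot each code-length slice of the raw signal with the code,
    # then threshold: positive -> bit 1, negative -> bit 0.
    n = len(code)
    bits = []
    for i in range(0, len(raw), n):
        corr = sum(r * c for r, c in zip(raw[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

print(decode(raw, code0))  # [1, 0, 1, 1] -- sender0's data
print(decode(raw, code1))  # [0, 0, 1, 1] -- sender1's data
```

Both data streams are recovered exactly, because the two codes are orthogonal.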

Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, –2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). A value of exactly 0 means that the sender did not transmit any data, as in the following example:

Assume signal0 = (1, –1, –1, 1, 1, –1, 1, –1) is transmitted alone. The following

table shows the decode at the receiver:

Step  Decode sender0                                           Decode sender1
0     code0 = (1, –1), signal = (1, –1, –1, 1, 1, –1, 1, –1)   code1 = (1, 1), signal = (1, –1, –1, 1, 1, –1, 1, –1)
1     decode0 = pattern.vector0                                decode1 = pattern.vector1
2     decode0 = ((1, –1), (–1, 1), (1, –1), (1, –1)).(1, –1)   decode1 = ((1, –1), (–1, 1), (1, –1), (1, –1)).(1, 1)
3     decode0 = ((1 + 1), (–1 – 1), (1 + 1), (1 + 1))          decode1 = ((1 – 1), (–1 + 1), (1 – 1), (1 – 1))
4     data0 = (2, –2, 2, 2), meaning (1, 0, 1, 1)              data1 = (0, 0, 0, 0), meaning no data

When the receiver attempts to decode the signal using sender1's code, the data is all zeros; the cross-correlation is therefore zero, and it is clear that sender1 did not transmit any data.
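The absent-sender case above can be checked with the same slice-and-dot operation (a sketch in plain Python; the function name is our own):

```python
# sender0 transmits alone; correlating the received signal against
# each code shows that sender1's correlations are exactly zero.
signal = [1, -1, -1, 1, 1, -1, 1, -1]   # signal0 transmitted alone
code0, code1 = [1, -1], [1, 1]

def correlate(signal, code):
    # Dot each code-length slice of the received signal with the code.
    n = len(code)
    return [sum(s * c for s, c in zip(signal[i:i + n], code))
            for i in range(0, len(signal), n)]

print(correlate(signal, code0))  # [2, -2, 2, 2] -> bits (1, 0, 1, 1)
print(correlate(signal, code1))  # [0, 0, 0, 0]  -> no data from sender1
```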

ASYNCHRONOUS CDMA

See also: Direct-sequence spread spectrum and near-far problem

The previous example of orthogonal Walsh sequences describes how 2 users

can be multiplexed together in a synchronous system, a technique that is

commonly referred to as code division multiplexing (CDM). The set of 4 Walsh

sequences shown in the figure will afford up to 4 users, and in general, an

N×N Walsh matrix can be used to multiplex N users. Multiplexing requires all

of the users to be coordinated so that each transmits their assigned sequence

v (or the complement, –v) so that they arrive at the receiver at exactly the

same time. Thus, this technique finds use in base-to-mobile links, where all of

the transmissions originate from the same transmitter and can be perfectly

coordinated.


On the other hand, the mobile-to-base links cannot be precisely coordinated,

particularly due to the mobility of the handsets, and require a somewhat

different approach. Since it is not mathematically possible to create signature

sequences that are both orthogonal for arbitrarily random starting points

and which make full use of the code space, unique "pseudo-random" or

"pseudo-noise" (PN) sequences are used in asynchronous CDMA systems. A

PN code is a binary sequence that appears random but can be reproduced in

a deterministic manner by intended receivers. These PN codes are used to

encode and decode a user's signal in Asynchronous CDMA in the same

manner as the orthogonal codes in synchronous CDMA (shown in the

example above). These PN sequences are statistically uncorrelated, and the

sum of a large number of PN sequences results in multiple access interference

(MAI) that is approximated by a Gaussian noise process (following the

central limit theorem in statistics). Gold codes are an example of a PN code suitable for this purpose, as there is low correlation between the codes. If all

of the users are received with the same power level, then the variance (e.g.,

the noise power) of the MAI increases in direct proportion to the number of

users. In other words, unlike synchronous CDMA, the signals of other users

will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to the number of users.
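The claim that MAI noise power grows in direct proportion to the number of users can be illustrated with a toy simulation (an illustrative sketch, not a full CDMA model; the chip counts and function name are our own choices):

```python
import random

# With K interfering users sending independent random +/-1 chips, the
# per-chip multiple access interference is a sum of K independent +/-1
# variables, so its variance (noise power) is K -- i.e. it grows in
# direct proportion to the number of users.
random.seed(1)

def mai_variance(num_users, num_chips=10000):
    # Sum num_users independent +/-1 chip streams and measure the
    # sample variance of the resulting interference samples.
    samples = [sum(random.choice((-1, 1)) for _ in range(num_users))
               for _ in range(num_chips)]
    mean = sum(samples) / num_chips
    return sum((s - mean) ** 2 for s in samples) / num_chips

for k in (2, 8, 32):
    print(k, round(mai_variance(k), 1))  # measured variance close to k
```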

All forms of CDMA use spread spectrum process gain to allow receivers to

partially discriminate against unwanted signals. Signals encoded with the

specified PN sequence (code) are received, while signals with different codes

(or the same code but a different timing offset) appear as wideband noise

reduced by the process gain.

Since each user generates MAI, controlling the signal strength is an important

issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA

receiver can in theory completely reject arbitrarily strong signals using

different codes, time slots or frequency channels due to the orthogonality of

these systems. This is not true for Asynchronous CDMA; rejection of

unwanted signals is only partial. If any or all of the unwanted signals are

much stronger than the desired signal, they will overwhelm it. This leads to a

general requirement in any asynchronous CDMA system to approximately

match the various signal power levels as seen at the receiver. In CDMA

cellular, the base station uses a fast closed-loop power control scheme to

tightly control each mobile's transmit power.

ADVANTAGES OF ASYNCHRONOUS CDMA OVER OTHER TECHNIQUES

Efficient Practical Utilization of Fixed Frequency Spectrum

In theory, CDMA, TDMA and FDMA have exactly the same spectral efficiency

but practically, each has its own challenges – power control in the case of

CDMA, timing in the case of TDMA, and frequency generation/filtering in the

case of FDMA.

TDMA systems must carefully synchronize the transmission times of all the

users to ensure that they are received in the correct timeslot and do not

cause interference. Since this cannot be perfectly controlled in a mobile

environment, each timeslot must have a guard-time, which reduces the

probability that users will interfere, but decreases the spectral efficiency.

Similarly, FDMA systems must use a guard-band between adjacent channels,

due to the unpredictable Doppler shift of the signal spectrum because of user

mobility. The guard-bands will reduce the probability that adjacent channels

will interfere, but decrease the utilization of the spectrum.

Flexible Allocation of Resources

Asynchronous CDMA offers a key advantage in the flexible allocation of resources, i.e. the allocation of PN codes to active users. In the case of CDM, TDMA, and FDMA, the number of simultaneous orthogonal codes, time slots and frequency slots respectively is fixed, hence the capacity in terms of


number of simultaneous users is limited. There are a fixed number of

orthogonal codes, timeslots or frequency bands that can be allocated for

CDM, TDMA, and FDMA systems, which remain underutilized due to the

bursty nature of telephony and packetized data transmissions. There is no

strict limit to the number of users that can be supported in an asynchronous

CDMA system, only a practical limit governed by the desired bit error

probability, since the SIR (Signal to Interference Ratio) varies inversely with

the number of users. In a bursty traffic environment like mobile telephony,

the advantage afforded by asynchronous CDMA is that the performance (bit

error rate) is allowed to fluctuate randomly, with an average value

determined by the number of users times the percentage of utilization.

Suppose there are 2N users that talk only half of the time; then 2N users can be accommodated with the same average bit error probability as N users that

talk all of the time. The key difference here is that the bit error probability for

N users talking all of the time is constant, whereas it is a random quantity

(with the same mean) for 2N users talking half of the time.
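The 2N-users argument can be illustrated numerically (a toy sketch; the value of N and the trial count are arbitrary choices of ours):

```python
import random

# 2N users, each independently active half the time: the number of
# simultaneous talkers averages N, but the instantaneous count -- and
# hence the interference level -- fluctuates randomly around that mean.
random.seed(0)

N = 50
active_counts = [sum(random.random() < 0.5 for _ in range(2 * N))
                 for _ in range(1000)]

average = sum(active_counts) / len(active_counts)
print(average)                                  # close to N = 50
print(min(active_counts), max(active_counts))   # random spread around N
```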

In other words, asynchronous CDMA is ideally suited to a mobile network

where large numbers of transmitters each generate a relatively small amount

of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA

systems cannot recover the underutilized resources inherent to bursty traffic

due to the fixed number of orthogonal codes, time slots or frequency

channels that can be assigned to individual transmitters. For instance, if there

are N time slots in a TDMA system and 2N users that talk half of the time,

then half of the time there will be more than N users needing to use more

than N timeslots. Furthermore, it would require significant overhead to

continually allocate and deallocate the orthogonal code, time-slot or

frequency channel resources. By comparison, asynchronous CDMA

transmitters simply send when they have something to say, and go off the air

when they don't, keeping the same PN signature sequence as long as they are

connected to the system.

SPREAD-SPECTRUM CHARACTERISTICS OF CDMA

Most modulation schemes try to minimize the bandwidth of the transmitted signal since

bandwidth is a limited resource. However, spread spectrum techniques use a

transmission bandwidth that is several orders of magnitude greater than the

minimum required signal bandwidth. One of the initial reasons for doing this

was military applications including guidance and communication systems.

These systems were designed using spread spectrum because of its security

and resistance to jamming. Asynchronous CDMA has some level of privacy

built in because the signal is spread using a pseudo-random code; this code

makes the spread spectrum signals appear random or have noise-like

properties. A receiver cannot demodulate this transmission without

knowledge of the pseudo-random sequence used to encode the data. CDMA is

also resistant to jamming. A jamming signal only has a finite amount of power

available to jam the signal. The jammer can either spread its energy over the

entire bandwidth of the signal or jam only part of the entire signal.[4]

CDMA can also effectively reject narrowband interference. Since narrowband

interference affects only a small portion of the spread spectrum signal, it can

easily be removed through notch filtering without much loss of information.

Convolutional encoding and interleaving can be used to assist in recovering

this lost data. CDMA signals are also resistant to multipath fading. Since the

spread spectrum signal occupies a large bandwidth only a small portion of

this will undergo fading due to multipath at any given time. Like the

narrowband interference this will result in only a small loss of data and can

be overcome.

Another reason CDMA is resistant to multipath interference is that the delayed versions of the transmitted pseudo-random codes will have poor

correlation with the original pseudo-random code, and will thus appear as

another user, which is ignored at the receiver. In other words, as long as the

multipath channel induces at least one chip of delay, the multipath signals


will arrive at the receiver such that they are shifted in time by at least one

chip from the intended signal. The correlation properties of the pseudo-

random codes are such that this slight delay causes the multipath to appear

uncorrelated with the intended signal, and it is thus ignored.
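The one-chip-delay argument can be seen in the autocorrelation of a short PN code (a sketch using a length-7 m-sequence; the function name is our own):

```python
# The circular autocorrelation of a length-7 m-sequence is 7 at zero
# shift but -1 at every other shift, so a multipath copy delayed by
# one or more chips looks like weak, noise-like interference.
pn = [1, 1, 1, -1, -1, 1, -1]   # a length-7 m-sequence in +/-1 form

def circular_autocorrelation(seq, shift):
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n] for i in range(n))

for shift in range(7):
    # Prints 7 for shift 0 and -1 for shifts 1..6.
    print(shift, circular_autocorrelation(pn, shift))
```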

Some CDMA devices use a rake receiver, which exploits multipath delay

components to improve the performance of the system. A rake receiver

combines the information from several correlators, each one tuned to a

different path delay, producing a stronger version of the signal than a simple

receiver with a single correlator tuned to the path delay of the strongest

signal.[5]

Frequency reuse is the ability to reuse the same radio channel frequency at

other cell sites within a cellular system. In the FDMA and TDMA systems

frequency planning is an important consideration. The frequencies used in

different cells must be planned carefully to ensure signals from different cells

do not interfere with each other. In a CDMA system, the same frequency can

be used in every cell, because channelization is done using the pseudo-

random codes. Reusing the same frequency in every cell eliminates the need

for frequency planning in a CDMA system; however, planning of the different

pseudo-random sequences must be done to ensure that the received signal

from one cell does not correlate with the signal from a nearby cell.[6]

Since adjacent cells use the same frequencies, CDMA systems have the ability

to perform soft handoffs. Soft handoffs allow the mobile telephone to

communicate simultaneously with two or more cells. The best signal quality

is selected until the handoff is complete. This is different from hard handoffs

utilized in other cellular systems. In a hard handoff situation, as the mobile

telephone approaches a handoff, signal strength may vary abruptly. In

contrast, CDMA systems use the soft handoff, which is undetectable and

provides a more reliable and higher quality signal.[6]

GENERAL PACKET RADIO SERVICE

From Wikipedia, the free encyclopedia

General packet radio service (GPRS) is a packet-oriented mobile data service on the 2G and 3G cellular communication system's global system for mobile communications (GSM). The service is available to users in over 200

countries worldwide. GPRS was originally standardized by European

Telecommunications Standards Institute (ETSI) in response to the earlier

CDPD and i-mode packet switched cellular technologies. It is now maintained

by the 3rd Generation Partnership Project (3GPP).[1][2]

It is a best-effort service, as opposed to circuit switching, where a certain

quality of service (QoS) is guaranteed during the connection. In 2G systems, GPRS provides data rates of 56–114 kbit/s.[3] 2G cellular technology

combined with GPRS is sometimes described as 2.5G, that is, a technology

between the second (2G) and third (3G) generations of mobile telephony.[4] It

provides moderate-speed data transfer, by using unused time division

multiple access (TDMA) channels in, for example, the GSM system. GPRS is

integrated into GSM Release 97 and newer releases.

GPRS usage charging is based on volume of data, either as part of a bundle or on a pay-as-you-use basis. An example of a bundle is up to 5 GB per month for a fixed fee. Usage above the bundle cap is either charged per megabyte or disallowed. Pay-as-you-use charging is typically per megabyte of traffic.

This contrasts with circuit switching data, which is typically billed per minute

of connection time, regardless of whether or not the user transfers data

during that period.
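The bundle-plus-overage charging model described above can be sketched as a simple calculation (a hypothetical example: all fees, caps and rates below are made-up values, not any operator's actual tariff):

```python
# Hypothetical volume-based GPRS charging: a fixed-fee bundle with a
# cap, plus a per-megabyte overage charge above the cap.
BUNDLE_FEE = 10.0          # fixed monthly fee (example value)
BUNDLE_CAP_MB = 5 * 1024   # 5 GB bundle cap, in megabytes
OVERAGE_PER_MB = 0.02      # per-MB charge above the cap (example value)

def monthly_charge(usage_mb):
    overage_mb = max(0, usage_mb - BUNDLE_CAP_MB)
    return BUNDLE_FEE + overage_mb * OVERAGE_PER_MB

print(monthly_charge(3000))            # 10.0 -- within the bundle
print(round(monthly_charge(6144), 2))  # fee plus 1024 MB of overage
```

By contrast, circuit-switched billing would charge per minute of connection time regardless of the data volume transferred.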

CONTENTS

1 Technical overview
   1.1 Services offered
   1.2 Protocols supported
   1.3 Hardware
   1.4 Addressing
2 Coding schemes and speeds
   2.1 Multiple access schemes
   2.2 Channel encoding
   2.3 Multislot Class
      2.3.1 Multislot Classes for GPRS/EGPRS
      2.3.2 Attributes of a multislot class
3 Usability
4 See also
5 References
6 External links

TECHNICAL OVERVIEW

See also: GPRS Core Network

SERVICES OFFERED

GPRS extends the GSM circuit switched data capabilities and makes the

following services possible:

"Always on" internet access

Multimedia messaging service (MMS)

Push to talk over cellular (PoC/PTT)

Instant messaging and presence—wireless village

Internet applications for smart devices through wireless application

protocol (WAP)

Point-to-point (P2P) service: inter-networking with the Internet (IP)

If SMS over GPRS is used, an SMS transmission speed of about 30 SMS

messages per minute may be achieved. This is much faster than using the

ordinary SMS over GSM, whose SMS transmission speed is about 6 to 10 SMS

messages per minute.

PROTOCOLS SUPPORTED

GPRS supports the following protocols:

internet protocol (IP). In practice, built-in mobile browsers use IPv4

since IPv6 is not yet popular.

point-to-point protocol (PPP). In this mode PPP is often not

supported by the mobile phone operator but if the mobile is used as a

modem to the connected computer, PPP is used to tunnel IP to the

phone. This allows an IP address to be assigned dynamically to the

mobile equipment.

X.25 connections. This is typically used for applications like wireless

payment terminals, although it has been removed from the standard.

X.25 can still be supported over PPP, or even over IP, but doing this

requires either a network based router to perform encapsulation or

intelligence built in to the end-device/terminal; e.g., user equipment

(UE).

When TCP/IP is used, each phone can have one or more IP addresses

allocated. GPRS will store and forward the IP packets to the phone even

during handover. TCP handles any packet loss (e.g. due to a radio-noise-induced pause).


HARDWARE

Devices supporting GPRS are divided into three classes:

Class A

Can be connected to GPRS service and GSM service (voice, SMS), using

both at the same time. Such devices are known to be available today.

Class B

Can be connected to GPRS service and GSM service (voice, SMS), but

using only one or the other at a given time. During GSM service (voice

call or SMS), GPRS service is suspended, and then resumed

automatically after the GSM service (voice call or SMS) has concluded.

Most GPRS mobile devices are Class B.

Class C

Are connected to either GPRS service or GSM service (voice, SMS).

Must be switched manually between one or the other service.

A true Class A device may be required to transmit on two different

frequencies at the same time, and thus will need two radios. To get around

this expensive requirement, a GPRS mobile may implement the dual transfer

mode (DTM) feature. A DTM-capable mobile may use simultaneous voice and

packet data, with the network coordinating to ensure that it is not required to

transmit on two different frequencies at the same time. Such mobiles are

considered pseudo-Class A, sometimes referred to as "simple class A". Some

networks are expected to support DTM in 2007.

Huawei E220 3G/GPRS Modem

USB 3G/GPRS modems use a terminal-like interface over USB 1.1, 2.0 and later, and data formats V.42bis and RFC 1144; some models have a connector for an external antenna. Modems can be added as cards (for laptops) or external USB devices similar in shape and size to a computer mouse, or nowadays more like a pendrive.

ADDRESSING

A GPRS connection is established by reference to its access point name

(APN). The APN defines the services such as wireless application protocol

(WAP) access, short message service (SMS), multimedia messaging service

(MMS), and for Internet communication services such as email and World

Wide Web access.

In order to set up a GPRS connection for a wireless modem, a user must

specify an APN, optionally a user name and password, and very rarely an IP

address, all provided by the network operator.

CODING SCHEMES AND SPEEDS

The upload and download speeds that can be achieved in GPRS depend on a

number of factors such as:

the number of BTS TDMA time slots assigned by the operator

the channel encoding used.

the maximum capability of the mobile device expressed as a GPRS

multislot class

MULTIPLE ACCESS SCHEMES


The multiple access methods used in GSM with GPRS are based on frequency

division duplex (FDD) and TDMA. During a session, a user is assigned to one

pair of up-link and down-link frequency channels. This is combined with time

domain statistical multiplexing; i.e., packet mode communication, which

makes it possible for several users to share the same frequency channel. The

packets have constant length, corresponding to a GSM time slot. The down-

link uses first-come first-served packet scheduling, while the up-link uses a

scheme very similar to reservation ALOHA (R-ALOHA). This means that

slotted ALOHA (S-ALOHA) is used for reservation inquiries during a

contention phase, and then the actual data is transferred using dynamic

TDMA with first-come first-served scheduling.

CHANNEL ENCODING

Channel encoding is based on a convolutional code at different code rates and

GMSK modulation defined for GSM. The following table summarises the

options:

Coding scheme   Speed (kbit/s)
CS-1            8.0
CS-2            12.0
CS-3            14.4
CS-4            20.0

The least robust, but fastest, coding scheme (CS-4) is available near a base

transceiver station (BTS), while the most robust coding scheme (CS-1) is

used when the mobile station (MS) is further away from a BTS.

Using CS-4, it is possible to achieve a user speed of 20.0 kbit/s per time slot. However, with this scheme the cell coverage is 25% of normal. CS-1 can

achieve a user speed of only 8.0 kbit/s per time slot, but has 98% of normal

coverage. Newer network equipment can adapt the transfer speed

automatically depending on the mobile location.

In addition to GPRS, there are two other GSM technologies which deliver data

services: circuit-switched data (CSD) and high-speed circuit-switched data

(HSCSD). In contrast to the shared nature of GPRS, these instead establish a

dedicated circuit (usually billed per minute). Some applications such as video

calling may prefer HSCSD, especially when there is a continuous flow of data

between the endpoints.

The following table summarises some possible configurations of GPRS and

circuit switched data services.

Technology      Download (kbit/s)   Upload (kbit/s)   TDMA timeslots allocated
CSD             9.6                 9.6               1+1
HSCSD           28.8                14.4              2+1
HSCSD           43.2                14.4              3+1
GPRS            80.0                20.0              4+1 (Class 8 & 10 and CS-4)
GPRS            60.0                40.0              3+2 (Class 10 and CS-4)
EGPRS (EDGE)    236.8               59.2              4+1 (Class 8, 10 and MCS-9)
EGPRS (EDGE)    177.6               118.4             3+2 (Class 10 and MCS-9)

MULTISLOT CLASS


The multislot class determines the speed of data transfer available in the uplink and downlink directions. It is a value between 1 and 45 which the network uses to allocate radio channels in the uplink and downlink directions. Multislot classes with values greater than 31 are referred to as high multislot classes.

A multislot allocation is represented as, for example, 5+2. The first number is the number of downlink timeslots and the second is the number of uplink timeslots allocated for use by the mobile station. A commonly used value is class 10 for many GPRS/EGPRS mobiles, which uses a maximum of 4 timeslots in the downlink direction and 2 timeslots in the uplink direction, with a maximum of 5 timeslots in use simultaneously. The network will automatically configure the connection for either 3+2 or 4+1 operation depending on the nature of the data transfer.

Some high-end mobiles, usually also supporting UMTS, also support GPRS/EDGE multislot class 32. According to 3GPP TS 45.002 (Release 6),

Table B.2, mobile stations of this class support 5 timeslots in downlink and 3

timeslots in uplink with a maximum number of 6 simultaneously used

timeslots. If data traffic is concentrated in downlink direction the network

will configure the connection for 5+1 operation. When more data is

transferred in the uplink the network can at any time change the

constellation to 4+2 or 3+3. Under the best reception conditions, i.e. when

the best EDGE modulation and coding scheme can be used, 5 timeslots can

carry a bandwidth of 5*59.2 kbit/s = 296 kbit/s. In uplink direction, 3

timeslots can carry a bandwidth of 3*59.2 kbit/s = 177.6 kbit/s.[5]
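The timeslot arithmetic above amounts to multiplying the per-slot rate by the number of allocated slots (a trivial sketch; 59.2 kbit/s is the per-slot MCS-9 figure quoted above, and the function name is our own):

```python
# Peak throughput per direction = timeslots * per-slot data rate.
RATE_PER_SLOT = 59.2  # kbit/s, best EDGE modulation and coding scheme

def throughput(timeslots, rate_per_slot=RATE_PER_SLOT):
    return timeslots * rate_per_slot

print(round(throughput(5), 1))  # 296.0 kbit/s, class-32 downlink (5+1)
print(round(throughput(3), 1))  # 177.6 kbit/s, class-32 uplink maximum
```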

MULTISLOT CLASSES FOR GPRS/EGPRS

Multislot Class   Downlink TS   Uplink TS   Active TS
1                 1             1           2
2                 2             1           3
3                 2             2           3
4                 3             1           4
5                 2             2           4
6                 3             2           4
7                 3             3           4
8                 4             1           5
9                 3             2           5
10                4             2           5
11                4             3           5
12                4             4           5
30                5             1           6
31                5             2           6
32                5             3           6
33                5             4           6
34                5             5           6

ATTRIBUTES OF A MULTISLOT CLASS

Each multislot class identifies the following:

the maximum number of Timeslots that can be allocated on uplink

the maximum number of Timeslots that can be allocated on downlink

the total number of timeslots which can be allocated by the network

to the mobile

the time needed for the mobile phone to perform adjacent cell signal

level measurement and get ready to transmit


the time needed for the MS to get ready to transmit

the time needed for the MS to perform adjacent cell signal level

measurement and get ready to receive

the time needed for the MS to get ready to receive.

The different multislot class specifications are detailed in Annex B of the 3GPP Technical Specification 45.002 (Multiplexing and multiple access on the radio path).

USABILITY

The maximum speed of a GPRS connection offered in 2003 was similar to a modem connection in an analog wire telephone network, about 32–40 kbit/s, depending on the phone used. Latency is very high; round-trip time (RTT) is typically about 600–700 ms and often reaches 1 s. GPRS is typically

prioritized lower than speech, and thus the quality of connection varies

greatly.

Devices with latency/RTT improvements (via, for example, the extended UL

TBF mode feature) are generally available. Also, network upgrades of

features are available with certain operators. With these enhancements the

active round-trip time can be reduced, resulting in significant increase in

application-level throughput speeds.

FM BROADCASTING IN INDIA


In the mid-nineties, when India first experimented with private FM

broadcasts, the small tourist destination of Goa was the fifth place in this

country of one billion where private players got FM slots. The other four

centres were the big metro cities: Delhi, Mumbai, Kolkata and Chennai. These

were followed by stations in Bangalore, Hyderabad, Jaipur and Lucknow.

Indian policy currently states that these broadcasters are assessed a One-

Time Entry Fee (OTEF), for the entire license period of 10 years. Under the

Indian accounting system, this amount is amortised over the 10 year period

at 10% per annum. Annual license fee for private players is either 4% of

revenue share or 10% of Reserve Price, whichever is higher.

Earlier, India's attempts to privatise its FM channels ran into rough weather

when private players bid heavily and most could not meet their

commitments to pay the government the amounts they owed.

CONTENTS

1 Content
2 FM stations in New Delhi
3 FM stations in Mumbai
4 FM stations in Bangalore
5 FM stations in Chennai
6 Market view
7 List of FM radio stations in India
8 Current allocation process

CONTENT

News is not permitted on private FM, although the Federal Minister for Information and Broadcasting (I&B Ministry, Govt. of India) says this may be reconsidered in two to three years. Nationally, many of the current FM players, including the Times of India, Hindustan Times, Mid-Day, and the BBC, are essentially newspaper chains or media houses, and they are already making a strong pitch for news on FM.

FM STATIONS IN NEW DELHI

AIR FM Rainbow / FM-1 (107.1 MHz)

AIR FM Gold /FM-2 (Early Morning till Midnight) (106.4 MHz)

AIR Rajdhani/Gyanvani Channel (Non-Regular broadcast) (105.6

MHz)

Meow FM (104.8 MHz)

Fever 104 (104 MHz)

Radio Mirchi FM (98.3 MHz)

Hit FM (95 MHz)

Radio One FM (94.3 MHz)

Red FM (93.5 MHz)

Big FM (92.7 MHz)

Radio City (91.1 MHz)

Delhi University Educational Radio (Available only in University area)

(DU Radio FM) (90.4 MHz)

FM STATIONS IN MUMBAI

Radio City 91.1

Big FM 92.7

Red FM 93.5

Radio One 94.3

Win FM 94.6 (the station is closed)

Radio Mirchi 98.3

AIR FM Gold 100.7

Fever 104 FM 104.0

Meow 104.8

AIR FM Rainbow 107.1

Mumbai One

Gyan Vani

Radio MUST

Radio Jamia 90.4 FM

FM STATIONS IN BANGALORE

Main article: List of FM radio stations in Bangalore

Radio City 91.1 FM - Kannada

Radio Indigo 91.9 FM - English

Big 92.7 FM - Kannada

Red FM 93.5 FM - Kannada

FM STATIONS IN CHENNAI

AIR FM Rainbow

AIR FM Gold

Hello FM (106.4)

Suryan FM

Aaha FM

Big FM

Radio City FM

Radio Mirchi FM

Radio-1 FM

MARKET VIEW

India's new private FM channels could also change the advertising scenario.

Traditionally, radio accounts for 7% to 8% of advertiser expenditures around

the world. In India, it is less than 2% at present.

LIST OF FM RADIO STATIONS IN INDIA

See also: List of FM radio stations in India

CURRENT ALLOCATION PROCESS

In FM Phase II — the latest round of the long-delayed opening up of private FM in India — some 338 frequencies were offered, of which about 237 were sold. The government may go for rebidding of unsold frequencies

quite soon. In Phase III of FM licensing, smaller towns and cities will be

opened up for FM radio.

Reliance and South Asia FM (Sun group) bid for most of the 91 cities,

although they were allowed only 15% of the total allocated frequencies.

Between them, they have had to surrender over 40 licenses.

LIST OF AMATEUR RADIO FREQUENCY BANDS IN INDIA


Antennas at a ham operator's station.

Amateur radio or ham radio is a hobby that is practised by over 16,000

licenced users in India.[1] Licences are granted by the Wireless Planning and Coordination Wing (WPC), a branch of the Ministry of Communications

and Information Technology. In addition, the WPC allocates frequency

spectrum in India. The Indian Wireless Telegraphs (Amateur Service) Rules,

1978 lists five licence categories:[2]

To obtain a licence in the first four categories, candidates must pass the

Amateur Station Operator's Certificate examination conducted by the WPC.

This exam is held monthly in Delhi, Mumbai, Kolkata and Chennai, every two

months in Ahmedabad, Nagpur and Hyderabad, and every four months in

some smaller cities.[3] The examination consists of two 50-mark written sections (radio theory and practice, and regulations) and a practical test consisting of a demonstration of Morse code proficiency in sending and receiving.[4] After passing the examination, the candidate must clear a police

interview. After clearance, the WPC grants the licence along with the user-

chosen call sign. This procedure can take up to one year.[5] This licence is

valid for up to five years.[6]

Each licence category has certain privileges allotted to it, including the allotment of frequencies, output power, and emission modes. This article lists the frequencies allotted to the various classes, and the corresponding emission modes and input DC power.

CONTENTS

1 Allotted spectrum
2 Emission designations
3 Licence categories
   3.1 Short Wave Listener
   3.2 Grade II Restricted
   3.3 Grade II
   3.4 Grade I
   3.5 Advanced Grade
4 See also
5 Notes
6 References

ALLOTTED SPECTRUM

The following table lists the frequencies that amateur radio operators in

India can operate on.

Band refers to the International Telecommunication Union (ITU)

radio band designation

Frequency is measured in megahertz

Wavelength is measured in metres and centimetres

Type refers to the radio frequency classification

Band | Frequency (MHz) | Wavelength | Type
6    | 1.820–1.860     | 160 m      | MF
7    | 3.500–3.700     | 80 m       | HF
7    | 3.890–3.900     | 80 m       | HF
7    | 7.000–7.100     | 40 m       | HF
7    | 14.000–14.350   | 20 m       | HF
7    | 18.068–18.168   | 17 m       | HF
7    | 21.000–21.450   | 15 m       | HF
7    | 24.890–24.990   | 12 m       | HF
7    | 28.000–29.700   | 10 m       | HF
8    | 144–146         | 2 m        | VHF
9    | 434–438         | 70 cm      | UHF
9    | 1260–1300       | 23 cm      | UHF
10   | 3300–3400       | 9 cm       | SHF
10   | 5725–5840       | 5 cm       | SHF
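The wavelength column is simply the free-space wavelength λ = c/f, rounded to the nearest conventional band name. As a quick illustration (my own sketch, not part of the WPC rules), the label can be derived from the frequency in MHz:

```python
# Illustrative sketch: derive the approximate wavelength label used in the
# table from a frequency in MHz, using lambda = c / f with c ~ 3e8 m/s,
# so metres ~ 300 / f[MHz].
def wavelength_label(freq_mhz: float) -> str:
    metres = 300.0 / freq_mhz
    if metres >= 1:
        return f"{metres:.0f} m"
    return f"{metres * 100:.0f} cm"

print(wavelength_label(1.84))  # ~163 m, the "160 m" band
print(wavelength_label(145))   # 2 m
print(wavelength_label(436))   # ~69 cm, the "70 cm" band
```

The conventional names (160 m, 70 cm) are historical round figures, which is why the computed values only approximate them.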

EMISSION DESIGNATIONS

Main article: Types of radio emissions

The International Telecommunication Union uses an internationally agreed system for classifying radio frequency signals. Each type of radio emission is classified according to its bandwidth, method of modulation, nature of the modulating signal, and type of information transmitted on the carrier signal. The classification is based on characteristics of the signal, not on the transmitter used.

An emission designation has the form BBBB 123 45, where BBBB is the bandwidth of the signal, 1 is a letter indicating the type of modulation used, 2 is a digit representing the type of modulating signal, 3 is a letter corresponding to the type of information transmitted, 4 is a letter indicating the practical details of the transmitted information, and 5 is a letter representing the method of multiplexing. The 4 and 5 fields are optional. For example, an emission designation would read as 500H A3E, where 500H translates to 500 Hz, and A3E is the emission mode as permitted.
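The field layout above is mechanical enough to parse programmatically. The following sketch (my own illustration of the ITU format as described here, not an official WPC tool) splits a designator such as 500H A3E into its bandwidth and classification fields; the multiplier letter in the bandwidth (H = Hz, K = kHz, M = MHz, G = GHz) doubles as the decimal point:

```python
import re

# Multiplier letters used in the bandwidth field of an emission designator.
MULTIPLIERS = {"H": 1, "K": 1e3, "M": 1e6, "G": 1e9}

def parse_designator(designator: str) -> dict:
    """Parse an ITU emission designator, e.g. '500HA3E' or '16K0F3E'."""
    d = designator.replace(" ", "").upper()
    m = re.match(r"^(\d+)([HKMG])(\d*)([A-Z])(\d)([A-Z])([A-Z]?)([A-Z]?)$", d)
    if not m:
        raise ValueError(f"not a valid emission designator: {designator!r}")
    whole, mult, frac, mod, signal, info, detail, mux = m.groups()
    # The multiplier letter marks the decimal point: 16K0 -> 16.0 kHz.
    bandwidth_hz = float(f"{whole}.{frac or 0}") * MULTIPLIERS[mult]
    return {
        "bandwidth_hz": bandwidth_hz,
        "modulation": mod,            # field 1, e.g. A = double-sideband AM
        "signal": signal,             # field 2, e.g. 3 = single analogue channel
        "information": info,          # field 3, e.g. E = telephony (audio)
        "details": detail or None,    # field 4 (optional)
        "multiplexing": mux or None,  # field 5 (optional)
    }

print(parse_designator("500H A3E"))
# bandwidth_hz 500.0, modulation 'A', signal '3', information 'E'
print(parse_designator("16K0F3E")["bandwidth_hz"])  # 16000.0
```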

The WPC has authorized the following emission modes:[7]

Emission | Details
A1A | Single channel containing digital information, no subcarrier; aural telegraphy, intended to be decoded by ear (e.g. Morse code)
A2A | Single channel containing digital information, using a subcarrier; aural telegraphy, intended to be decoded by ear (e.g. Morse code)
A3E | Double-sideband amplitude modulation (AM radio); single channel containing analogue information
A3X | Single channel containing analogue information; none of the other listed types of emission
A3F[nb 1] | Single channel containing analogue information; video (television signals)
F1B | Frequency modulation; single channel containing digital information, no subcarrier; electronic telegraphy, intended to be decoded by machine (radio teletype and digital modes)
F2B | Frequency modulation; single channel containing digital information, using a subcarrier; electronic telegraphy, intended to be decoded by machine (radio teletype and digital modes)
F3E | Frequency modulation; single channel containing analogue information; telephony (audio)
F3C | Frequency modulation; single channel containing analogue information; facsimile (still images)
H3E | Single-sideband with full carrier; single channel containing analogue information; telephony (audio)
J3E | Single-sideband with suppressed carrier (e.g. shortwave utility and amateur stations); single channel containing analogue information; telephony (audio)
R3E | Single-sideband with reduced or variable carrier; single channel containing analogue information; telephony (audio)

LICENCE CATEGORIES

SHORT WAVE LISTENER

The Short Wave Listener's Amateur Wireless Telegraph Station Licence allows listening on all amateur radio frequency bands, but prohibits transmission. The minimum age is 12.[8]

GRADE II RESTRICTED

The Restricted Amateur Wireless Telegraph Station Licence requires a minimum score of 40% in each section of the written examination, and 50% overall.[9] The minimum age is 12 years.[8] The licence allows a user to make terrestrial radiotelephony (voice) transmissions in two frequency bands. The maximum power allowed is 10 W.[2]
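The per-section and overall minimums quoted for the written examination can be expressed as a simple check. This is an illustrative sketch: the function name and structure are mine, and the thresholds are only those stated in this article (Grade II shares the Grade II Restricted scores; Grade I requires 50% per section and 55% overall):

```python
# Illustrative sketch (not an official WPC tool) of the written-exam
# pass criteria: minimum percentage in each section, plus a minimum
# overall percentage across the two 50-mark sections.
THRESHOLDS = {
    # grade: (minimum % in each section, minimum % overall)
    "Grade II Restricted": (40, 50),
    "Grade II": (40, 50),
    "Grade I": (50, 55),
}

def passes_written_exam(grade: str, theory_pct: float, regulations_pct: float) -> bool:
    per_section, overall = THRESHOLDS[grade]
    sections = (theory_pct, regulations_pct)
    return (all(s >= per_section for s in sections)
            and sum(sections) / len(sections) >= overall)

print(passes_written_exam("Grade II Restricted", 45, 60))  # True: both >= 40, mean 52.5 >= 50
print(passes_written_exam("Grade I", 48, 90))              # False: 48 < 50 per-section minimum
```

Note that the Morse code practical and, for the Advanced Grade, the advanced electronics paper are separate requirements not captured here.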

Band | Frequency (MHz)  | Wavelength | Type | Emission                | Power (W)
8    | 144–146          | 2 m        | VHF  | A3E, H3E, J3E, R3E, F3E | 10[nb 2]
9    | 434–438[nb 3]    | 70 cm      | UHF  | A3E, H3E, J3E, R3E, F3E | 10[nb 2]

GRADE II

The Amateur Wireless Telegraph Station Licence, Grade–II requires the same scores as the Grade II Restricted, and in addition a demonstration of proficiency in sending and receiving Morse code at five words a minute.[9] The minimum age is 12 years.[8] The licence allows the user to make radiotelegraphy (Morse code) and radiotelephony transmissions in 11 frequency bands. The maximum power allowed is 50 W.

A Grade II licence holder is authorized to use radiotelephony emissions on frequency bands below 30 MHz only on submission of proof that 100 contacts have been made with other amateur operators using CW (Morse code).[2]

All bands share the same permitted emission modes: A1A, A2A, A3E, H3E, J3E, R3E.

Band | Frequency (MHz)     | Wavelength | Type | Power (W)
6    | 1.820–1.860[nb 4]   | 160 m      | MF   | 50
7    | 3.500–3.700[nb 4]   | 80 m       | HF   | 50
7    | 3.890–3.900         | 80 m       | HF   | 50
7    | 7.000–7.100         | 40 m       | HF   | 50
7    | 14.000–14.350       | 20 m       | HF   | 50
7    | 18.068–18.168[nb 5] | 17 m       | HF   | 50
7    | 21.000–21.450       | 15 m       | HF   | 50
7    | 24.890–24.990       | 12 m       | HF   | 50
7    | 28.000–29.700       | 10 m       | HF   | 50
8    | 144–146             | 2 m        | VHF  | 10[nb 2]
9    | 434–438[nb 3]       | 70 cm      | UHF  | 10[nb 2]

GRADE I

The Amateur Wireless Telegraph Station Licence, Grade–I requires a minimum of 50% in each section of the written examination, 55% overall, and a demonstration of proficiency in sending and receiving Morse code at 12 words a minute.[9] The minimum age is 14 years.[8] The licence allows a user to make radiotelegraphy and radiotelephony transmissions in 14 frequency bands. The maximum power allowed is 150 W. In addition, satellite communication, facsimile, and television modes are permitted.[2]

All bands share the same permitted emission modes: A1A, A2A, A3E, H3E, R3E, J3E, F1B, F2B, F3E, F3C, A3X, A3F.

Band | Frequency (MHz)        | Wavelength | Type | Power (W)
6    | 1.820–1.860[nb 4]      | 160 m      | MF   | 150
7    | 3.500–3.700[nb 4]      | 80 m       | HF   | 150
7    | 3.890–3.900            | 80 m       | HF   | 150
7    | 7.000–7.100            | 40 m       | HF   | 150
7    | 14.000–14.350          | 20 m       | HF   | 150
7    | 18.068–18.168[nb 5]    | 17 m       | HF   | 150
7    | 21.000–21.450          | 15 m       | HF   | 150
7    | 24.890–24.990          | 12 m       | HF   | 150
7    | 28.000–29.700          | 10 m       | HF   | 150
8    | 144–146                | 2 m        | VHF  | 25[nb 2]
9    | 434–438[nb 3]          | 70 cm      | UHF  | 25[nb 2]
9    | 1260–1300[nb 3][nb 6]  | 23 cm      | UHF  | 25[nb 2]
10   | 3300–3400[nb 3]        | 9 cm       | SHF  | 25[nb 2]
10   | 5725–5840[nb 3]        | 5 cm       | SHF  | 25[nb 2]

ADVANCED GRADE

The Advanced Amateur Wireless Telegraph Station Licence is the highest licence category. To obtain the licence, an applicant must be 18 years of age,[8] pass an advanced electronics examination along with the Rules and Regulations section, and demonstrate Morse code sending and receiving at 12 words per minute.[9] The maximum power permitted is 400 W in selected sub-bands.[2]

All bands share the same permitted emission modes: A1A, A2A, A3E, H3E, R3E, J3E, F1B, F2B, F3E, F3C, A3X, A3F.

Band | Frequency (MHz)        | Wavelength | Type | Power (W)
6    | 1.820–1.860[nb 4]      | 160 m      | MF   | 150
7    | 3.500–3.700[nb 4]      | 80 m       | HF   | 150
7    | 3.890–3.900            | 80 m       | HF   | 150
7    | 7.000–7.100            | 40 m       | HF   | 150
7    | 14.000–14.350          | 20 m       | HF   | 150
7    | 18.068–18.168[nb 5]    | 17 m       | HF   | 150
7    | 21.000–21.450          | 15 m       | HF   | 150
7    | 24.890–24.990          | 12 m       | HF   | 150
7    | 28.000–29.700          | 10 m       | HF   | 150
8    | 144–146                | 2 m        | VHF  | 50
9    | 434–438[nb 3]          | 70 cm      | UHF  | 25[nb 2]
9    | 1260–1300[nb 3][nb 6]  | 23 cm      | UHF  | 25[nb 2]
10   | 3300–3400[nb 3]        | 9 cm       | SHF  | 25[nb 2]
10   | 5725–5840[nb 3]        | 5 cm       | SHF  | 25[nb 2]

400 W sub-bands

All sub-bands share the same permitted emission modes: A1A, A2A, A3E, H3E, R3E, J3E, F1B, F2B, F3E, F3C, A3X, A3F.

Band | Frequency (MHz)    | Wavelength | Type | Power (W)
7    | 3.520–3.540[nb 4]  | 80 m       | HF   | 400
7    | 3.890–3.900        | 80 m       | HF   | 400
7    | 7.050–7.100        | 40 m       | HF   | 400
7    | 14.050–14.150      | 20 m       | HF   | 400
7    | 14.220–14.320      | 20 m       | HF   | 400
7    | 21.100–21.400      | 15 m       | HF   | 400