
PREFACE

Every day, innumerable technologies are invented and developed all over the world in many fields. At the same time, many technologies have also ended in failure. One who is born has to die one day; yet during this short span of life, the thirst for new technologies and developments has not been quenched. As a result, many of the latest technologies have been introduced.

This magazine, INFOLINE, aims to provide the basic, necessary information about the latest technologies and to create awareness in its readers.

Your comments and valuable suggestions for improvement, from students, teachers and friends alike, are warmly welcomed and will be gratefully acknowledged.

Infoline Team


Acknowledgement

We wish to thank Thiru A. Venkatachalam B.Sc., Correspondent, Kongu Arts and Science College, Erode, and our Management for their support in publishing this magazine. We also wish to thank Dr. K.R. Subramanian M.A., M.Phil., B.Ed., Ph.D., Principal (In-charge), Kongu Arts and Science College, Erode, who provided considerable support to us during this effort. We proudly thank our Chief Editor, Staff Advisor, staff members and the students of the Department of Computer Technology and Information Technology for their guidance and suggestions in completing this magazine.


INFOLINE

TECHNOLOGY NAVIGATOR

Executive Committee

CHIEF PATRON : Thiru A.Venkatachalam B.Sc.,

PATRON : Dr. K.R Subramanian M.A., M.Phil., B.Ed., Ph.D.,

EDITOR IN CHIEF : S.Muruganantham M.Sc., M.Phil.,

Staff Advisor:

Ms.C.Uma M.Sc.,

Assistant Professor, Department of CT& IT.

Staff Editor:

R.Rooba M.Sc., M.Phil.,

Assistant Professor, Department of CT & IT.

Student Editors:

Tamilarasi.P III B.Sc(IT)

Suganya.S.V III B.Sc(IT)

Gowtham.T III B.Sc(IT)

Suresh.N III B.Sc(CT)

Divya.S.G III B.Sc(CT)


Organizing Members:

Kiruthika.T II B.Sc(IT)

Santhiya.P.V II B.Sc(IT)

Dinesh.C II B.Sc(IT)

Dinesh Kumar.S II B.Sc(IT)

Ramya.R II B.Sc(CT)

Ramesh Kumar.R II B.Sc(CT)

Srikavi.S II B.Sc(CT)

Thangadurai.D II B.Sc(CT)


CONTENTS

Preface

Acknowledgement

Executive Committee

BLUETOOTH

WI-FI ALLIANCE

LIQUID CRYSTAL DISPLAY

MODEM

MULTIMEDIA

PLASMA DISPLAY


ARTICLES



Bluetooth

Bluetooth is a proprietary open wireless technology standard for exchanging data over short distances (using short wavelength radio transmissions) from fixed and mobile devices, creating personal area networks (PANs) with high levels of security. Created by telecoms vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization. Today Bluetooth is managed by the Bluetooth Special Interest Group.

Name and logo

The word Bluetooth is an anglicised version of the Scandinavian Blåtand/Blåtann, the epithet of the tenth-century king Harald I of Denmark and parts of Norway, who united dissonant Danish tribes into a single kingdom. The implication is that Bluetooth does the same with communications protocols, uniting them into one universal standard.

The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.

Implementation

Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands (1 MHz each) in the range 2402-2480 MHz. This range is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
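The channel arithmetic above is simple enough to sketch. This Python fragment (illustrative only, not part of any Bluetooth stack) maps an RF channel index to its carrier frequency using the 1 MHz spacing described in the text:

```python
def channel_frequency_mhz(k: int) -> int:
    """Return the carrier frequency in MHz for BR/EDR RF channel k."""
    if not 0 <= k <= 78:
        raise ValueError("Bluetooth BR/EDR defines channels 0..78")
    # Channel k sits k MHz above the bottom of the hopping range.
    return 2402 + k

# All 79 hop channels span 2402..2480 MHz inside the 2.4 GHz ISM band.
frequencies = [channel_frequency_mhz(k) for k in range(79)]
print(frequencies[0], frequencies[78], len(frequencies))  # 2402 2480 79
```

The hop sequence visits these channels pseudo-randomly, up to 1600 hops per second (one per slot), which is what gives frequency-hopping its resilience to narrowband interference.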

Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available; since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where a gross data rate of 1 Mbit/s is possible. The term enhanced data rate (EDR) is used to describe the π/4-DQPSK and 8DPSK schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a "BR/EDR radio".

Bluetooth is a packet-based protocol with a master-slave structure. One master may communicate with up to 7 slaves in a piconet; all devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs; two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long but in all cases the master transmit will begin in even slots and the slave transmit in odd slots.
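The slot arithmetic above can be sketched as follows (an illustrative Python fragment; a real controller derives all of this from the master's clock):

```python
SLOT_US = 625  # one slot = two 312.5 microsecond clock ticks

def transmitter(slot: int) -> str:
    """Single-slot case: the master transmits in even slots, the slave in odd."""
    return "master" if slot % 2 == 0 else "slave"

def last_slot(start_slot: int, length_slots: int) -> int:
    """Packets are 1, 3 or 5 slots long; return the final slot they occupy."""
    if length_slots not in (1, 3, 5):
        raise ValueError("Bluetooth packets span 1, 3 or 5 slots")
    return start_slot + length_slots - 1

# A 3-slot master packet starting in slot 0 occupies slots 0..2, so the
# slave's reply begins in the next odd slot, slot 3.
print(transmitter(0), last_slot(0, 3))  # master 2
```

Because packet lengths are odd, a master packet that starts in an even slot always ends in an even slot, preserving the even/odd transmit pattern.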

Bluetooth provides a secure way to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.

The Bluetooth specifications are developed and licensed by the Bluetooth Special Interest Group (SIG). The Bluetooth SIG consists of more than 13,000 companies in the areas of telecommunication, computing, networking, and consumer electronics.

To be marketed as a Bluetooth device, a product must be qualified to standards defined by the SIG.

Communication and connection

A master Bluetooth device can communicate with up to seven devices in a piconet. The devices can switch roles, by agreement, and the slave can become the master at any time.

At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion.


The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices serve as bridges, simultaneously playing the master role in one piconet and the slave role in another.

Many USB Bluetooth adapters or "dongles" are available, some of which also include an IrDA adapter. Older (pre-2003) Bluetooth dongles, however, have limited capabilities, offering only the Bluetooth Enumerator and a less-powerful Bluetooth Radio incarnation. Such devices can link computers via Bluetooth over a distance of up to 100 meters, but they do not offer as many services as modern adapters do.

Uses

Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range (power-class-dependent: 100 m, 10 m and 1 m, but ranges vary in practice) based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in line of sight of each other. In most cases the effective range of class 2 devices is extended if they connect to a class 1 transceiver, compared to a pure class 2 network. This is accomplished by the higher sensitivity and transmission power of class 1 devices.

Version              Maximum data rate

Version 1.2          1 Mbit/s

Version 2.0 + EDR    3 Mbit/s

Version 3.0 + HS     24 Mbit/s

While the Bluetooth Core Specification does mandate minimums for range, the range of the technology is application specific and is not limited. Manufacturers may tune their implementations to the range needed to support individual use cases.

Bluetooth profiles

To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles, which are definitions of possible applications and specify general behaviors that Bluetooth enabled devices use to communicate with other Bluetooth devices. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.

List of applications

A typical Bluetooth mobile phone headset.

• Wireless control of and communication between a mobile phone and a hands-free headset. This was one of the earliest applications to become popular.

• Wireless networking between PCs in a confined space and where little bandwidth is required.

• Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.

• Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX.

• Replacement of traditional wired serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.

• For controls where infrared was traditionally used.

• For low-bandwidth applications where higher USB bandwidth is not required and a cable-free connection is desired.

• Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.

• Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.

• Three seventh-generation game consoles, Nintendo's Wii and Sony's PlayStation 3 and PSP Go, use Bluetooth for their respective wireless controllers.

• Dial-up Internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem, like the Novatel MiFi.


• Short range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.

• Allowing a DECT phone to ring and answer calls on behalf of a nearby cell phone.

• Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "nodes" or "tags" attached to, or embedded in, the objects tracked, and "readers" that receive and process the wireless signals from these tags to determine their locations.

• Tracking livestock and detainees. According to a leaked diplomatic cable, King Abdullah of Saudi Arabia suggested "implanting detainees with an electronic chip containing information about them and allowing their movements to be tracked with Bluetooth. This was done with horses and falcons, the King said."

• Personal security application on mobile phones for theft prevention. The protected item has a Bluetooth marker (e.g. a headset) that is monitored continuously by the security application. If the connection is lost (the marker is out of range), an alarm is raised. The first known implementation of this security application of Bluetooth is BluCop, which was published in December 2010.

Bluetooth vs. Wi-Fi IEEE 802.11 in networking

Bluetooth and Wi-Fi have many applications: setting up networks, printing, or transferring files.

Wi-Fi is intended for resident equipment and its applications. The category of applications is outlined as WLAN, the wireless local area networks. Wi-Fi is intended as a replacement for cabling for general local area network access in work areas.

Bluetooth is intended for non-resident equipment and its applications. The category of applications is outlined as the wireless personal area network (WPAN). Bluetooth is a replacement for cabling in a variety of personally carried applications in any ambiance, and can also support fixed-location applications such as smart energy functionality in the home (thermostats, etc.).

Wi-Fi is a wireless version of a traditional Ethernet network, and requires configuration to set up shared resources, transmit files, and set up audio links (for example, headsets and hands-free devices). Wi-Fi uses the same radio frequencies as Bluetooth, but with higher power, resulting in a faster connection and better range from the base station. The nearest equivalents in Bluetooth are the DUN profile, which allows devices to act as modem interfaces, and the PAN profile, which allows for ad hoc networking.

Bluetooth devices

A Bluetooth USB dongle with a 100 m range. The MacBook Pro, shown, also has a built-in Bluetooth adaptor.

Bluetooth exists in many products, such as the iPod Touch, Lego Mindstorms NXT, PlayStation 3, PSP Go, telephones, the Nintendo Wii, and some high-definition headsets, modems, and watches. The technology is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).

Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier because more of the security, network address and permission configuration can be automated than with many other network types.

Computer requirements

A typical Bluetooth USB dongle. An internal notebook Bluetooth card (14 × 36 × 4 mm).


A personal computer that does not have embedded Bluetooth can be used with a Bluetooth adapter or "dongle" that will enable the PC to communicate with other Bluetooth devices (such as mobile phones, mice and keyboards). While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others will require an external one in the form of a dongle.

Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth allows multiple devices to communicate with a computer over a single adapter.

Operating system support

Apple has supported Bluetooth since Mac OS X v10.2, which was released in 2002.

For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases have native support for Bluetooth 1.1, 2.0 and 2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 support Bluetooth 2.1+EDR. Windows 7 supports Bluetooth 2.1+EDR and Extended Inquiry Response (EIR).

The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack which may support more profiles or newer versions of Bluetooth. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring the Microsoft stack to be replaced.

Linux has two popular Bluetooth stacks, BlueZ and Affix. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. The Affix stack was developed by Nokia. FreeBSD has featured Bluetooth support since its 5.0 release, and NetBSD since its 4.0 release; NetBSD's Bluetooth stack has been ported to OpenBSD as well.

Mobile phone requirements

A Bluetooth-enabled mobile phone is able to pair with many devices. To ensure the broadest support of feature functionality together with legacy device support, the Open Mobile Terminal Platform (OMTP) forum has published a recommendations paper, entitled "Bluetooth Local Connectivity".

Specifications and features

The Bluetooth specification was developed in 1994 by Jaap Haartsen and Sven Mattisson, who were working for Ericsson in Lund, Sweden. The specification is based on frequency-hopping spread spectrum technology.

The specifications were formalized by the Bluetooth Special Interest Group (SIG). The SIG was formally announced on May 20, 1998. Today it has a membership of over 13,000 companies worldwide. It was established by Ericsson, IBM, Intel, Toshiba, Motorola and Nokia, and later joined by many other companies.

Bluetooth v1.0 and v1.0B

Versions 1.0 and 1.0B had many problems, and manufacturers had difficulty making their products interoperable. Versions 1.0 and 1.0B also included mandatory Bluetooth hardware device address (BD_ADDR) transmission in the Connecting process (rendering anonymity impossible at the protocol level), which was a major setback for certain services planned for use in Bluetooth environments.

Bluetooth v1.1

• Ratified as IEEE Standard 802.15.1-2002.

• Many errors found in the 1.0B specifications were fixed.

• Added support for non-encrypted channels.

• Received Signal Strength Indicator (RSSI).


Bluetooth v1.2

This version is backward compatible with 1.1 and the major enhancements include the following:

• Faster connection and discovery.

• Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence.

• Higher transmission speeds in practice, up to 721 kbit/s, than in 1.1.

• Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better support for concurrent data transfer.

• Host Controller Interface (HCI) support for three-wire UART.

• Ratified as IEEE Standard 802.15.1-2005.

• Introduced Flow Control and Retransmission Modes for L2CAP.

Bluetooth v2.0 + EDR

This version of the Bluetooth Core Specification was released in 2004 and is backward compatible with the previous version 1.2. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The nominal rate of EDR is about 3 Mbit/s, although the practical data transfer rate is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying (PSK) modulation with two variants, π/4-DQPSK and 8DPSK. EDR can provide lower power consumption through a reduced duty cycle.
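The BR and EDR gross rates follow directly from the 1 megasymbol-per-second signalling rate and the number of bits each modulation carries per symbol, which a short sketch makes explicit (illustrative Python, not a radio implementation):

```python
SYMBOL_RATE_MSPS = 1  # BR/EDR signals at 1 megasymbol per second

BITS_PER_SYMBOL = {
    "GFSK": 1,        # basic rate
    "pi/4-DQPSK": 2,  # EDR, 2 bits per symbol
    "8DPSK": 3,       # EDR, 3 bits per symbol
}

def gross_rate_mbps(modulation: str) -> int:
    """Gross air rate in Mbit/s = symbol rate x bits carried per symbol."""
    return SYMBOL_RATE_MSPS * BITS_PER_SYMBOL[modulation]

print(gross_rate_mbps("GFSK"), gross_rate_mbps("8DPSK"))  # 1 3
```

The gap between the 3 Mbit/s gross rate and the 2.1 Mbit/s practical rate quoted above is protocol overhead: packet headers, access codes and inter-packet timing all consume air time.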

The specification is published as "Bluetooth v2.0 + EDR" which implies that EDR is an optional feature. Aside from EDR, there are other minor improvements to the 2.0 specification, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.

Bluetooth v2.1 + EDR

Bluetooth Core Specification Version 2.1 + EDR is fully backward compatible with 1.2, and was adopted by the Bluetooth SIG on July 26, 2007.

The headline feature of 2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security. See the section on Pairing below for more details.

Version 2.1 also brings various other improvements, including Extended Inquiry Response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection, and sniff subrating, which reduces power consumption in low-power mode.

Bluetooth v3.0 + HS

Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on April 21, 2009. Bluetooth 3.0+HS supports theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link. Its main new feature is AMP (Alternate MAC/PHY), the addition of 802.11 as a high speed transport. Two technologies had been anticipated for AMP: 802.11 and UWB, but UWB is missing from the specification.

The High Speed part of the specification is not mandatory, and hence only devices bearing the "+HS" suffix will actually support Bluetooth-over-Wi-Fi high-speed data transfer. A Bluetooth 3.0 device without the HS suffix will not support High Speed, and needs only to support Unicast Connectionless Data (UCD), as noted in the Bluetooth 3.0+HS specification, Vol 0, section 4.1, Specification Naming Conventions.


Unicast connectionless data

Permits service data to be sent without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data.

Enhanced Power Control

Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset.

Bluetooth v4.0

On June 12, 2007, Nokia and the Bluetooth SIG announced that Wibree would become part of the Bluetooth specification as an ultra-low-power Bluetooth technology.

On December 17, 2009, the Bluetooth SIG adopted Bluetooth low energy technology as the hallmark feature of version 4.0. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned.

On April 21, 2010, the Bluetooth SIG completed the Bluetooth Core Specification version 4.0, which includes Classic Bluetooth, Bluetooth high speed and Bluetooth low energy protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.

Bluetooth low energy

Bluetooth low energy is an alternative to the Bluetooth standard that was introduced in Bluetooth v4.0 and is aimed at very low power applications running off a coin cell. It allows two types of implementation, dual-mode and single-mode. In a dual-mode implementation, Bluetooth low energy functionality is integrated into an existing Classic Bluetooth controller. The resulting architecture shares much of Classic Bluetooth's existing radio and functionality, resulting in a minimal cost increase compared to Classic Bluetooth. Additionally, manufacturers can use current Classic Bluetooth (Bluetooth v2.1 + EDR or Bluetooth v3.0 + HS) chips with the new low energy stack, enhancing the development of Classic Bluetooth enabled devices with new capabilities.

Single-mode chips, which will enable highly integrated and compact devices, will feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost. The Link Layer in these controllers will enable Internet connected sensors to schedule Bluetooth low energy traffic between Bluetooth transmissions.

UWB for AMP

The high speed (AMP) feature of Bluetooth v3.0 is based on 802.11, but the AMP mechanism was designed to be usable with other radios as well. It was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding.

On March 16, 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high speed and power optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After the successful completion of the technology transfer, marketing and related administrative items, the WiMedia Alliance will cease operations.

In October 2009 the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. The Bluetooth SIG is now in the process of evaluating other options for its longer term roadmap.

Technical information

Bluetooth protocol stack

"Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols." Mandatory protocols for all Bluetooth stacks are: LMP, L2CAP and SDP. Additionally, these protocols are almost universally supported: HCI and RFCOMM.

LMP (Link Management Protocol)

Used for control of the radio link between two devices. Implemented on the controller.

L2CAP (Logical Link Control & Adaptation Protocol)

Used to multiplex multiple logical connections between two devices using different higher level protocols. Provides segmentation and reassembly of on-air packets.

In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU and 48 bytes as the minimum mandatory supported MTU.
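The segmentation and reassembly described above can be illustrated with a minimal sketch (Python; a deliberate simplification of the on-air behaviour, using the MTU values from the text):

```python
DEFAULT_MTU = 672  # default L2CAP MTU in bytes
MIN_MTU = 48       # minimum mandatory supported MTU

def segment(payload: bytes, mtu: int = DEFAULT_MTU) -> list[bytes]:
    """Split an upper-layer payload into MTU-sized fragments."""
    if mtu < MIN_MTU:
        raise ValueError("negotiated MTU must be at least 48 bytes")
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(fragments: list[bytes]) -> bytes:
    """In-order delivery is guaranteed by the lower layer, so concatenation suffices."""
    return b"".join(fragments)

# A 1500-byte IP packet becomes three fragments: 672 + 672 + 156 bytes.
data = bytes(1500)
assert reassemble(segment(data)) == data
assert len(segment(data)) == 3
```

Real L2CAP frames also carry a length and channel-ID header per fragment; the sketch keeps only the chunking arithmetic.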

In Retransmission & Flow Control modes, L2CAP can be configured for reliable or isochronous data per channel by performing retransmissions and CRC checks.

Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate the original Retransmission and Flow Control modes:

• Enhanced Retransmission Mode (ERTM): This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel.

• Streaming Mode (SM): This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel.

Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio will flush packets). In-order sequencing is guaranteed by the lower layer.

Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.

SDP (Service Discovery Protocol)

Service Discovery Protocol (SDP) allows a device to discover services supported by other devices, and their associated parameters. For example, when connecting a mobile phone to a Bluetooth headset, SDP will be used for determining which Bluetooth profiles are supported by the headset (Headset Profile, Hands Free Profile, Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed to connect to each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
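The short-form UUIDs are aliases into the Bluetooth Base UUID (00000000-0000-1000-8000-00805F9B34FB), with the assigned 16-bit value placed in the upper field. A small sketch of the expansion (illustrative Python; 0x110B is the assigned number for the A2DP audio sink service class):

```python
# Short (16-bit) assigned UUIDs expand into the Bluetooth Base UUID:
# XXXXXXXX-0000-1000-8000-00805F9B34FB, with the short value zero-extended
# into the leading 32-bit field.
BASE_UUID_SUFFIX = "0000-1000-8000-00805f9b34fb"

def expand_uuid16(short: int) -> str:
    """Expand a 16-bit assigned UUID to its full 128-bit textual form."""
    if not 0 <= short <= 0xFFFF:
        raise ValueError("not a 16-bit UUID")
    return f"{short:08x}-{BASE_UUID_SUFFIX}"

print(expand_uuid16(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```

This is why SDP records can stay compact: well-known services transmit two bytes instead of sixteen.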

HCI (Host/Controller Interface)

Standardized communication between the host stack (e.g., a PC or mobile phone OS) and the controller (the Bluetooth IC). This standard allows the host stack or controller IC to be swapped with minimal adaptation.

There are several HCI transport layer standards, each using a different hardware interface to transfer the same command, event and data packets. The most commonly used are USB (in PCs) and UART (in mobile phones and PDAs).

In Bluetooth devices with simple functionality (e.g., headsets) the host stack and controller can be implemented on the same microprocessor. In this case the HCI is optional, although often implemented as an internal software interface.

RFCOMM (Serial Port Emulation)

Radio frequency communications (RFCOMM) is a cable replacement protocol used to create a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer.

RFCOMM provides a simple reliable data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.

Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.

BNEP (Bluetooth Network Encapsulation Protocol)

BNEP is used for transferring another protocol stack's data via an L2CAP channel. Its main purpose is the transmission of IP packets in the Personal Area Networking Profile. BNEP performs a similar function to SNAP in wireless LAN.

AVCTP (Audio/Video Control Transport Protocol)

Used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.

AVDTP (Audio/Video Distribution Transport Protocol)

Used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel. It is also intended to be used by the Video Distribution Profile.

Telephony control protocol

Telephony control protocol-binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."

TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.

Adopted protocols

Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth’s protocol stack, allowing Bluetooth to create protocols only when necessary. The adopted protocols include:

Point-to-Point Protocol (PPP)

Internet standard protocol for transporting IP datagrams over a point-to-point link.

TCP/IP/UDP

Foundation protocols for the TCP/IP protocol suite.

Object Exchange Protocol (OBEX)

Session-layer protocol for the exchange of objects, providing a model for object and operation representation.

Wireless Application Environment/Wireless Application Protocol (WAE/WAP)

WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services.


Baseband Error Correction

Three types of error correction are implemented in Bluetooth systems:

• 1/3 rate forward error correction (FEC)

• 2/3 rate FEC

• Automatic repeat request (ARQ)
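The 1/3 rate FEC is a simple repetition code: each bit is transmitted three times and the receiver takes a majority vote, which corrects any single bit error per triple. A minimal sketch (illustrative Python, not the controller's actual bit-level implementation):

```python
def fec13_encode(bits: list[int]) -> list[int]:
    """1/3-rate FEC: transmit every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec13_decode(coded: list[int]) -> list[int]:
    """Majority-vote each triple, correcting a single bit error per triple."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

sent = fec13_encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
sent[1] ^= 1                     # flip one transmitted bit (channel noise)
assert fec13_decode(sent) == [1, 0, 1]  # the repetition code corrects it
```

The cost is obvious from the code: tripling every bit cuts throughput to a third, which is why 1/3 FEC is reserved for the most critical fields.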

Setting up connections

Any Bluetooth device in discoverable mode will transmit the following information on demand:

• Device name

• Device class

• List of services

• Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset)

Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.

Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
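The structure of such an address can be illustrated with a short Python sketch: the upper three octets form the manufacturer prefix (OUI) and the lower three identify the specific device. The address used here is made up for illustration.

```python
def parse_bd_addr(addr: str) -> tuple:
    """Split a 48-bit Bluetooth address into its manufacturer
    prefix (OUI, upper 24 bits) and device part (lower 24 bits)."""
    octets = addr.split(":")
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError("expected six colon-separated hex octets")
    int("".join(octets), 16)  # raises ValueError on non-hex digits
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = parse_bd_addr("00:1A:7D:DA:71:13")
print(oui, device)  # 00:1A:7D DA:71:13
```

In practice the OUI can be looked up in the IEEE registry to identify the manufacturer, which is one reason addresses are hidden behind friendly names in inquiries.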

Most phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several phones in range named T610.

Motivation

Many of the services offered over Bluetooth can expose private data or allow the connecting party to control the Bluetooth device. For security reasons it is therefore necessary to control which devices are allowed to connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to automatically establish a connection without user intervention as soon as they are in range.

To resolve this conflict, Bluetooth uses a process called pairing, which is generally manually started by a device user—making that device's Bluetooth link visible to other devices. Two devices need to be paired to communicate with each other; the pairing process is typically triggered automatically the first time a device receives a connection request from a device with which it is not yet paired. Once a pairing has been established it is remembered by the devices, which can then connect to each other without user intervention. When desired, the pairing relationship can later be removed by the user.

Implementation

During the pairing process, the two devices involved establish a relationship by creating a shared secret known as a link key. If a link key is stored by both devices they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, and so be sure that it is the same device it previously paired with. Once a link key has been generated, an authenticated ACL link between the devices may be encrypted so that the data that they exchange over the airwaves is protected against eavesdropping.

Link keys can be deleted at any time by either device. If done by either device this will implicitly remove the bonding between the devices; so it is possible for one of the devices to have a link key stored but not be aware that it is no longer bonded to the device associated with the given link key.


Bluetooth services generally require either encryption or authentication, and as such require pairing before they allow a remote device to use the given service. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.

Pairing mechanisms

Pairing mechanisms have changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:

Legacy pairing: This is the only method available in Bluetooth v2.0 and earlier. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices are capable of entering all possible PIN codes.

Limited input devices: The obvious example of this class of device is a Bluetooth hands-free headset, which generally has few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that is hard-coded into the device.

Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length.

Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. If pairing with a less capable device, the user needs to be aware of the input limitations of the other device; there is no mechanism available for a capable device to determine how it should limit the input a user may use.

Secure Simple Pairing (SSP): This is required by Bluetooth v2.1. A Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public key cryptography, and has the following modes of operation:

Just works: As implied by the name, this method just works. No user interaction is required; however, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with very limited IO capabilities, and is more secure than the fixed PIN mechanism which is typically used for legacy pairing by this set of limited devices. This method provides no man in the middle (MITM) protection.

Numeric comparison: If both devices have a display and at least one can accept a binary Yes/No user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly.
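The idea behind numeric comparison can be sketched as follows. This is not the actual SSP algorithm (which derives the code with a specification-defined function over the ECDH key-exchange values); it only illustrates how two devices that saw the same exchange independently display the same 6-digit code, while an attacker who substituted values in the exchange would cause a mismatch.

```python
import hashlib

def confirmation_code(exchanged: bytes) -> str:
    """Illustrative stand-in: reduce the pairing exchange to a
    6-digit code. Real SSP defines its own function for this."""
    digest = hashlib.sha256(exchanged).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# Both devices saw the same exchange, so they show the same code:
assert confirmation_code(b"A.pub|B.pub|nonce") == confirmation_code(b"A.pub|B.pub|nonce")

# An attacker substituting a key changes the input, so the codes differ:
assert confirmation_code(b"A.pub|M.pub|nonce") != confirmation_code(b"A.pub|B.pub|nonce")
```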

Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display is used to show a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both cases provide MITM protection.

Out of band (OOB): This method uses an external means of communication, such as Near Field Communication (NFC) to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.

SSP is considered simple for the following reasons:

• In most cases, it does not require a user to generate a passkey.

• For use-cases not requiring MITM protection, user interaction has been eliminated.


• For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user.

• Using OOB with NFC will enable pairing when devices simply get close, rather than requiring a lengthy discovery process.

Security Concerns

Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
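The key-reuse problem can be seen with a toy example. Bluetooth's E0 cipher XORs a keystream into the data; if two messages are ever protected with the same keystream, XORing the two ciphertexts cancels the keystream and hands an attacker the XOR of the plaintexts. The keystream bytes below are made up for illustration, not real E0 output.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(13))           # stand-in for cipher output
p1, p2 = b"HELLO DEVICE!", b"SECRET PIN 42"

c1 = xor(p1, keystream)                # two messages encrypted with
c2 = xor(p2, keystream)                # the same keystream...

# ...XOR of the ciphertexts removes the keystream entirely,
# leaving the XOR of the two plaintexts for the attacker:
assert xor(c1, c2) == xor(p1, p2)
```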

• Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or for a security attack.

Bluetooth v2.1 addresses this in the following ways:

• Encryption is required for all non-SDP (Service Discovery Protocol) connections.

• A new Encryption Pause and Resume feature is used for all normal operations that require encryption to be disabled. This makes it easy to distinguish normal operation from a security attack.

• The encryption key must be refreshed before it expires.

Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers allow link keys to be stored on the device; however, if the device is removable this means that the link key will move with the device.

Air Interface

The protocol operates in the license-free ISM band at 2.402-2.480 GHz. To avoid interfering with other protocols that use the 2.45 GHz band, the Bluetooth protocol divides the band into 79 channels (each 1 MHz wide) and changes channels up to 1600 times per second.
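The channel arithmetic is simple: channel k is centred at 2402 + k MHz for k = 0…78, and at up to 1600 hops per second each dwell lasts 625 µs. A small sketch:

```python
def channel_freq_mhz(k: int) -> int:
    """Centre frequency of Bluetooth channel k: 2402 + k MHz."""
    if not 0 <= k <= 78:
        raise ValueError("Bluetooth defines channels 0-78")
    return 2402 + k

print(channel_freq_mhz(0), channel_freq_mhz(78))  # 2402 2480
print(1 / 1600 * 1e6, "microseconds per hop")     # 625.0 microseconds per hop
```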

Implementations with versions 1.1 and 1.2 reach speeds of 723.1 kbit/s. Version 2.0 implementations feature Bluetooth Enhanced Data Rate (EDR) and reach 2.1 Mbit/s. Technically, version 2.0 devices have a higher power consumption, but the three times faster rate reduces the transmission times, effectively reducing power consumption to half that of 1.x devices (assuming equal traffic load).

Blue jacking

Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages (e.g., "You've just been bluejacked!"). Bluejacking does not involve the removal or alteration of any data from the device. Taking control of a mobile phone wirelessly and phoning a premium-rate line owned by the attacker is a more serious attack that is sometimes conflated with bluejacking.

Bluetooth Innovation World Cup 2009

The first international Bluetooth Innovation World Cup 2009 drew more than 250 international entries illustrating the abundance of opportunities for product development with the new Bluetooth low energy wireless technology.

The Bluetooth Innovation World Cup 2009 was sponsored by Nokia, Freescale Semiconductor, Texas Instruments, Nordic Semiconductor, STMicroelectronics and Brunel.

Bluetooth Innovator of the Year 2009

On February 8, 2010, the Bluetooth SIG awarded Edward Sazonov, Physical Activity Innovations LLC, the title of Bluetooth Innovator of the Year for 2009. Sazonov received this recognition at the official award ceremony held alongside the Wearable Technologies Show at ispo 2010, the world's largest trade show for sporting goods. The award includes a cash prize of €5,000 and a Bluetooth Qualification Program voucher (QDID) valued at up to US$10,000. Sazonov's winning idea, The Fit Companion, is a small, unobtrusive sensor that, when clipped onto a user's clothing or integrated into a shoe, provides feedback about their physical activity. The data, transmitted via Bluetooth low energy technology, can help individuals to lose weight and achieve optimal physical activity. Intended for use in both training and daily activities like walking or performing chores, this simple measuring device may offer a solution for reducing obesity.

Bluetooth Innovation World Cup 2010

The Bluetooth Special Interest Group (SIG) announced the start of the second Bluetooth Innovation World Cup (IWC) on 1 June 2010. The 2010 Bluetooth Innovation World Cup has a focus on applications for the sports & fitness, health care and home information and control markets. The competition will close for registrations on September 15, 2010.

By SANDHYA.K

II-B.Sc(IT)

Wi-Fi Alliance

Wi-Fi is a trademark of the Wi-Fi Alliance. It is not a technical term. However, the Alliance has generally enforced its use to describe only a narrow range of connectivity technologies, including wireless local area network (WLAN) based on the IEEE 802.11 standards, device-to-device connectivity (such as Wi-Fi Peer to Peer, also known as Wi-Fi Direct), and a range of technologies that support PAN, LAN and even WAN connections. Derivative terms, such as Super Wi-Fi, coined by the U.S. Federal Communications Commission (FCC) to describe proposed networking in the former UHF TV band in the US, may or may not be sanctioned by the Alliance; as of November 2010 this was still unclear.

The technical term "IEEE 802.11" has been used interchangeably with Wi-Fi; however, Wi-Fi has become a superset of IEEE 802.11 over the past few years. Wi-Fi is used by over 700 million people; there are over 750,000 hotspots (places with Wi-Fi Internet connectivity) around the world, and about 800 million new Wi-Fi devices are sold every year.

Wi-Fi products that complete the Wi-Fi Alliance interoperability certification testing successfully can use the Wi-Fi CERTIFIED designation and trademark.

Not every Wi-Fi device is submitted for certification to the Wi-Fi Alliance. The lack of Wi-Fi certification does not necessarily imply a device is incompatible with Wi-Fi devices/protocols. If it is compliant or partly compatible the Wi-Fi Alliance may not object to its description as a Wi-Fi device though technically only the CERTIFIED designation carries their approval.

Wi-Fi certified and compliant devices are installed in many personal computers, video game consoles, MP3 players, smartphones, printers, digital cameras, and laptop computers. This article focuses on the certification and approvals process and the general growth of wireless networking under the Wi-Fi Alliance certified protocols. For more on the technologies see the appropriate articles with IEEE, ANSI, IETF, W3 and ITU prefixes (acronyms for the accredited standards organizations that have created formal technology standards for the protocols by which devices communicate). Non-Wi-Fi-Alliance wireless technologies intended for fixed points, such as Motorola Canopy, are usually described as fixed wireless. Non-Wi-Fi-Alliance wireless technologies intended for mobile use are usually described as 3G, 4G or 5G, reflecting their origins and promotion by telephone/cell companies.

By DINESH.C

II-B.Sc (IT)

Liquid Crystal Display

A liquid crystal display (LCD) is a thin, flat electronic visual display that uses the light-modulating properties of liquid crystals (LCs). LCs do not emit light directly.

They are used in a wide range of applications including computer monitors, televisions, instrument panels, aircraft cockpit displays, and signage. They are common in consumer devices such as video players, gaming devices, clocks, watches, calculators, and telephones. LCDs have displaced cathode ray tube (CRT) displays in most applications. They are usually more compact, lightweight, portable, less expensive, more reliable, and easier on the eyes. They are available in a wider range of screen sizes than CRT and plasma displays, and since they do not use phosphors, they cannot suffer image burn-in.

LCDs are more energy efficient and offer safer disposal than CRTs. Their low electrical power consumption enables them to be used in battery-powered electronic equipment. An LCD is an electronically modulated optical device made up of any number of pixels filled with liquid crystals and arrayed in front of a light source (backlight) or reflector to produce images in colour or monochrome. The earliest discovery leading to the development of LCD technology, the discovery of liquid crystals, dates from 1888. By 2008, worldwide sales of televisions with LCD screens had surpassed sales of CRT units.

LCD alarm clock

Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters, the axes of transmission of which are (in most of the cases) perpendicular to each other. With no actual liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer. In most of the cases the liquid crystal has double refraction.

The surfaces of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is rubbed unidirectionally using, for example, a cloth. The direction of the liquid crystal alignment is then defined by the direction of rubbing. The electrodes are made of a transparent conductor called indium tin oxide (ITO).

Before applying an electric field, the orientation of the liquid crystal molecules is determined by the alignment at the surfaces of the electrodes. In a twisted nematic device (still the most common liquid crystal device), the surface alignment directions at the two electrodes are perpendicular to each other, and so the molecules arrange themselves in a helical structure, or twist. This reduces the rotation of the polarization of the incident light, and the device appears grey. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked, and the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, thus constituting different levels of grey. This electric field also controls (reduces) the double refraction properties of the liquid crystal.

LCD with top polarizer removed from device and placed on top, such that the top and bottom polarizers are parallel.

The optical effect of a twisted nematic device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, these devices are usually operated between crossed polarizers such that they appear bright with no voltage (the eye is much more sensitive to variations in the dark state than the bright state). These devices can also be operated between parallel polarizers, in which case the bright and dark states are reversed. The voltage-off dark state in this configuration appears blotchy, however, because of small variations of thickness across the device.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed (the response of the liquid crystal layer is identical, regardless of the polarity of the applied field).


When a large number of pixels are needed in a display, it is not technically possible to drive each directly since then each pixel would require independent electrodes. Instead, the display is multiplexed. In a multiplexed display, electrodes on one side of the display are grouped and wired together (typically in columns), and each group gets its own voltage source. On the other side, the electrodes are also grouped (typically in rows), with each group getting a voltage sink. The groups are designed so each pixel has a unique, unshared combination of source and sink. The electronics, or the software driving the electronics then turns on sinks in sequence, and drives sources for the pixels of each sink.
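The row/column scheme described above can be sketched as a scan loop. The event names here are invented for illustration; a real driver would be toggling voltage lines, not returning tuples.

```python
# framebuffer: rows x cols of booleans (True = pixel on)
framebuffer = [
    [True,  False, True ],
    [False, True,  False],
]

def scan_frame(fb):
    """Drive a multiplexed display: activate one row line (sink)
    at a time, then drive all column lines (sources) for the
    pixels that should be lit in that row."""
    events = []
    for r, row in enumerate(fb):
        events.append(("select_row", r))
        events.append(("drive_cols", [c for c, on in enumerate(row) if on]))
    return events

print(scan_frame(framebuffer))
# [('select_row', 0), ('drive_cols', [0, 2]), ('select_row', 1), ('drive_cols', [1])]
```

With R rows and C columns this needs only R + C drive lines instead of R × C individual electrodes, which is the whole point of multiplexing.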

Illumination

As LCD panels produce no light of their own, they require an external lighting mechanism to be easily visible. On most displays, this consists of a cold cathode fluorescent lamp that is situated behind the LCD panel. Passive-matrix displays are usually not backlit, but active-matrix displays almost always are, with a few exceptions such as the display in the original Game Boy Advance.

Recently, two types of LED-backlit LCDs have appeared in some televisions as an alternative to conventional backlit LCDs. In one scheme, the LEDs are used to backlight the entire LCD panel. In another, a set of red, green and blue LEDs is used to illuminate a small cluster of pixels, which can improve contrast and black level in some situations. For example, the LEDs in one section of the screen can be dimmed to produce a dark section of the image while the LEDs in another section are kept bright. Both schemes also allow for a slimmer panel than conventional displays.

Passive-matrix and active-matrix addressed LCDs

A general purpose alphanumeric LCD, with two lines of 16 characters.

LCDs with a small number of segments, such as those used in digital watches and pocket calculators, have individual electrical contacts for each segment. An external dedicated circuit supplies an electric charge to control each segment. This display structure is unwieldy for more than a few display elements.

Small monochrome displays such as those found in personal organizers, electronic weighing scales, older laptop screens, and the original Game Boy have a passive-matrix structure employing super-twisted nematic (STN) or double-layer STN (DSTN) technology (the latter of which addresses a colour-shifting problem with the former), and colour-STN (CSTN) in which colour is added by using an internal filter. Each row or column of the display has a single electrical circuit. The pixels are addressed one at a time by row and column addresses. This type of display is called passive-matrix addressed because the pixel must retain its state between refreshes without the benefit of a steady electrical charge. As the number of pixels (and, correspondingly, columns and rows) increases, this type of display becomes less feasible. Very slow response times and poor contrast are typical of passive-matrix addressed LCDs.

Monochrome passive-matrix LCDs were standard in most early laptops (although a few used plasma displays). The commercially unsuccessful Macintosh Portable (released in 1989) was one of the first to use an active-matrix display (though still monochrome), but passive-matrix was the norm until the mid-1990s, when colour active-matrix became standard on all laptops.

High-resolution colour displays such as modern LCD computer monitors and televisions use an active matrix structure. A matrix of thin-film transistors (TFTs) is added to the polarizing and colour filters. Each pixel has its own dedicated transistor, allowing each column line to access one pixel. When a row line is activated, all of the column lines are connected to a row of pixels and the correct voltage is driven onto all of the column lines. The row line is then deactivated and the next row line is activated. All of the row lines are activated in sequence during a refresh operation. Active-matrix addressed displays look "brighter" and "sharper" than passive-matrix addressed displays of the same size, and generally have quicker response times, producing much better images.

Active matrix technologies

A Casio 1.8 in colour TFT LCD, as fitted to the Sony Cyber-shot DSC-P93A digital compact camera

Twisted nematic

Twisted nematic displays contain liquid crystal elements which twist and untwist at varying degrees to allow light to pass through. When no voltage is applied to a TN liquid crystal cell, the light is polarized to pass through the cell. In proportion to the voltage applied, the LC cells twist up to 90 degrees, changing the polarization and blocking the light's path. By properly adjusting the level of the voltage, almost any grey level or transmission can be achieved.

In-plane switching (IPS)

In-plane switching is an LCD technology which aligns the liquid crystal cells in a horizontal direction. In this method, the electrical field is applied through each end of the crystal, but this requires two transistors for each pixel instead of the single transistor needed for a standard thin-film transistor (TFT) display. Before LG Enhanced IPS was introduced in 2009, the additional transistors resulted in blocking more transmission area, thus requiring a brighter backlight, which consumed more power, and made this type of display less desirable for notebook computers. This newer, lower power technology can be found in the Apple iMac, iPad, and iPhone 4, as well as the Hewlett-Packard EliteBook 8740w. Currently Panasonic is using an enhanced version eIPS for their large size LCD-TV products.

Advanced fringe field switching (AFFS)

Known as fringe field switching (FFS) until 2003, advanced fringe field switching is a technology similar to IPS or S-IPS, offering superior performance and colour gamut with high luminosity. AFFS was developed by Hydis Technologies Co., Ltd, Korea (formerly Hyundai Electronics, LCD Task Force).

AFFS-applied notebook displays minimize colour distortion while maintaining a superior wide viewing angle for a professional display. Colour shift and deviation caused by light leakage are corrected by optimizing the white gamut, which also enhances white/grey reproduction.

In 2004, Hydis Technologies Co., Ltd licensed its AFFS patent to Japan's Hitachi Displays. Hitachi uses AFFS to manufacture high-end panels in its product line. In 2006, Hydis also licensed AFFS to Sanyo Epson Imaging Devices Corporation.

In 2007, Hydis introduced AFFS+, which improved outdoor readability.

Vertical alignment (VA)

Vertical alignment displays are a form of LCD in which the liquid crystal material naturally exists in a vertical state, removing the need for extra transistors (as in IPS). When no voltage is applied, the liquid crystal cell remains perpendicular to the substrate, creating a black display. When voltage is applied, the liquid crystal cells shift to a horizontal position, parallel to the substrate, allowing light to pass through and create a white display. VA liquid crystal displays provide some of the same advantages as IPS panels, particularly an improved viewing angle and improved black level.

Blue Phase mode

Blue phase LCDs do not require a liquid crystal top layer. Blue phase LCDs are relatively new to the market, and very expensive because of the low volume of production. They provide a higher refresh rate than normal LCDs, but normal LCDs are still cheaper to make and actually provide better colours and a sharper image.

Military use of LCD monitors

LCD monitors have been adopted by the United States military in place of CRT displays because they are smaller, lighter and more efficient, although monochrome plasma displays are also used, notably in the M1 Abrams tank. For use with night vision imaging systems a US military LCD monitor must be compliant with MIL-L-3009 (formerly MIL-L-85762A). These LCD monitors go through extensive certification to pass military standards. These include MIL-STD-901D - High Shock (Sea Vessels), MIL-STD-167B - Vibration (Sea Vessels), MIL-STD-810F - Field Environmental Conditions (Ground Vehicles and Systems), MIL-STD-461E/F - EMI/RFI (Electromagnetic Interference/Radio Frequency Interference), MIL-STD-740B - Airborne/Structureborne Noise, and TEMPEST - Telecommunications Electronics Material Protected from Emanating Spurious Transmissions.

Quality control

Some LCD panels have defective transistors, causing permanently lit or unlit pixels, commonly referred to as stuck pixels or dead pixels respectively. Unlike integrated circuits (ICs), LCD panels with a few defective transistors are usually still usable. It is claimed that it is economically prohibitive to discard a panel with just a few defective pixels because LCD panels are much larger than ICs, but this has never been proven. Manufacturers' policies for the acceptable number of defective pixels vary greatly. At one point, Samsung held a zero-tolerance policy for LCD monitors sold in Korea. Currently, though, Samsung adheres to the less restrictive ISO 13406-2 standard. Other companies have been known to tolerate as many as 11 dead pixels in their policies. Dead pixel policies are often hotly debated between manufacturers and customers. To regulate the acceptability of defects and to protect the end user, ISO released the ISO 13406-2 standard. However, not every LCD manufacturer conforms to the ISO standard, and the standard is quite often interpreted in different ways.

LCD panels are more likely to have defects than most ICs due to their larger size. For example, a 300 mm SVGA LCD panel might have 8 defects while a 150 mm wafer has only 3; yet 134 of the 137 dies on the wafer will still be acceptable, whereas a rejected LCD panel represents a 0% yield.

Due to competition between manufacturers, quality control has improved. An SVGA LCD panel with 4 defective pixels is usually considered defective, and customers can request an exchange for a new one. Some manufacturers, notably in South Korea, where some of the largest LCD panel manufacturers such as LG are located, now offer a "zero defective pixel guarantee", an extra screening process that sorts panels into "A" and "B" grades. Many manufacturers will replace a product with even one defective pixel. Even where such guarantees do not exist, the location of defective pixels is important. A display with only a few defective pixels may be unacceptable if the defective pixels are near each other. Manufacturers may also relax their replacement criteria when defective pixels are in the center of the viewing area.

LCD panels also have defects known as clouding (or less commonly mura), which describes the uneven patches of changes in luminance. It is most visible in dark or black areas of displayed scenes.

Zero-power (bistable) displays

The zenithal bistable device (ZBD), developed by QinetiQ (formerly DERA), can retain an image without power. The crystals may exist in one of two stable orientations ("Black" and "White") and power is only required to change the image. ZBD Displays is a spin-off company from QinetiQ that manufactures both greyscale and colour ZBD devices.

A French company, Nemoptic, has developed the BiNem zero-power, paper-like LCD technology which has been mass-produced in partnership with Seiko since 2007. This technology is intended for use in applications such as Electronic Shelf Labels, E-books, E-documents, E-newspapers, E-dictionaries, Industrial sensors, Ultra-Mobile PCs, etc.

Kent Displays has also developed a "no power" display that uses polymer-stabilized cholesteric liquid crystals (ChLCD). A major drawback of ChLCD screens is their slow refresh rate, especially at low temperatures. Kent has recently demonstrated the use of a ChLCD to cover the entire surface of a mobile phone, allowing it to change colours and keep that colour even when power is cut off.

In 2004 researchers at the University of Oxford demonstrated two new types of zero-power bistable LCDs based on Zenithal bistable techniques.

Several bistable technologies, like the 360° BTN and the bistable cholesteric, depend mainly on the bulk properties of the liquid crystal (LC) and use standard strong anchoring, with alignment films and LC mixtures similar to the traditional monostable materials. Other bistable technologies (e.g., BiNem technology) are based mainly on the surface properties and need specific weak anchoring materials.

Comparison of the OLPC XO-1 display (left) with a typical colour LCD: the images show 1×1 mm of each screen. A typical LCD addresses groups of 3 locations as pixels; the XO-1 display addresses each location as a separate pixel.

Example of how the colours are generated (R: red, G: green, B: blue) in a modern LCD display.

In colour LCDs each individual pixel is divided into three cells, or subpixels, which are coloured red, green, and blue, respectively, by additional filters (pigment filters, dye filters and metal oxide filters). Each subpixel can be controlled independently to yield thousands or millions of possible colours for each pixel. CRT monitors employ a similar subpixel structure via phosphors, although the electron beam employed in CRTs does not hit exact subpixels. The figure at the left shows the twisted nematic (TN) type of LCD.

Specifications

Important factors to consider when evaluating a Liquid Crystal Display (LCD):

• Resolution versus Range: Fundamentally resolution is the granularity (or number of levels) with which a performance feature of the display is divided. Resolution is often confused with range or the total end-to-end output of the display. Each of the major features of a display has both a resolution and a range that are tied to each other but very different. Frequently the range is an inherent limitation of the display while the resolution is a function of the electronics that make the display work.

• Spatial Performance: LCDs come in a variety of sizes for a variety of applications, and in a variety of resolutions within each of those applications. LCD spatial performance is also sometimes described in terms of a "dot pitch". The size (or spatial range) of an LCD is always described in terms of the diagonal distance from one corner to its opposite. This is a historical holdover from the early days of CRT TV, when CRT screens were manufactured on the bottoms of glass bottles. The diameter of the bottle determined the size of the screen. Later, when TVs moved to a more square format, the square screens were measured diagonally to compare with the older round screens.

The spatial resolution of an LCD is expressed in terms of the number of columns and rows of pixels (e.g., 1024×768). This had been one of the few features of LCD performance that was easily understood and not subject to interpretation. Each pixel is usually composed of a red, a green, and a blue sub-pixel. However, there are newer schemes that share sub-pixels among pixels and add additional colours of sub-pixels, so going forward, spatial resolution may be more subject to interpretation.

One external factor to consider in evaluating display resolution is the resolution of your own eyes. For a person with normal 20/20 vision, the resolution of the eye is about one minute of arc. In practical terms, that means that for an older standard-definition TV set the ideal viewing distance was about 8 times the height (not the diagonal) of the screen. At that distance the individual rows of pixels merge into a solid image. If you were closer to the screen than that, you would be able to see the individual rows of pixels; if you are further away, the rows of pixels still merge, but the total image becomes smaller as you get further away. For an HDTV set, with slightly more than twice the number of rows of pixels, the ideal viewing distance is about half what it is for a standard-definition set: the higher the resolution, the closer you can sit to the set, or the larger the set can usefully be at the same distance as an older standard-definition display.
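The "ideal viewing distance" above comes from simple trigonometry: it is the distance at which one pixel row subtends about one arcminute. A quick sketch in Python (the 480- and 1080-row counts are illustrative choices for SD and HD, not taken from the text):

```python
import math

def ideal_viewing_distance(screen_height, rows, arcmin=1.0):
    """Distance at which one pixel row subtends `arcmin` minutes of arc.

    Beyond this distance the rows merge into a solid image; closer,
    they become individually visible. Result is in the same units
    as `screen_height`.
    """
    pixel_height = screen_height / rows
    angle_rad = math.radians(arcmin / 60.0)
    return pixel_height / math.tan(angle_rad)

# SD (~480 visible rows) vs. HD (1080 rows), distance in screen heights:
sd = ideal_viewing_distance(1.0, 480)
hd = ideal_viewing_distance(1.0, 1080)
```

With these numbers the SD distance comes out around 7 screen heights (close to the "about 8 times" rule of thumb above), and the HD distance is smaller by exactly the ratio of the row counts.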

For a computer monitor or some other LCD that is being viewed from a very close distance, resolution is often expressed in terms of dot pitch or pixels per inch. This is consistent with the printing industry (another form of display). Magazines and other premium printed media are often printed at 300 dots per inch. As with the distance discussion above, this provides a very solid-looking and detailed image. LCDs, particularly on mobile devices, frequently have a much lower dot pitch than this, as the higher the dot pitch, the more optically inefficient the display and the more power it burns. Running the LCD is frequently half, or more, of the power consumed by a mobile device.

Additional considerations in spatial performance are the viewing cone and the aspect ratio. The aspect ratio is the ratio of the width to the height (for example, 4:3, 5:4, 16:9 or 16:10). Older, standard-definition TVs were 4:3. Newer HDTVs are 16:9, as are most new notebook computers. Movies are often filmed in much wider aspect ratios, which is why there will frequently still be black bars at the top and bottom of an HDTV screen.
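Because screens are specified by diagonal, the aspect ratio changes what a given diagonal actually buys you. A small sketch (the 40-inch diagonal is just an example figure):

```python
import math

def screen_dimensions(diagonal, aspect_w, aspect_h):
    """Width and height of a screen from its diagonal and aspect ratio."""
    unit = diagonal / math.hypot(aspect_w, aspect_h)
    return aspect_w * unit, aspect_h * unit

# Same 40" diagonal, two aspect ratios:
w43, h43 = screen_dimensions(40.0, 4, 3)      # 4:3 standard definition
w169, h169 = screen_dimensions(40.0, 16, 9)   # 16:9 HDTV
```

Note that for the same diagonal, the 16:9 screen is wider but shorter, and actually has slightly less total area than the 4:3 screen.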

The viewing angle of an LCD may be important depending on its use or location. The viewing angle is usually measured as the angle at which the contrast of the LCD falls below 10:1. At this point, the colours usually start to change and can even invert, red becoming green and so forth. Viewing angles for LCDs used to be very restrictive; however, improved optical films have been developed that give almost 180-degree viewing angles from left to right. Top-to-bottom viewing angles may still be restrictive, by design, as looking at an LCD from an extreme up or down angle is not a common usage model and those photons would be wasted. Manufacturers commonly focus the light in a left-to-right plane to obtain a brighter image there.

Temporal/Timing Performance: Contrary to spatial performance, temporal performance is a feature where smaller is better. Specifically, the range is the pixel response time of an LCD, or how quickly a sub-pixel's brightness can be changed from one level to another. For LCD monitors, this is measured in btb (black to black) or gtg (gray to gray), and these different types of measurements make comparison difficult. Further, this number is almost never published in sales advertising.

Refresh rate, or the temporal resolution of an LCD, is the number of times per second the display draws the data it is being given. Since activated LCD pixels do not flash on/off between frames, LCD monitors exhibit no refresh-induced flicker, no matter how low the refresh rate. High-end LCD televisions now feature up to 240 Hz refresh rates, which require advanced digital processing to insert additional interpolated frames between the real images to smooth the image motion. However, such high refresh rates may not be actually supported by pixel response times, and the result can be visual artifacts that distort the image in unpleasant ways.

Temporal performance can be further taxed if it is a 3D display. 3D displays work by showing a different series of images to each eye, alternating from eye to eye. A 3D display must therefore show twice as many images in the same period of time as a conventional display, and consequently the response time of the LCD becomes more important. 3D LCDs with marginal response times will exhibit image smearing.

The temporal resolution of human perception is about 1/100th of a second. It is actually greater in your black and white vision (the rods in your eye) than in colour vision (the cones). You are more able to see flicker or any sort of temporal distortion in a display image by not looking directly at it as your rods are mostly grouped at the periphery of your vision.

Colour Performance: There are many terms to describe the colour performance of an LCD. They include colour gamut, which is the range of colours that can be displayed, and colour depth, which is the colour resolution, the fineness with which the colour range is divided. Although colour gamut can be expressed as three pairs of numbers (the XY coordinates within colour space of the reddest red, greenest green, and bluest blue), it is usually expressed as a ratio of the total area within colour space that a display can show relative to some standard, such as saying that a display is “120% of NTSC”. NTSC is the National Television System Committee, the old standard-definition TV specification. Colour gamut is a relatively straightforward feature. However, with clever optical techniques that are based on the way humans see colour, termed colour stretch, colours can be shown that are outside the nominal range of the display. In any case, colour range is rarely discussed as a feature of the display, as LCDs are designed to match the colour ranges of the content they are intended to show. Having a colour range that exceeds the content is a useless feature.

Colour Depth, or colour support, is sometimes expressed in bits, either as the number of bits per sub-pixel or the number of bits per pixel. This can be ambiguous, as an "8-bit" colour LCD can mean 8 total bits spread between red, green, and blue, or 8 bits for each colour on a different display. Further, LCDs sometimes use a technique called dithering, the time-averaging of colours to get intermediate colours, such as alternating between two different colours to get a colour in between. This doubles the number of colours that can be displayed; however, this is done at the expense of the temporal performance of the display. Dithering is commonly used on computer displays, where the images are mostly static and temporal performance is unimportant.

When colour depth is reported as colour support, it is usually stated in terms of the number of colours the LCD can show. The number of colours is the translation from the base-2 bit count into common base 10. For example, 8-bit, in common terms, means 2 to the 8th power, or 256 colours; 8 bits per colour, or 24 bits total, would be 256 × 256 × 256, or over 16 million colours. The colour resolution of the human eye depends on both the range of colours being sliced and the number of slices; but for most common displays the limit is about 28-bit colour.
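The bits-to-colours conversion above is just exponentiation, and can be checked in a couple of lines of Python:

```python
def colours_from_bits(bits_per_pixel):
    """Number of displayable colours for a given total colour depth in bits."""
    return 2 ** bits_per_pixel

# 8 bits spread across the whole pixel vs. 8 bits per subpixel (24-bit):
c8 = colours_from_bits(8)     # 2^8  = 256 colours
c24 = colours_from_bits(24)   # 2^24 = 256 x 256 x 256 colours
```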

LCD TVs commonly display more than that as the digital processing can introduce colour distortions and the additional levels of colour are needed to ensure true colours.

There are additional aspects to LCD colour and colour management, such as white point and gamma correction, which basically describe what colour white is and how the other colours are displayed relative to white. LCD televisions also frequently have facial-recognition software which recognizes that an image on the screen is a face and adjusts both the colour and the focus differently from the rest of the image. These adjustments can have an important impact on the consumer, but are not easily quantifiable; people like what they like, and not everyone likes the same thing. There is no substitute for looking at the LCD you are going to buy before buying it. Portrait film, another form of display, has similar adjustments built into it. Many years ago, Kodak had to overcome initial rejection of its portrait film in Japan because of these adjustments. In the US, people generally prefer a more colourful facial image than reality (higher colour saturation). In Japan, consumers generally prefer a less saturated image. The film that Kodak initially sent to Japan was biased in exactly the wrong direction for Japanese consumers. TV sets have their built-in biases as well.

Brightness and Contrast Ratio: Contrast ratio is the ratio of the brightness of a full-on pixel to a full-off pixel and, as such, would be directly tied to brightness if not for the invention of the blinking backlight (or burst dimming). The LCD itself is only a light valve; it does not generate light. The light comes from a backlight that is either a fluorescent tube or a set of LEDs. The blinking backlight was developed to improve the motion performance of LCDs by turning the backlight off while the liquid crystals were in transition from one image to another. A side benefit of the blinking backlight, however, was infinite contrast. The contrast reported on most LCDs is what the LCD is qualified at, not its actual performance. In any case, there are two large caveats to contrast ratio as a measure of LCD performance. The first caveat is that contrast ratios are measured in a completely dark room. In actual use, the room is never completely dark, as you will always have the light from the LCD itself. Beyond that, there may be sunlight coming in through a window or other room lights that reflect off the surface of the LCD and degrade the contrast. As a practical matter, the contrast of an LCD, or any display, is governed by the amount of surface reflections, not by the performance of the display. The second caveat is that the human eye can only perceive a contrast ratio of at most about 200:1. Black print on white paper is about 15-20:1. That is why viewing angles are specified to the point where they fall below 10:1. A 10:1 image is not great, but it is discernible.
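The effect of room light on contrast is easy to see numerically: reflected ambient light adds roughly equally to both the full-on and full-off luminance. A sketch with made-up but plausible luminance figures (the 200 and 0.2 cd/m² values are illustrative assumptions, not from the text):

```python
def effective_contrast(white, black, reflected=0.0):
    """Contrast ratio once ambient light reflected off the panel
    is added equally to both the full-on (white) and full-off (black)
    luminance. All values in cd/m^2."""
    return (white + reflected) / (black + reflected)

dark_room = effective_contrast(200.0, 0.2)        # the qualified spec
lit_room = effective_contrast(200.0, 0.2, 2.0)    # a little reflected light
```

Even a couple of cd/m² of reflected light collapses a 1000:1 dark-room figure to under 100:1, which is why surface reflections, not panel performance, govern real-world contrast.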

By DIVEYA.D, II-B.Sc (IT)

Modem

Modems grew out of the need to connect teletype machines over ordinary phone lines instead of the more expensive leased lines which had previously been used for current-loop-based teleprinters and automated telegraphs. George Stibitz connected a New Hampshire teletype to a computer in New York City over a subscriber telephone line in 1940.

In 1943, IBM adapted this technology to their unit record equipment and were able to transmit punched cards at 25 bits/second. Mass-produced modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the U.S. and Canada. SAGE modems were described by AT&T's Bell Labs as conforming to their newly published Bell 101 dataset standard. While they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110 baud modems.

In the summer of 1960, the name Data-Phone was introduced to replace the earlier term digital subset. The 202 Data-Phone was a half-duplex asynchronous service that was marketed extensively in late 1960. In 1962, the 201A and 201B Data-Phones were introduced. They were synchronous modems using two-bit-per-baud phase-shift keying (PSK). The 201A operated half-duplex at 2,000 bit/s over normal phone lines, while the 201B provided full-duplex 2,400 bit/s service on four-wire leased lines, the send and receive channels running on their own set of two wires each.

The famous Bell 103A dataset standard was also introduced by Bell Labs in 1962. It provided full-duplex service at 300 baud over normal phone lines. Frequency-shift keying was used with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the KSR33, the ASR33, and the IBM 2741. AT&T reduced modem costs by introducing the originate-only 113D and the answer-only 113B/C modems.

The Carterfone decision

For many years, the Bell System (AT&T) maintained a monopoly on the use of its phone lines, allowing only Bell-supplied devices to be attached to its network. Before 1968, AT&T maintained a monopoly on what devices could be electrically connected to its phone lines. This led to a market for 103A-compatible modems that were mechanically connected to the phone through the handset, known as acoustically coupled modems. Particularly common models from the 1970s were the Novation CAT and the Anderson-Jacobson, spun off from an in-house project at Stanford Research Institute (now SRI International). Hush-a-Phone v. FCC was a seminal ruling in United States telecommunications law, decided by the DC Circuit Court of Appeals on November 8, 1956. The court found that it was within the FCC's authority to regulate the terms of use of AT&T's equipment. Subsequently, the FCC examiner found that as long as a device was not physically attached to the network, it would not threaten to degrade the system. Later, in the Carterfone decision of 1968, the FCC passed a rule setting stringent AT&T-designed tests for electronically coupling a device to the phone lines. AT&T's tests were complex, making electronically coupled modems expensive, so acoustically coupled modems remained common into the early 1980s.

In December 1972, Vadic introduced the VA3400. This device was remarkable because it provided full duplex operation at 1,200 bit/s over the dial network, using methods similar to those of the 103A in that it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic. It was similar in design to Vadic's model, but used the lower frequency set for transmission. It was also possible to use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200-bit/s mode, AT&T's 212A mode, and 103A operation.

The Smartmodem and the rise of BBSes

The next major advance in modems was the Smartmodem, introduced in 1981 by Hayes Communications. The Smartmodem was an otherwise standard 103A 300 bit/s modem, but it was attached to a small controller that let the computer send commands to it and enabled it to operate the phone line. The command set included instructions for picking up and hanging up the phone, dialing numbers, and answering calls. The basic Hayes command set remains the basis for computer control of most modern modems.

Prior to the Hayes Smartmodem, dial-up modems almost universally required a two-step process to activate a connection: first, the user had to manually dial the remote number on a standard phone handset, and then secondly, plug the handset into an acoustic coupler. Hardware add-ons, known simply as dialers, were used in special circumstances, and generally operated by emulating someone dialing a handset.

With the Smartmodem, the computer could dial the phone directly by sending the modem a command, thus eliminating the need for an

associated phone instrument for dialing and the need for an acoustic coupler. The Smartmodem instead plugged directly into the phone line. This greatly simplified setup and operation. Terminal programs that maintained lists of phone numbers and sent the dialing commands became common.

The Smartmodem and its clones also aided the spread of bulletin board systems (BBSs). Modems had previously been typically either the call-only, acoustically coupled models used on the client side, or the much more expensive, answer-only models used on the server side. The Smartmodem could operate in either mode depending on the commands sent from the computer. There was now a low-cost server-side modem on the market, and the BBSs flourished.

Almost all modern modems can interoperate with fax machines. Digital faxes, introduced in the 1980s, are simply a particular image format sent over a high-speed (commonly 14.4 kbit/s) modem. Software running on the host computer can convert any image into fax format, which can then be sent using the modem. Such software was at one time an add-on, but has since become standard.

Softmodem (dumb modem)

A PCI Winmodem/softmodem (on the left) next to a traditional ISA modem (on the right). Notice the less complex circuitry of the modem on the left.

A Winmodem or softmodem is a stripped-down modem that replaces tasks traditionally handled in hardware with software. In this case the modem is a simple interface designed to create voltage variations on the telephone line and to sample the line voltage levels (digital-to-analog and analog-to-digital converters). Softmodems are cheaper than traditional modems, since they have fewer hardware components. One downside is that the software generating and interpreting the modem tones is not simple (as most of the protocols are complex), and the performance of the computer as a whole often suffers when it is being used. For online gaming this can be a real concern. Another problem is lack of portability: non-Windows operating systems (such as Linux) often do not have an equivalent driver to operate the modem.

Narrow-band/phone-line dialup modems

A standard modem of today contains two functional parts: an analog section for generating the signals and operating the phone, and a digital section for setup and control. This functionality is often incorporated into a single chip nowadays, but the division remains in theory. In operation the modem can be in one of two modes, data mode in which data is sent to and from the computer over the phone lines, and command mode in which the modem listens to the data from the computer for commands, and carries them out. A typical session consists of powering up the modem (often inside the computer itself) which automatically assumes command mode, then sending it the command for dialing a number. After the connection is established to the remote modem, the modem automatically goes into data mode, and the user can send and receive data. When the user is finished, the escape sequence, "+++" followed by a pause of about a second, may be sent to the modem to return it to command mode, then a command (e.g. "ATH") to hang up the phone is sent. Note that on many modem controllers it is possible to issue commands to disable the escape sequence so that it is not possible for data being exchanged to trigger the mode change inadvertently.
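The command-mode/data-mode behaviour described above can be sketched as a toy state machine. This is not real modem firmware, just a minimal model of the mode switching, the "+++" escape, and a couple of Hayes-style commands (the guard-time pause around "+++" is not modelled):

```python
class ToyHayesModem:
    """A toy sketch of Hayes-style command/data mode switching."""

    def __init__(self):
        self.mode = "command"   # modems power up in command mode
        self.off_hook = False

    def send(self, data):
        """Feed a line of input to the modem; return its response, if any."""
        if self.mode == "command":
            return self._command(data.strip())
        if data == "+++":       # escape sequence returns us to command mode
            self.mode = "command"
            return "OK"
        return None             # in data mode, bytes go to the remote end

    def _command(self, cmd):
        if cmd.upper().startswith("ATDT"):   # dial, then enter data mode
            self.off_hook = True
            self.mode = "data"
            return "CONNECT"
        if cmd.upper() == "ATH":             # hang up the phone
            self.off_hook = False
            return "OK"
        if cmd.upper().startswith("AT"):     # any other AT command
            return "OK"
        return "ERROR"

modem = ToyHayesModem()
```

A typical session then mirrors the text: dial with "ATDT…", exchange data, escape with "+++", and hang up with "ATH".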

The commands themselves are typically from the Hayes command set, although that term is somewhat misleading. The original Hayes commands were useful only for 300 bit/s operation, and were then extended for their 1,200 bit/s modems. Faster speeds required new commands, leading to a proliferation of command sets in the early 1990s. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of chipsets. We call this the Hayes command set even today, although it has three or four times the number of commands as the actual standard.

Increasing speeds (V.21, V.22, V.22bis)

A 2,400 bit/s modem for a laptop.

The 300 bit/s modems used audio frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can be easily sent on the phone lines. In the Bell 103 system, the originating modem sends 0s by playing a 1,070 Hz tone and 1s at 1,270 Hz, with the answering modem putting its 0s on 2,025 Hz and 1s on 2,225 Hz. These frequencies were chosen carefully: they are in the range that suffers minimum distortion on the phone system, and they are not harmonics of each other.
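The Bell 103 frequency plan above maps directly into code. A naive sketch generating FSK audio samples (real modems keep the carrier phase continuous between symbols, which this toy version does not):

```python
import math

# Bell 103 space (0) / mark (1) frequencies in Hz, as given above.
BELL103 = {
    "originate": {0: 1070, 1: 1270},
    "answer":    {0: 2025, 1: 2225},
}

def fsk_samples(bits, side, baud=300, rate=8000):
    """Generate raw audio samples for a list of bits (naive FSK sketch)."""
    samples = []
    per_bit = rate // baud   # samples spent on each bit
    for bit in bits:
        f = BELL103[side][bit]
        for n in range(per_bit):
            samples.append(math.sin(2 * math.pi * f * n / rate))
    return samples

wave = fsk_samples([1, 0, 1], "originate")
```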

In the 1,200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase. By comparing the phase of the two signals, 1s and 0s could be pulled back out; for instance, if the signals were 90 degrees out of phase, this represented the two digits 1, 0, while 180 degrees represented 1, 1. In this way each cycle of the signal represents two digits instead of one. 1,200 bit/s modems were, in effect, 600-symbol-per-second (600 baud) modems with 2 bits per symbol.
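The two-bits-per-symbol idea can be made concrete with a dibit-to-phase table in the spirit of the description above (the actual Bell 212A/V.22 mappings differ in detail; only the 90° → "1, 0" and 180° → "1, 1" entries are taken from the text):

```python
# Dibit-to-phase-shift mapping (degrees); illustrative, not the standard.
DIBIT_PHASE = {
    (0, 0): 0,
    (1, 0): 90,
    (1, 1): 180,
    (0, 1): 270,
}

def bit_rate(baud, bits_per_symbol):
    """Bit rate is symbol rate (baud) times bits carried per symbol."""
    return baud * bits_per_symbol

def to_phases(bits):
    """Pair up the bit stream and look up the phase shift for each symbol."""
    pairs = zip(bits[0::2], bits[1::2])
    return [DIBIT_PHASE[p] for p in pairs]

phases = to_phases([1, 0, 1, 1, 0, 0])
```

This makes the baud-versus-bit-rate distinction explicit: 600 baud with 2 bits per symbol is a 1,200 bit/s modem.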

Voiceband modems generally remained at 300 and 1,200 bit/s (V.21 and V.22) into the mid 1980s. A V.22bis 2,400-bit/s system similar in concept to the 1,200-bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards and 2,400-bit/s operation was becoming common.


Increasing speeds (one-way proprietary standards)

Many other standards were also introduced for special purposes, commonly using a high-speed channel for receiving and a lower-speed channel for sending. One typical example was used in the French Minitel system, in which the user's terminals spent the majority of their time receiving information. The modem in the Minitel terminal thus operated at 1,200 bit/s for reception and 75 bit/s for sending commands back to the servers.

Three U.S. companies became famous for high-speed versions of the same concept. Telebit introduced its Trailblazer modem in 1984, which used a large number of 36 bit/s channels to send data one-way at rates up to 18,432 bit/s. A single additional channel in the reverse direction allowed the two modems to communicate how much data was waiting at either end of the link, and the modems could change direction on the fly. The Trailblazer modems also supported a feature that allowed them to spoof the UUCP g protocol, commonly used on Unix systems to send e-mail, and thereby speed UUCP up by a tremendous amount. Trailblazers thus became extremely common on Unix systems, and maintained their dominance in this market well into the 1990s.

U.S. Robotics (USR) introduced a similar system, known as HST, although this supplied only 9,600 bit/s (in early versions at least) and provided for a larger backchannel. Rather than offer spoofing, USR instead created a large market among Fidonet users by offering its modems to BBS sysops at a much lower price, resulting in sales to end users who wanted faster file transfers. Hayes was forced to compete, and introduced its own 9,600-bit/s standard, Express 96 (also known as Ping-Pong), which was generally similar to Telebit's PEP. Hayes, however, offered neither protocol spoofing nor sysop discounts, and its high-speed modems remained rare.

4,800 and 9,600 bit/s (V.27ter, V.32)

Echo cancellation was the next major advance in modem design. Local telephone lines use the same wires to send and receive, which results in a small amount of the outgoing signal bouncing back. This signal can confuse the modem, which cannot distinguish between the echo and the signal from the remote modem. This was why earlier modems split the signal frequencies into 'answer' and 'originate'; each modem could then ignore its own transmitting frequencies. Even with improvements to the phone system allowing higher speeds, this splitting of the available phone signal bandwidth still imposed a half-speed limit on modems.

Echo cancellation got around this problem. Measuring the echo delays and magnitudes allowed the modem to tell if the received signal was from itself or the remote modem, and create an equal and opposite signal to cancel its own. Modems were then able to send over the whole frequency spectrum in both directions at the same time, leading to the development of 4,800 and 9,600 bit/s modems.
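The cancellation step above can be sketched numerically. Assuming the echo's gain and delay have already been measured (real echo cancellers estimate these adaptively, which this toy version does not), the modem simply subtracts a scaled, delayed copy of its own transmission from what it hears:

```python
def cancel_echo(received, transmitted, echo_gain, echo_delay):
    """Subtract an estimated echo of our own transmission from the
    received samples. Gain and delay are assumed already measured."""
    out = []
    for i, r in enumerate(received):
        j = i - echo_delay
        echo = echo_gain * transmitted[j] if j >= 0 else 0.0
        out.append(r - echo)
    return out

# Toy line: the remote signal plus a delayed, attenuated copy of our own.
remote = [0.5, -0.5, 0.5, -0.5, 0.5]
ours = [1.0, 1.0, -1.0, -1.0, 1.0]
line = [r + (0.3 * ours[i - 2] if i >= 2 else 0.0)
        for i, r in enumerate(remote)]
clean = cancel_echo(line, ours, echo_gain=0.3, echo_delay=2)
```

After cancellation, `clean` recovers the remote signal, so both directions can share the whole frequency band at once.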

Increases in speed have used increasingly complicated communications theory. The 1,200 and 2,400 bit/s modems used the phase-shift keying (PSK) concept, which could transmit two or three bits per symbol. The next major advance encoded four bits into a combination of amplitude and phase, known as quadrature amplitude modulation (QAM). Best visualized as a constellation diagram, the bits are mapped onto points on a graph with the x (real) and y (quadrature) coordinates transmitted over a single carrier.
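The constellation idea can be shown with a simple 16-QAM mapping: four bits per symbol, two bits selecting the x (real) level and two the y (quadrature) level. The particular Gray-coded level assignment here is illustrative, not the mapping from any specific standard:

```python
import itertools

# Gray-coded two-bit to amplitude-level mapping for one axis.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def qam16_point(b3, b2, b1, b0):
    """Map 4 bits to an (x, y) constellation point: two bits per axis."""
    return LEVELS[(b3, b2)], LEVELS[(b1, b0)]

# All 16 four-bit patterns map to 16 distinct constellation points.
constellation = {bits: qam16_point(*bits)
                 for bits in itertools.product([0, 1], repeat=4)}
```

Gray coding the axis levels means adjacent constellation points differ in only one bit, so the most likely reception errors corrupt only a single bit.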

The new V.27ter and V.32 standards were able to transmit 4 bits per symbol, at a rate of 1,200 or 2,400 baud, giving an effective bit rate of 4,800 or 9,600 bit/s. The carrier frequency was 1,650 Hz. For many years, most engineers considered this rate to be the limit of data communications over telephone networks.

Error correction and compression

Operation at these speeds pushed the limits of the phone lines, resulting in high error rates. This led to the introduction of error-correction systems built into the modems, made most famous by Microcom's MNP systems. A string of MNP standards came out in the 1980s, each increasing the effective data rate by minimizing overhead, from about 75% of the theoretical maximum in MNP 1 to 95% in MNP 4. A newer method, MNP 5, took this a step further by adding data compression to the system, thereby increasing the data rate above the modem's rating. Generally the user could expect an MNP 5 modem to transfer at about 130% of the normal data rate of the modem. Details of MNP were later released and became popular on a series of 2,400 bit/s modems, and ultimately led to the development of the V.42 and V.42bis ITU standards. V.42 and V.42bis were not compatible with MNP but were similar in concept: error correction and compression.

Another common feature of these high-speed modems was the concept of fallback, or speed hunting, allowing them to talk to less-capable modems. During the call initiation the modem would play a series of signals into the line and wait for the remote modem to respond to them. They would start at high speeds and progressively get slower and slower until they heard an answer. Thus, two USR modems would be able to connect at 9,600 bit/s, but, when a user with a 2,400 bit/s modem called in, the USR would fall back to the common 2,400 bit/s speed. This would also happen if a V.32 modem and an HST modem were connected. Because they used a different standard at 9,600 bit/s, they would fall back to their highest commonly supported standard at 2,400 bit/s. The same applies to a V.32bis modem and a 14,400 bit/s HST modem, which would still only be able to communicate with each other at 2,400 bit/s.

Breaking the 9.6k barrier

In 1980, Gottfried Ungerboeck from IBM Zurich Research Laboratory applied powerful channel coding techniques to search for new ways to increase the speed of modems. His results were astonishing but only conveyed to a few colleagues. Finally in 1982, he agreed to publish what is now a landmark paper in the theory of information coding. By applying powerful parity check coding to the bits in each symbol, and mapping the encoded bits into a two-dimensional diamond pattern, Ungerboeck showed that it was possible to increase the speed by a factor of two with the same error rate. The new technique was called mapping by set partitions (now known as trellis modulation).

Error-correcting codes, which encode code words (sets of bits) in such a way that they are far from each other so that, in case of error, a corrupted word is still closest to the original word (and not confused with another), can be thought of as analogous to sphere packing or packing pennies on a surface: the further two bit sequences are from one another, the easier it is to correct minor errors.
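The "distance" in that analogy is Hamming distance, and decoding is nearest-neighbour search over the codebook. A minimal sketch with a made-up toy codebook (not any real code):

```python
def hamming(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(word, codebook):
    """Map a received word to the closest codeword (nearest neighbour)."""
    return min(codebook, key=lambda c: hamming(word, c))

# A toy codebook whose words are all at distance >= 3 from each other,
# so any single-bit error stays closest to the original word.
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0),
            (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
received = (1, 0, 1, 0, 0)   # (1, 1, 1, 0, 0) with one bit flipped
```

Because the minimum distance between codewords is 3, a single flipped bit leaves the received word at distance 1 from its original and at least distance 2 from every other codeword, so the decoder recovers it correctly.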

V.32bis was so successful that the older high-speed standards had little to recommend them. USR fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter (also known as V.32 terbo or tertiary).

V.34/28.8k and 33.6K

An ISA modem manufactured to conform to the V.34 protocol.

Any interest in these systems was destroyed during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware and introduced modems they referred to as V.FAST. In order to guarantee compatibility with V.34 modems once the standard was ratified (1994), the manufacturers were forced to use more flexible parts, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips.

Today, the ITU standard V.34 represents the culmination of these joint efforts. It employs the most powerful coding techniques, including channel encoding and shape encoding. From the mere 4 bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit. The Shannon capacity of a narrowband line is C = B log2(1 + S/N), with B the bandwidth and S/N the (linear) signal-to-noise ratio. Narrowband phone lines have a bandwidth of roughly 300 to 4,000 Hz, so using S/N = 1,000 (SNR = 30 dB), the capacity is approximately 35 kbit/s.
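The Shannon limit quoted above can be computed directly. With the round numbers used here (3,700 Hz of usable bandwidth, 30 dB SNR) the formula gives a little under 37 kbit/s, in the same ballpark as the approximately 35 kbit/s figure:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon limit C = B * log2(1 + S/N) in bits per second,
    converting the SNR from decibels to a linear ratio first."""
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Narrowband phone line: roughly 300-4,000 Hz usable, SNR about 30 dB.
capacity = shannon_capacity(4000 - 300, 30.0)
```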

Without the discovery and eventual application of trellis modulation, maximum telephone rates would have been limited to 3,429 baud × 4 bits/symbol, or approximately 14 kbit/s, using traditional QAM.


V.61/V.70 Analog/Digital Simultaneous Voice and Data

The V.61 Standard introduced Analog Simultaneous Voice and Data (ASVD). This technology allowed users of v.61 modems to engage in point-to-point voice conversations with each other while their respective modems communicated.

In 1995, the first DSVD (Digital Simultaneous Voice and Data) modems became available to consumers, and the standard was ratified as v.70 by the International Telecommunication Union (ITU) in 1996.

Two DSVD modems can establish a completely digital link between each other over standard phone lines. Sometimes referred to as "the poor man's ISDN," and employing a similar technology, v.70 compatible modems allow for a maximum speed of 33.6 kbps between peers. By using a majority of the bandwidth for data and reserving part for voice transmission, DSVD modems allow users to pick up a telephone handset interfaced with the modem, and initiate a call to the other peer.

One practical use for this technology was realized by early two player video gamers, who could hold voice communication with each other while in game over the PSTN.

Advocates of DSVD envisioned whiteboard sharing and other practical applications for the standard. However, with the advent of cheaper 56 kbit/s analog modems intended for Internet connectivity, peer-to-peer data transmission over the PSTN quickly became irrelevant. Also, the standard was never expanded to allow for the making or receiving of arbitrary phone calls while the modem was in use, due to the cost of infrastructure upgrades for telephone companies and the advent of ISDN and DSL technologies, which effectively accomplished the same goal.

Today, Multi-Tech is the only known company to continue to support a v.70-compatible modem. While their device also offers v.92 at 56 kbit/s, it remains significantly more expensive than comparable modems without v.70 support.

Using digital lines and PCM (V.90/92)

In the late 1990s, Rockwell and U.S. Robotics introduced new technology based upon the digital transmission used in modern telephony networks. The standard digital transmission in modern networks is 64 kbit/s, but some networks use part of the bandwidth for remote office signaling (e.g., to hang up the phone), limiting the effective rate of a DS0 to 56 kbit/s. This new technology was adopted into the ITU standard V.90 and is common in modern computers. The 56 kbit/s rate is only possible from the central office to the user site (downlink). In the United States, government regulation limits the maximum power output, resulting in a maximum data rate of 53.3 kbit/s. The uplink (from the user to the central office) still uses V.34 technology at 33.6 kbit/s.

Later, in V.92, the digital PCM technique was applied to increase the upload speed to a maximum of 48 kbit/s, but at the expense of download rates. For example, a 48 kbit/s upstream rate would reduce the downstream to as low as 40 kbit/s, due to echo on the telephone line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a 33.6 kbit/s analog connection, in order to maintain a high digital downstream of 50 kbit/s or higher.[2] V.92 also adds two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods while they answer a call. The second is the ability to quickly reconnect to one's ISP, achieved by remembering the analog and digital characteristics of the telephone line and using this saved information to reconnect quickly.

Using compression to exceed 56k

Today's V.42, V.42bis and V.44 standards allow the modem to transmit data faster than its basic rate would imply. For instance, a 53.3 kbit/s connection with V.44 can transmit up to 53.3 × 6 ≈ 320 kbit/s using pure text. However, the compression ratio tends to vary due to noise on the


line, or due to the transfer of already-compressed files (ZIP files, JPEG images, MP3 audio, MPEG video). At various times the modem will be sending compressed files at approximately 50 kbit/s, uncompressed files at 160 kbit/s, and pure text at 320 kbit/s, or any value in between.[4]
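The variability described above is easy to demonstrate. A general-purpose compressor (zlib here, standing in for the V.44 algorithm, which it is not) does well on repetitive text and almost nothing on data that is already compressed:

```python
import os
import zlib

# Repetitive text compresses extremely well; random bytes (a stand-in for
# already-compressed ZIP/JPEG/MP3 payloads) barely compress at all.
text = b"the quick brown fox jumps over the lazy dog " * 200
precompressed_like = os.urandom(len(text))

def compression_ratio(data):
    """Original size divided by compressed size."""
    return len(data) / len(zlib.compress(data))

print(compression_ratio(text) > 10)                        # True: big gain on text
print(0.9 < compression_ratio(precompressed_like) < 1.1)   # True: almost no gain
```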

In such situations a small amount of memory in the modem, a buffer, is used to hold the data while it is being compressed and sent across the phone line, but in order to prevent overflow of the buffer, it sometimes becomes necessary to tell the computer to pause the datastream. This is accomplished through hardware flow control using extra lines on the modem–computer connection. The computer is then set to supply the modem at some higher rate, such as 320 kbit/s, and the modem will tell the computer when to start or stop sending data.
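The buffering-plus-flow-control arrangement can be sketched as a toy simulation (all rates and sizes here are invented for illustration): the computer offers data faster than the line can carry it, and a clear-to-send flag pauses it whenever the modem's buffer would otherwise overflow.

```python
from collections import deque

BUFFER_LIMIT = 8     # modem buffer capacity, in bytes (illustrative)
COMPUTER_RATE = 4    # bytes the computer offers per tick
MODEM_RATE = 1       # bytes the modem sends down the line per tick

buffer = deque()
pending = 40         # bytes the computer wants to send
sent = 0

for tick in range(200):
    # Hardware flow control: only raise CTS if the next burst fits.
    clear_to_send = len(buffer) + COMPUTER_RATE <= BUFFER_LIMIT
    if clear_to_send:
        burst = min(COMPUTER_RATE, pending)
        buffer.extend(b"x" * burst)
        pending -= burst
    # The modem drains its buffer at the (slower) line rate.
    for _ in range(min(MODEM_RATE, len(buffer))):
        buffer.popleft()
        sent += 1
    if pending == 0 and not buffer:
        break

print(sent)  # 40: everything crosses the line, and the buffer never overflows
```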

Compression by the ISP

As telephone-based 56k modems began losing popularity, some Internet service providers such as Netzero and Juno started using pre-compression to increase throughput and maintain their customer base. For example, the Netscape ISP uses a compression program that squeezes images, text, and other objects at the modem server, just prior to sending them across the phone line. Certain content using lossy compression (e.g., images) may be recompressed (transcoded) with different compression parameters, making the transmitted content smaller but of lower quality. Server-side compression operates much more efficiently than the on-the-fly compression of V.44-enabled modems, because V.44 is a generalized compression algorithm whereas these other techniques are application-specific (JPEG, MPEG, Vorbis, etc.). Typically, website text is compacted to 4% of its original size, increasing effective throughput to approximately 1,300 kbit/s. The accelerator also pre-compresses Flash executables and images to approximately 30% and 12% of their original sizes, respectively.

The drawback of this approach is a loss in quality: the GIF and JPEG images are lossily compressed, which causes the content to become pixelated and smeared. However, the speed is dramatically improved, such that Web pages load in less than 5 seconds, and the user can manually choose to view the uncompressed images at any time. The ISPs employing this approach advertise it as "surf 5× faster" or simply "accelerated dial-up".

List of dialup speeds

Note that the values given are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines).[7] For a complete list see the companion article list of device bandwidths. A baud is one symbol per second; each symbol may encode one or more data bits.
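The baud/bit distinction is just multiplication. The figures below use numbers already quoted in this article (the 7-bit figure reflects the 56,000 ÷ 8,000 usable bits per PCM sample of a V.90-style downlink):

```python
def bit_rate(baud, bits_per_symbol):
    """Bit rate = symbols per second x bits carried by each symbol."""
    return baud * bits_per_symbol

print(bit_rate(3429, 4))  # 13716 bit/s, i.e. roughly 14 kbit/s (QAM)
print(bit_rate(8000, 7))  # 56000 bit/s (V.90-style PCM downlink)
```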

Radio modems

Direct broadcast satellite, WiFi, and mobile phones all use modems to communicate, as do most other wireless services today. Modern telecommunications and data networks also make extensive use of radio modems where long distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fibre is not economical.

Even where a cable is installed, it is often possible to get better performance or make other parts of the system simpler by using radio frequencies and modulation techniques through a cable. Coaxial cable has a very large bandwidth, however signal attenuation becomes a major problem at high data rates if a digital signal is used. By using a modem, a much larger amount of digital data can be transmitted through a single piece of wire. Digital cable television and cable Internet services use radio frequency modems to provide the increasing bandwidth needs of modern households. Using a modem also allows for frequency-division multiple access to be used, making full-duplex digital communication with many users possible using a single wire.

Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a


carrier frequency, allowing many wireless communication links to operate simultaneously on different frequencies.

Transparent modems operate in a manner similar to their phone-line modem cousins. Typically, they are half-duplex, meaning that they cannot send and receive data at the same time. Transparent modems are typically polled in a round-robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. They are most commonly used by utility companies for data collection.
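Round-robin polling of transparent modems might look like the following sketch; the station names and readings are invented for illustration.

```python
# A master station polls each remote modem in turn and collects whatever
# small reading it has queued; empty stations are simply skipped.
stations = {"meter_a": [5], "meter_b": [7, 2], "meter_c": []}

def poll_cycle(stations):
    """One full round-robin pass over every station."""
    collected = []
    for name, readings in stations.items():
        if readings:
            collected.append((name, readings.pop(0)))
    return collected

print(poll_cycle(stations))  # [('meter_a', 5), ('meter_b', 7)]
print(poll_cycle(stations))  # [('meter_b', 2)]
```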

Smart modems come with a media access controller inside which prevents random data from colliding and resends data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short range modulation scheme that is used on a large scale throughout the world.

WiFi and WiMax

Wireless data modems are used in the WiFi and WiMax standards, operating at microwave frequencies. WiFi is principally used in laptops for Internet connections via wireless access points.

Mobile modems and routers

T-Mobile Universal Mobile Telecommunications System (UMTS) PC Card modem; Huawei CDMA2000 Evolution-Data Optimized (EV-DO) USB wireless modem

Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, etc.) are known as wireless modems (sometimes also called cellular modems). Wireless modems can be embedded inside a laptop or appliance, or external to it. External wireless modems include connect cards, USB modems for mobile broadband, and cellular routers. A connect card is a PC Card or ExpressCard which slides into a PCMCIA/PC Card/ExpressCard slot on a computer. The best-known brand of wireless modem datacards is the

AirCard made by Sierra Wireless. (Many people refer to all makes and models as AirCards, when in fact this is a trademarked brand name.) USB wireless modems use a USB port on the laptop instead of a PC Card or ExpressCard slot. A cellular router may have an external datacard (AirCard) that slides into it, and most cellular routers allow such datacards or USB modems. Cellular routers may not be modems per se, but they contain modems or allow modems to be slid into them. The difference between a cellular router and a wireless modem is that a cellular router normally allows multiple people to connect to it (since it can route, or support multipoint-to-multipoint connections), while the modem is made for one connection.

Most GSM wireless modems come with an integrated SIM card holder (e.g., Huawei E220, Sierra 881), and some models also provide a microSD memory slot and/or a jack for an additional external antenna, such as the Huawei E1762 and Sierra Wireless Compass 885. The CDMA (EVDO) versions do not use R-UIM cards, but use an Electronic Serial Number (ESN) instead.

The cost of using a wireless modem varies from country to country. Some carriers implement flat-rate plans for unlimited data transfers. Some have caps (or maximum limits) on the amount of data that can be transferred per month. Other countries have plans that charge per amount of data transferred, per megabyte or even kilobyte downloaded; this tends to add up quickly in today's content-filled world, which is why many people are pushing for flat data rates.

The newest wireless modem technologies (UMTS, HSPA, EVDO, WiMax), with their faster data rates, are also considered broadband wireless modems and compete with the other broadband modems described below.

Broadband

DSL modem

ADSL modems, a more recent development, are not limited to the telephone's voiceband audio frequencies. Some ADSL modems


use discrete multitone modulation (DMT), a form of orthogonal frequency-division multiplexing.

Cable modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, 'up' and 'down' signals are kept separate using frequency division multiple access.
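The frequency-division separation of 'up' and 'down' signals can be sketched as below. The band edges are illustrative round numbers in the spirit of cable plants, not an exact channel plan from any standard:

```python
# Disjoint frequency bands let upstream and downstream traffic share one
# cable at the same time (frequency-division multiple access).
UPSTREAM_BAND = (5e6, 42e6)      # Hz, illustrative
DOWNSTREAM_BAND = (88e6, 860e6)  # Hz, illustrative

def direction_of(freq_hz):
    if UPSTREAM_BAND[0] <= freq_hz <= UPSTREAM_BAND[1]:
        return "upstream"
    if DOWNSTREAM_BAND[0] <= freq_hz <= DOWNSTREAM_BAND[1]:
        return "downstream"
    return "guard"  # unused spectrum between the bands

print(direction_of(20e6))   # upstream
print(direction_of(300e6))  # downstream
print(direction_of(60e6))   # guard
```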

New types of broadband modems are beginning to appear, such as two-way satellite and power-line modems.

Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems as they are capable of modulating/demodulating hundreds of channels simultaneously.

Many broadband modems include the functions of a router (with Ethernet and WiFi ports) and other features such as DHCP, NAT and firewall features.

When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was, as most Internet access was through dial-up. Due to this familiarity, companies started selling broadband modems using the familiar term "modem" rather than vaguer ones like "adapter" or "transceiver".

Many broadband modems must be configured in bridge mode before they can be used with a separate router.

Home networking

Although the name modem is seldom used in this case, modems are also used for high-speed home networking applications, especially those using existing home wiring. One example is the G.hn standard, developed by ITU-T, which provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines,

phone lines and coaxial cables). G.hn devices use orthogonal frequency-division multiplexing (OFDM) to modulate a digital signal for transmission over the wire.
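The OFDM idea can be sketched with a pure-Python discrete Fourier transform (this illustrates only the modulation principle, not a G.hn implementation): data symbols are placed on orthogonal subcarriers via an inverse DFT, transmitted as one time-domain block, and recovered with a forward DFT.

```python
import cmath

def idft(freq_bins):
    """Inverse DFT: subcarrier symbols -> time-domain samples."""
    n = len(freq_bins)
    return [sum(freq_bins[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def dft(samples):
    """Forward DFT: time-domain samples -> subcarrier symbols."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

subcarriers = [1 + 0j, -1 + 0j, 0 + 1j, 0 - 1j]   # e.g. QPSK data symbols
time_block = idft(subcarriers)                     # "transmitted" waveform
recovered = [complex(round(s.real), round(s.imag)) for s in dft(time_block)]

print(recovered == subcarriers)  # True: the subcarriers come back intact
```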

Deep-space telecommunications

Many modern modems have their origin in deep space telecommunications systems of the 1960s.

Differences between deep-space telecom modems and landline modems

• digital modulation formats that have high Doppler immunity are typically used

• waveform complexity tends to be low, typically binary phase shift keying

• error correction varies mission to mission, but is typically much stronger than most landline modems

Voice modem

Voice modems are regular modems that are capable of recording or playing audio over the telephone line. They are used for telephony applications. See Voice modem command set for more details on voice modems. This type of modem can be used as an FXO card for private branch exchange (PBX) systems (compare V.92).

Popularity

A CEA study in 2006 found that dial-up Internet access was in notable decline in the U.S. In 2000, dial-up connections accounted for 74% of all U.S. residential Internet connections. The US demographic pattern for dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years. Dial-up modem use in the US had dropped to 60% by 2003, and in 2006 stood at 36%. Voiceband modems were once the most popular means of Internet access in the U.S., but with the advent of new ways of accessing the Internet, the traditional 56K modem is losing popularity.

By KANIMOZHI.S

II-B.Sc(IT)


Multimedia

The use of computers to present text, graphics, video, animation, and sound in an integrated way. Long touted as the future revolution in computing, multimedia applications were, until the mid-90s, uncommon due to the expensive hardware required. With increases in performance and decreases in price, however, multimedia is now commonplace. Nearly all PCs are capable of displaying video, though the resolution available depends on the power of the computer's video adapter and CPU.

Categorization of multimedia:

Multimedia may be broadly divided into linear and non-linear categories. Linear active content progresses without any navigational control for the viewer such as a cinema presentation. Non-linear content offers user interactivity to control progress as used with a computer game or used in self-paced computer based training. Hypermedia is an example of non-linear content. Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation system. A live multimedia presentation may allow interactivity via an interaction with the presenter or performer.

Major characteristics of multimedia:

Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can be either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-demand.

Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.

The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example by making it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience.

Structuring information in a multimedia form:

Multimedia represents the convergence of text, pictures, video and sound into a single form. The power of multimedia and the Internet lies in the way in which information is linked. Multimedia and the Internet require a completely new approach to writing. The style of writing that is appropriate for the 'on-line world' is highly optimized and designed to be quickly scanned by readers.

A good site must be made with a specific purpose in mind and a site with good interactivity and new technology can also be useful for attracting visitors. The site must be attractive and innovative in its design, function in terms of its purpose, easy to navigate, frequently updated and fast to download. When users view a page, they can only view one page at a time. As a result, multimedia users must create a ‘mental model of information structure’.

Advantages of Multimedia:

1. Enhancement of Text-Only Messages: Multimedia enhances text-only presentations by adding interesting sounds and compelling visuals.

2. Improves over Traditional Audio-Video Presentations: Audiences are more attentive to multimedia messages than traditional presentations done with slides or overhead transparencies.

3. Gains and Holds Attention: People are more interested in multimedia messages which combine the elements of text, audio, graphics and video. Communication research has shown that the combination of communication modes (aural and visual) offers greater understanding and retention of information.

4. Good for "computer-phobics": Those who are intimidated by computer keyboards and complex instructions are more comfortable with pressing buttons with a mouse or on a screen.


Disadvantages of Multimedia:

Unfortunately, multimedia can be quite expensive for the following reasons:

• Acquisition or rental of equipment to produce multimedia can be costly.

• Multimedia in WBT may require a high learning curve for designers since the use of multimedia in WBT is fairly new to most trainers and instructional designers.

• Development may require contracting out for specialized skills.

Another disadvantage is that multimedia can actually distract trainers and learners from objectives and content if it is just used as "bells and whistles." However, when multimedia follows design guidelines, it can be quite motivating and effective.

By

TAMILARASI.P III-B.Sc(IT)

Plasma Display

A plasma display panel (PDP) is a type of flat-panel display common in large TV displays (80 cm/30 in or larger). They are called "plasma" displays because the pixels rely on plasma cells, in essence chambers more commonly known as fluorescent lamps. A panel typically has millions of tiny cells in a compartmentalized space between two panels of glass. These compartments, or "bulbs" or "cells", hold a mixture of noble gases and a minuscule amount of mercury. Just as in the fluorescent lamps over an office desk, when the mercury is vaporized and a voltage is applied across the cell, the gas in the cells forms a plasma. (A plasma is a collection of particles that respond

strongly and collectively to electromagnetic fields or electrical charges, taking the form of gas-like clouds or ion beams.) With flow of electricity (electrons), some of the electrons strike mercury particles as the electrons move through the plasma, momentarily increasing the energy level of the molecule until the excess energy is shed. Mercury sheds the energy as ultraviolet (UV) photons. The UV photons then strike phosphor that is painted on the inside of the cell. When the UV photon strikes a phosphor molecule, it momentarily raises the energy level of an outer orbit electron in the phosphor molecule, moving the electron from a stable to an unstable state; the electron then sheds the excess energy as a photon at a lower energy level than UV light; the lower energy photons are mostly in the infrared range but about 40% are in the visible light range. Thus the input energy is shed as mostly heat (infrared) but also as visible light. Depending on the phosphors used, different colors of visible light can be achieved. Each pixel in a plasma display is made up of three cells comprising the primary colors of visible light. Varying the voltage of the signals to the cells thus allows different perceived colors.

Plasma displays should not be confused with liquid crystal displays (LCDs), another lightweight flat-screen display using very different technology. LCDs may use one or two large fluorescent lamps as a backlight source, but the different colors are controlled by LCD units, which in effect behave as gates that allow or block the passage of light from the backlight to red, green, or blue paint on the front of the LCD panel.

General characteristics

A 103" plasma display panel by Panasonic

Plasma displays are bright (1,000 lux or higher for the module), have a wide color gamut, and can be produced in fairly large


sizes, up to 150 inches (3.8 m) diagonally. They have a very low-luminance "dark-room" black level, compared to the lighter grey of the unilluminated parts of an LCD screen (i.e. the blacks are blacker on plasmas and greyer on LCDs); LED-backlit LCD televisions have been developed to reduce this distinction. The display panel itself is about 6 cm (2.5 inches) thick, generally allowing the device's total thickness (including electronics) to be less than 10 cm (4 inches). Plasma displays use about as much power per square meter as a CRT or an AMLCD television. Power consumption varies greatly with picture content, with bright scenes drawing significantly more power than darker ones; this is also true of CRTs. Typical power consumption is 400 watts for a 50-inch (127 cm) screen, or 200 to 310 watts when set to cinema mode. Most screens are set to 'shop' mode by default, which draws at least twice the power (around 500-700 watts) of a 'home' setting of less extreme brightness. Panasonic has greatly reduced power consumption ("1/3 of 2007 models") and claims that its PDPs consume only half the power of the previous series to achieve the same overall brightness for a given display size. The lifetime of the latest generation of plasma displays is estimated at 100,000 hours of actual display time, or 27 years at 10 hours per day; this is the estimated time over which maximum picture brightness degrades to half the original value.

Plasma display screens are made from glass, which reflects more light than the material used to make an LCD screen. This causes glare from reflected objects in the viewing area. Companies such as Panasonic coat their newer plasma screens with an anti-glare filter material. Currently, plasma panels cannot be economically manufactured in screen sizes smaller than 32 inches. Although a few companies have been able to make plasma EDTVs this small, even fewer have made 32-inch plasma HDTVs. With the trend toward larger and larger displays, the 32-inch screen size is rapidly disappearing. Though considered bulky and thick compared to their LCD counterparts, some sets, such as Panasonic's Z1 and Samsung's B860 series, are as slim as one inch thick, making them comparable to LCDs in this respect. Competing display technologies include CRT, OLED, LCD, DLP, SED, LED, and FED.

Plasma display advantages and disadvantages

Advantages

• Slim profile

• Can be wall mounted

• Less bulky than rear-projection televisions

• Produces deep blacks, allowing for superior contrast ratio

• Wider viewing angles than those of LCD; images do not suffer from degradation at high angles, unlike LCDs

• Less susceptible to reflection glare in bright rooms due to not needing backlighting

• Virtually no motion blur, thanks in large part to very high refresh rates and a faster response time, contributing to superior performance when displaying content with significant amounts of rapid motion.

Disadvantages

• Heavier screen-door effect when compared to LCD- or OLED-based TVs.

• Susceptible to screen burn-in and image retention, although most recent models have a pixel orbiter that moves the entire picture faster than is noticeable to the human eye, which reduces but does not prevent burn-in.[13] However, turning off individual pixels does counteract screen burn-in on modern plasma displays.

• Phosphors lose luminosity over time, resulting in a gradual decline of absolute image brightness (newer models are less susceptible to this, having lifespans exceeding 100,000 hours, far longer than older CRT technology).

• Susceptible to "large area flicker".

• Generally do not come in sizes smaller than 37 inches.

• Heavier than LCD due to the glass screen required to hold the gases.

• Use more electricity, on average, than an LCD TV.

• Do not work as well at high altitudes due to the pressure differential between the gases inside the screen and the air pressure at altitude, which may cause a buzzing noise; manufacturers rate their screens to indicate the altitude parameters.

• For those who wish to listen to AM radio, or are amateur radio operators (hams) or shortwave listeners (SWLs), the radio-frequency interference (RFI) from these devices can be irritating or disabling.

Native plasma television resolutions

Fixed-pixel displays such as plasma TVs scale the video image of each incoming signal to the native resolution of the display panel. The most common native resolutions for plasma display panels are 853×480 (EDTV), 1,366×768, or 1,920×1,080 (HDTV). As a result, picture quality varies depending on the performance of the video scaling processor and the upscaling and downscaling algorithms used by each display manufacturer.
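The scaling step can be illustrated with the crudest possible resampler, nearest-neighbour on a single scanline. Real scaling processors use far better filters; this only shows the coordinate mapping onto the native grid:

```python
def scale_line(src, dst_len):
    """Nearest-neighbour resample of one scanline to dst_len pixels."""
    src_len = len(src)
    return [src[i * src_len // dst_len] for i in range(dst_len)]

incoming = [10, 20, 30, 40]     # one 4-pixel line from the source signal
print(scale_line(incoming, 8))  # upscale: [10, 10, 20, 20, 30, 30, 40, 40]
print(scale_line(incoming, 2))  # downscale: [10, 30]
```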

Enhanced-definition plasma television

Early plasma televisions were enhanced-definition (ED) with a native resolution of 840×480 (discontinued) or 853×480, and down-scaled their incoming high definition signals to match their native display resolution.

ED Resolutions

• 840×480

• 853×480

High-definition plasma television

Early high-definition (HD) plasma displays had a resolution of 1024x1024 and were alternate lighting of surfaces (ALiS) panels made by Fujitsu/Hitachi. These were interlaced displays, with non-square pixels.

Modern HDTV plasma televisions usually have a resolution of 1,024×768 found on many 42 inch plasma screens, 1,280×768, 1,366×768 found on 50 in, 60 in, and 65 in plasma screens, or 1,920×1,080 found in plasma screen sizes from 42 inch to 103 inch. These displays are usually progressive displays, with square pixels, and will up-scale their incoming standard-definition signals to match their native display resolution.

HD Resolutions

• 1024×1024

• 1024×768

• 1280×768

• 1366×768

• 1280×1080

• 1920×1080


How plasma displays work

Composition of plasma display panel

A plasma display panel is an array of hundreds of thousands of small, luminous cells positioned between two plates of glass. Each cell is essentially a tiny neon lamp filled with rarefied neon, xenon, and other inert gases; the cells are luminous when they are electrified through "electrodes".

The long electrodes are stripes of electrically conducting material that also lie between the glass plates, in front of and behind the cells. The "address electrodes" sit behind the cells, along the rear glass plate, and can be opaque. The transparent display electrodes are mounted in front of the cell, along the front glass plate. As can be seen in the illustration, the electrodes are covered by an insulating protective layer.[26] Control circuitry charges the electrodes that cross paths at a cell, creating a voltage difference between front and back. Some of the atoms in the gas of a cell then lose electrons and become ionized, which creates an electrically conducting plasma of atoms, free electrons, and ions. The collisions of the flowing electrons in the plasma with the inert gas atoms leads to light emission; such light-emitting plasmas are known as glow discharges.

In a monochrome plasma panel, the gas is usually mostly neon, and the color is the characteristic orange of a neon-filled lamp (or sign). Once a glow discharge has been initiated in a cell, it can be maintained by applying a low-level voltage between all the horizontal and vertical electrodes–even after the ionizing voltage is removed. To erase a cell all voltage is removed from a pair of electrodes. This type of panel has inherent memory. A small amount of nitrogen is added to the neon to increase hysteresis.

In color panels, the back of each cell is coated with a phosphor. The ultraviolet photons emitted by the plasma excite these phosphors, which give off visible light with colors determined by the phosphor materials. This aspect is comparable to fluorescent lamps and to the neon signs that use colored phosphors.

Every pixel is made up of three separate subpixel cells, each with different colored phosphors. One subpixel has a red light phosphor, one subpixel has a green light phosphor and one subpixel has a blue light phosphor. These colors blend together to create the overall color of the pixel, the same as a triad of a shadow mask CRT or color LCD. Plasma panels use pulse-width modulation (PWM) to control brightness: by varying the pulses of current flowing through the different cells thousands of times per second, the control system can increase or decrease the intensity of each subpixel color to create billions of different combinations of red, green and blue. In this way, the control system can produce most of the visible colors. Plasma displays use the same phosphors as CRTs, which accounts for the extremely accurate color reproduction when viewing television or computer video images (which use an RGB color system designed for CRT display technology).
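The pulse-width-modulation scheme reduces to a duty-cycle calculation. The sub-field count below is an illustrative figure, not taken from any particular panel:

```python
SUBFIELDS = 256  # on/off opportunities per frame (illustrative)

def perceived_level(pulses_on):
    """A cell is only ever fully on or off; perceived brightness is the
    fraction of sub-field pulses during which it is lit."""
    return pulses_on / SUBFIELDS

print(perceived_level(256))  # 1.0  -> full brightness
print(perceived_level(64))   # 0.25 -> quarter brightness
print(perceived_level(0))    # 0.0  -> off
```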

Contrast ratio

Contrast ratio is the difference between the brightest and darkest parts of an image, measured in discrete steps, at any given moment. Generally, the higher the contrast ratio, the more realistic the image is (though the "realism" of an image depends on many factors including color accuracy, luminance linearity, and spatial linearity.) Contrast ratios for plasma displays are often advertised as high as 5,000,000:1. On the surface, this is a significant advantage of plasma over most other current display technologies, a notable


exception being organic light-emitting diode. Although there are no industry-wide guidelines for reporting contrast ratio, most manufacturers follow either the ANSI standard or perform a full-on-full-off test. The ANSI standard uses a checkered test pattern whereby the darkest blacks and the lightest whites are simultaneously measured, yielding the most accurate "real-world" ratings. In contrast, a full-on-full-off test measures the ratio using a pure black screen and a pure white screen, which gives higher values but does not represent a typical viewing scenario. Some displays, using many different technologies, have some "leakage" of light, through either optical or electronic means, from lit pixels to adjacent pixels so that dark pixels that are near bright ones appear less dark than they do during a full-off display. Manufacturers can further artificially improve the reported contrast ratio by increasing the contrast and brightness settings to achieve the highest test values. However, a contrast ratio generated by this method is misleading, as content would be essentially unwatchable at such settings.
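The two measurement styles described above differ only in which black level they measure. The luminance figures below are invented for illustration:

```python
def contrast_ratio(white_luminance, black_luminance):
    """Ratio of brightest to darkest measured luminance (cd/m^2)."""
    return white_luminance / black_luminance

# Full-on/full-off: pure white screen vs pure black screen.
full_on_off = contrast_ratio(1000.0, 0.01)   # 100000.0

# ANSI checkerboard: light leaking from lit patches raises the black level.
ansi = contrast_ratio(950.0, 0.5)            # 1900.0

print(f"full-on/full-off {full_on_off:.0f}:1, ANSI {ansi:.0f}:1")
```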

Plasma is often cited as having better (i.e. darker) black levels (and higher contrast ratios), although both plasma and LCD each have their own technological challenges.

Each cell on a plasma display has to be precharged before it is due to be illuminated (otherwise the cell would not respond quickly enough), and this precharging means the cells cannot achieve a true black, whereas an LED-backlit LCD panel can actually turn off parts of the screen. Some manufacturers have worked hard to reduce the precharge and the associated background glow, to the point where black levels on modern plasmas are starting to rival CRT. With LCD technology, black pixels are generated by a light polarization method; many panels are unable to completely block the underlying backlight. However, more recent LCD panels (particularly those using white LED illumination) can compensate by automatically reducing the backlighting on darker scenes, though this method, analogous to noise reduction on analog audio tape, obviously cannot be used in high-contrast scenes, leaving some light showing from black parts of an image with bright parts, such as (at the extreme) a solid black screen with one fine intense bright line. This is called a "halo" effect, which has been almost completely minimized on newer LED-backlit LCDs with local dimming. Edge-lit models cannot compete with this, as their light is distributed behind the panel via a light funnel.

Screen burn-in

An example of a plasma display that has suffered severe burn-in from stationary text

Image burn-in occurs on CRTs and plasma panels when the same picture is displayed for long periods of time. This causes the phosphors to overheat, losing some of their luminosity and producing a "shadow" image that is visible with the power off. Burn-in cannot be repaired (except on monochrome CRTs), and it is an especially serious problem on plasma panels because they run hotter than CRTs. Early plasma televisions were plagued by burn-in, making it inadvisable to use them for video games or anything else that displayed static images.

Plasma displays also exhibit another image-retention issue that is sometimes confused with screen burn-in damage. In this mode, when a group of pixels is run at high brightness (when displaying white, for example) for an extended period, a charge build-up occurs in the pixel structure and a ghost image can be seen. Unlike burn-in, however, this charge build-up is transient and self-corrects after the image condition that caused the effect has been removed and a long enough period of time has passed (with the display either off or on).

Plasma manufacturers have tried various ways of reducing burn-in such as using gray pillarboxes, pixel orbiters and image washing routines, but none to date have eliminated the problem and all plasma manufacturers continue to exclude burn-in from their warranties.
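The pixel-orbiter technique mentioned above can be illustrated with a short sketch: the whole image is periodically shifted by a few pixels along a small path so that no cell displays exactly the same static content indefinitely. The period and orbit radius below are invented parameters, not values from any real display firmware:

```python
def orbit_offset(frame_number, period=600, radius=2):
    """Pixel-orbiter sketch: every `period` frames, move the whole
    image to the next position on a small square path.  The shift is
    small enough to be unobtrusive but spreads static content across
    neighboring cells, evening out phosphor wear."""
    path = [(0, 0), (radius, 0), (radius, radius), (0, radius)]
    step = (frame_number // period) % len(path)
    return path[step]

# At 60 frames per second and period=600, the image shifts every
# ten seconds, cycling through the four positions of the path.
print(orbit_offset(0))     # (0, 0)
print(orbit_offset(600))   # (2, 0)
print(orbit_offset(1800))  # (0, 2)
```

Shifting the entire image by a pixel or two every few seconds means station logos and status bars no longer burn a single fixed pattern into the phosphors, though it cannot help with content that covers many cells uniformly.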

|

INFOLINE

35

Environmental impact

Nitrogen trifluoride, a very potent greenhouse gas, is used during the production of plasma screens, which are therefore alleged to contribute to climate change. Plasma screens have also lagged behind CRT and LCD screens in energy efficiency, and new technologies are being developed to reduce their consumption. Although plasma screens can be expected to become more energy efficient in the future, two trends work against this: people tend to keep their old TVs running, and screen sizes continue to escalate.

History

Plasma displays were first used in PLATO computer terminals. This PLATO V model illustrates the display's monochromatic orange glow as seen in 1988.

In 1936 Kálmán Tihanyi described the principle of "plasma television" and conceived the first flat-panel television system.

The monochrome plasma video display was co-invented in 1964 at the University of Illinois at Urbana-Champaign by Donald Bitzer, H. Gene Slottow, and graduate student Robert Willson for the PLATO Computer System.[45] The original neon orange monochrome Digivue display panels built by glass producer Owens-Illinois were very popular in the early 1970s because they were rugged and needed neither memory nor circuitry to refresh the images. A long period of sales decline occurred in the late 1970s because semiconductor memory made CRT displays cheaper than the US$2500 512 x 512 PLATO plasma displays. Nonetheless, the plasma displays' relatively large screen size and 1 inch thickness made them suitable for high-profile placement in lobbies and stock exchanges.

Electrical engineering student Larry F. Weber became interested in plasma displays while studying at the University of Illinois at Urbana-Champaign in the 1960s, and pursued postgraduate work in the field under Bitzer and Slottow. His research eventually earned him 15 patents relating to plasma displays. One of his early contributions

was development of the power-saving "energy recovery sustain circuit", now included in every color plasma display.

Burroughs Corporation, a maker of adding machines and computers, developed the Panaplex display in the early 1970s. The Panaplex display, generically referred to as a gas-discharge or gas-plasma display, uses the same technology as later plasma video displays, but began life as a seven-segment display for use in adding machines. These displays became popular for their bright orange luminous look and, throughout the late 1970s and into the 1990s, found nearly ubiquitous use in cash registers, calculators, pinball machines, aircraft avionics such as radios, navigational instruments, and stormscopes; in test equipment such as frequency counters and multimeters; and generally in anything that had previously used nixie tube or numitron displays with a high digit count. They remained popular until LEDs, with their low current draw and module flexibility, displaced them, but they are still found in some applications where high brightness is desired, such as pinball machines and avionics. Pinball displays started with six- and seven-digit seven-segment displays, evolved into 16-segment alphanumeric displays, and in 1990 moved to 128x32 dot-matrix displays, which are still used today.

1983

In 1983, IBM introduced a 19-inch (48 cm) orange-on-black monochrome display (model 3290 'information panel') which was able to show up to four simultaneous IBM 3270 terminal sessions. Due to heavy competition from monochrome LCDs, in 1987 IBM planned to shut down its factory in upstate New York, the largest plasma plant in the world, in favor of manufacturing mainframe computers. Consequently, Larry Weber co-founded a startup company, Plasmaco, with Stephen Globus and James Kehoe, who was the IBM plant manager, and bought the plant from IBM. Weber stayed in Urbana as CTO until 1990, then moved to upstate New York to work at Plasmaco.


1992

In 1992, Fujitsu introduced the world's first 21-inch (53 cm) full-color display. It was a hybrid, based on the plasma display technology created at the University of Illinois at Urbana-Champaign and at NHK STRL.

1994

In 1994, Weber demonstrated color plasma technology at an industry convention in San Jose. Panasonic Corporation began a joint development project with Plasmaco, which led in 1996 to the purchase of Plasmaco, its color AC technology, and its American factory.

1997

In 1997, Fujitsu introduced the first 42-inch (107 cm) plasma display; it had 852x480 resolution and was progressively scanned. Also in 1997, Philips introduced a 42-inch (107 cm) display with 852x480 resolution; it was the only plasma display shown to the retail public, at four Sears locations in the US. The price was US$14,999 and included in-home installation. Later in 1997, Pioneer started selling its first plasma television to the public, and others followed.

2006 - Present

In late 2006, analysts noted that LCDs had overtaken plasmas, particularly in the 40-inch (1.0 m) and above segment, where plasma had previously enjoyed strong market share. Another industry trend was the consolidation of plasma display manufacturers: around fifty brands were available but only five manufacturers remained. In the first quarter of 2008, worldwide TV sales broke down to 22.1 million direct-view CRTs, 21.1 million LCDs, 2.8 million plasmas, and 0.1 million rear-projection sets.

Until the early 2000s, plasma displays were the most popular choice for HDTV flat-panel displays, as they had many benefits over LCDs. Beyond plasma's deeper blacks, higher contrast, faster response time, greater color spectrum, and wider viewing angle, plasma panels were also available in much larger sizes than LCDs, and it was believed that LCD technology was suited only to smaller televisions. However, improvements in VLSI fabrication have since narrowed the technological gap. The increased size, lower weight, falling prices, and often lower electrical power consumption of LCDs now make them competitive with plasma television sets.

Screen sizes have increased since the introduction of plasma displays. The largest plasma video display in the world at the 2008 Consumer Electronics Show in Las Vegas, Nevada, was a 150-inch (381 cm) unit manufactured by Matsushita Electric Industrial (Panasonic), standing 6 ft (180 cm) tall by 11 ft (330 cm) wide. At the 2010 Consumer Electronics Show in Las Vegas, Panasonic introduced its 152-inch 2160p 3D plasma.

By SUSISTHIRA.M

II-B.Sc(IT)