
Section I Information Technology in Management

Prof. H. A. Simon views the computer as the fourth great breakthrough in history to aid man in his thinking process and decision-making ability. The first was the invention of writing, which gave human beings a memory for performing mental tasks. The remaining two events prior to the computer were the devising of the Arabic number system, with its zero and positional notation, and the invention of analytic geometry and calculus, which permitted the solution of complex problems in scientific theory. The electronic digital computer now combines the advantages and attributes of all these breakthroughs and makes them available for decision-making and the management of organizations.

Management Information System (MIS) can be defined, according to Joel E. Ross, as a communication process wherein information (input) is recorded, stored, processed and retrieved for decisions (output) regarding the managerial process of planning, organising and controlling. If we now define decision-making as the process of selecting from among alternatives a course of action to achieve an objective, the link between information and decision becomes clear. Indeed, decision-making and information processing are so inter-dependent that they become inseparable, if not identical, in practice. You will learn more about MIS later. A computerized MIS cannot technically make a decision, but it can yield processed data and follow instructions to the extent of its capacity. For example, the computer can be instructed to compare inventory levels with programmed decision-rules on re-order level and re-order quantity, and to generate the purchase requisition, purchase enquiry, purchase order, etc. This amounts to an automatic control of purchase documents.

The modern role of MIS in managerial decision-making in a complex organization has been compared to that of a military commander. Commanders often adopt a strategy built on direct observation of partial situations. This is the style used by managers who track operations through periodic communications with remote sales depots, plant divisions and other offices. For instance, the central marketing organization of a travel agency has to keep track of all its booking offices spread all over India for marketing-related decision-making.
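The re-order rule described above can be sketched in a few lines. This is a minimal illustration, with hypothetical item names and figures; it shows how a programmed decision-rule generates a purchase document mechanically, without managerial judgement.

```python
# A sketch of the programmed decision-rule described above (hypothetical
# items and figures): when stock on hand falls to or below the re-order
# level, the system raises a purchase requisition for the re-order quantity.

def check_reorder(item, on_hand, reorder_level, reorder_qty):
    """Return a purchase requisition record if stock needs replenishing."""
    if on_hand <= reorder_level:
        return {"item": item, "action": "purchase_requisition",
                "quantity": reorder_qty}
    return None  # stock is adequate; no document generated

inventory = [
    ("steel-rods", 40, 50, 200),   # at/below re-order level -> requisition
    ("paint-tins", 90, 30, 100),   # adequate stock -> nothing
]

requisitions = [r for r in (check_reorder(*row) for row in inventory) if r]
```

The computer here only follows the rule it was given; deciding what the re-order level and quantity should be remains a managerial task.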

Levels of Information Handling

There are four levels of computerized information handling, described below.


In a modern complex organization, the levels of information handling can be divided into the decision support system, the management information system, the transaction processing system, and the office (and other) automation system. At the apex, top-level managers may need a decision support system (DSS). This is an interactive system that provides the user-manager with easy access to decision models and data in order to support semi-structured and non-structured decision-making tasks. Inputs for DSS can be some processed data and mostly management-originated data, along with some unique models. The DSS would involve queries and responses, operations research models, and simulation. The output from DSS would be special reports to resolve difficult questions and replies to management queries.
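The kind of what-if query a DSS supports can be illustrated with a toy profit model. All figures and the scenario are hypothetical; the point is that the manager varies an assumption and the model returns a projected outcome for a semi-structured decision.

```python
# A toy DSS-style what-if simulation (all figures hypothetical): the
# manager asks what happens to profit if demand falls 10% but the price
# is raised 5%, and the model answers the query.

def project_profit(units_sold, price, unit_cost, fixed_cost):
    """A simple contribution-margin profit projection."""
    return units_sold * price - units_sold * unit_cost - fixed_cost

base = project_profit(units_sold=10_000, price=50.0,
                      unit_cost=30.0, fixed_cost=120_000.0)
scenario = project_profit(units_sold=9_000, price=52.5,
                          unit_cost=30.0, fixed_cost=120_000.0)
```

A real DSS would wrap many such models behind an interactive query interface; the mechanism, however, is the same.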

At the middle management level (if one exists), MIS deals with an organized set of procedures to provide information for middle managers to support their operations and decision-making within the organization. At this level, inputs for MIS would be both processed and raw data and some management-originated data, along with pre-programmed models. The MIS process would involve report generation, data management, simple models and statistical methods. The outputs from MIS would be filtered and screened reports for semi-routine decisions and replies to simple management queries.

At the shop-floor management level, the transaction processing system (TPS) is a computer-based system that captures, classifies, stores, maintains, updates and retrieves simple transaction data for record keeping and for feeding MIS and DSS. The TPS would have transaction data as inputs. The processing for TPS would involve classification, codification, sorting, merging, adding, deleting and updating. Outputs from TPS would be detailed reports relating to routine decisions and processed data.
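The classifying, sorting and updating that a TPS performs can be sketched briefly. The transaction records here are hypothetical; note that the processing is pure record-keeping, with no decision content of its own.

```python
# A minimal sketch of TPS processing (hypothetical records): classify
# transactions by their code, accumulate an updated total per code, and
# sort the result into a master listing.

transactions = [
    {"code": "SALE", "amount": 500},
    {"code": "PURCHASE", "amount": 200},
    {"code": "SALE", "amount": 300},
]

# Classification and updating: accumulate amounts under each code.
totals = {}
for t in transactions:
    totals[t["code"]] = totals.get(t["code"], 0) + t["amount"]

# Sorting: an ordered master listing, ready to feed MIS or DSS reports.
master = sorted(totals.items())
```

The output of such a run is exactly the "processed data" that the MIS and DSS levels above consume.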

At the clerical level, office and other automation control systems can be in operation. An office automation system (OAS) supports the multiple functions of an automated office, where an integrated, computer-aided system allows many office activities to be performed with electronic equipment. The OAS would have inputs such as appointments, documents, addresses, etc. OAS processing would involve scheduling, word-processing, and data storage and retrieval. Outputs from OAS would be schedules, memoranda, bulk mail and administrative reports. Computer Reservation Systems used in hotels and travel agencies also operate at this level.

ADVANTAGES OF COMPUTERISATION

The advantages associated with computer-based managerial decision-making can be the following:

1) Response time is greatly reduced;

2) Very large volumes of data can be stored for information and decision-making;


3) Accuracy of information is considerably improved, thereby improving the quality of the decision;

4) Problems are handled more easily by using various operations research models;

5) The cost involved in the decision-making process is reduced;

6) More secrecy is observed as compared to a manual file system;

7) The ability to take quick decisions improves considerably, as retrieval of information is very fast;

8) Paper work is reduced to the minimum, as all the information is stored in the computer itself;

9) Large amounts of information are stored for future reference;

10) Chances of leakage of classified information are reduced;

11) Accuracy in data manipulation is greatly increased; and

12) Time spent in various decision-making activities is reduced to a minimum.

Emanating from the above, the following benefits for a commercial organization can be attributed to computerization:

1) the availability of accurate forecasts within 1 per cent of net income;

2) the preparation of short-term profit plans and long-term projections;

3) the provision of pre-plan information in budget preparation;

4) the calculation of variances between budgeted and actual results;

5) the triggering of revised forecasts if results are not proceeding in accordance with plans;

6) the early warning system for monitoring activities and the signalling of necessary reactive plans;

7) the indication of income and cash flow by following alternate investment strategies;

8) the assistance to the planning of new facilities and a host of special studies; and

9) the accomplishment of the preceding items at a great speed.

While TPS has been in use over several decades, OAS is coming into practice only now in a number of organisations. The TPS has brought its own benefits for speedy execution, accurate performance and quite often confidential handling. Such benefits will become evident if one considers a couple of very common TPS applications.

The first is examination result processing, which the bulk of Indian universities are doing on computer today, either with in-house systems or with hired service bureaus. The massive nature of such processing can be visualised by looking at one State alone, namely U.P., where 13 lakh candidates go through the high school stream and 6 lakhs through the intermediate stream in any single year. The processing and publication of their results in time would not have been possible without computerisation. Besides, it is possible to maintain some confidentiality in computer processing. Another application is computerised electricity billing, adopted by several State Electricity Boards in India. Under the computerised system, every meter for light and fan, or for power, is invariably supported by a bill raised by the computer, and every such bill is again invariably despatched by the computer centre. Both these actions guarantee improvement over the manual system, where there is usually little certainty of bills being raised or actually despatched, due to adoption of foul means. In a single state undertaking like UPSEB, it was estimated that computerisation of electricity billing in the Western U.P. district had resulted in a considerable amount of additional revenue.

Now consider the effect of computerisation on an airline. The computerisation of the Ticketing Section has resulted in an easy ticketing procedure and also helps the ticketing agents of the airline, who can access the system on their computers. Moreover, nowadays the customer care, cargo, marketing and other departments are also computerised. This makes customer needs easier to handle. Similarly, most travel agencies have linked themselves to the Computerised Reservation Systems of hotels, airlines, car rentals, etc. This not only saves time and energy, but also helps them in providing better services and instant bookings to customers. The advantages of MIS can be manifold because of the aid to higher-level decision-making. Once the planning, monitoring, reviewing and control processes are facilitated, the benefits can literally multiply several times, over and above the mere shop-floor or clerical TPS applications.

APPROACH TO COMPUTERISATION

The first important stage of organizing MIS at the corporate level is to build up a comprehensive data-base from TPS for the clerical systems. Valid data should initially be classified and codes attached to each data-set. Thereafter the data-base should be constantly updated. The analogy to a reference library system is almost uncanny, where books have to be classified according to subjects (e.g., reference, economics, management, etc.) and then codes attached to each book (e.g., 001 for reference, 338 for economics, 658 for management, etc.). Thereafter the books need constant updating through cataloguing and indexing. A library, however, is not as amenable to easy cross-reference among a vast number of books as a computerized data-base is. With classification, codification and updating, a computerized data-base can help the user with almost instant retrieval of any amount of cross-classified and cross-revised data, thus tremendously helping the decision-making process.

The second important stage of MIS at the corporate level is to decide on the principles of evaluating the raw data for decision-making. For this purpose, the four principles that can be unhesitatingly recommended are: selection, pattern, linkage and overview. The first principle of selection looks at a screened segment of data which can focus attention on variances from standards, deviations from norms, fluctuations from targets and differences from budgets. It is presumed that whatever data conform to the initially fixed standards, norms, targets and budgets are, to that extent, not required to be looked at any further. But whatever is not conforming to the steady state is worth looking at for decision-making purposes. The second principle of pattern is to look at the collection of data and to derive insight by virtue of management ratios, trends, correlations and forecasts. Essentially this is a principle of gaining insight into the given mass of data. The third principle of linkage is a way of looking at a number of widely dispersed data-sets and formulating a coherent picture. The last principle of overview is to derive a total picture which cuts across a number of control parameters and sums up the managerial position.

The third stage of MIS at the corporate level is to realize the above four principles in actual practice. The first principle of selection can be implemented by generating exception-based reports. This requires the safe-keeping of classified, codified and updated data on the computer and retrieving only specially meaningful reports on the basis of exception. The second principle of pattern can be implemented by using mathematical modelling and statistical analysis. Such an analytical approach requires the data-sets to be treated with mathematical models and statistical methods in order to derive meaningful indicators for decision-making. The third principle of linkage can be implemented by inter-relating different data-sets from disparate files or data-bases. The inter-relationships would provide valuable insight across the board. The fourth principle of overview can be implemented by aggregating data. Such a process of aggregation can connect together the classified and codified data for purposes of deriving a managerial insight into the total span of operations.
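Two of these principles are easy to sketch concretely. The budget figures below are hypothetical; the snippet shows selection as an exception-based report (only variances beyond a tolerance are retrieved) and overview as a simple aggregation across the same data.

```python
# A sketch of the selection and overview principles (hypothetical regional
# budget figures): selection surfaces only exceptions beyond a tolerance,
# while overview aggregates everything into one managerial total.

budget = {"north": 100, "south": 120, "east": 90, "west": 110}
actual = {"north": 98, "south": 145, "east": 70, "west": 111}

TOLERANCE = 0.10  # report only variances beyond 10% of budget

# Selection: an exception-based report of out-of-line regions.
exceptions = {
    region: actual[region] - budget[region]
    for region in budget
    if abs(actual[region] - budget[region]) > TOLERANCE * budget[region]
}

# Overview: the total variance across all regions, in one figure.
overview = sum(actual.values()) - sum(budget.values())
```

Regions that conform to budget never appear in the report, which is precisely the labour-saving point of the selection principle.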

STRATEGIC ISSUES OF COMPUTER-AIDED DECISION-MAKING


Transaction processing systems using computers have played a relatively limited role as a management tool. This has been so because decision-making has not been their central theme. Instead, they have been speciality-oriented for on-going clerical needs in personnel (pay roll), book-keeping (accounting), technical data (capital projects) or specific functional areas (materials). Alternatively, they have been project-oriented, used to manage a specific programme of limited time and scope, such as, examination result processing, or, they have been problem-oriented for emergency and random retrieval of information to meet a crisis situation of limited duration and scope.

According to Robert Anderson, the corporate MIS should assist such managerial functions as:

• manufacturing, marketing and other real-time operations,

• futuristic improvement and problem-solving, instead of historical reports of the past actions,

• necessary corrective action rather than book-keeping, and

• monitoring of outside conditions affecting the organisational plans.

Joel E. Ross identifies the reasons for corporate MIS as the same as those for planning in general: it should offset uncertainty, improve the economy of operations, focus on the objectives and provide a device for the control of operations. Such an approach is radically different from the patch-work approach of the transaction processing system. What follows is an identification of some of the strategic issues identified by Ross and others, and their suggested solutions in the Indian context.

Communication Gap One of the reasons for the over-emphasis on the transaction processing system is the communication gap between the computer professional and the user-manager of the system. In India, far too many organisations have become used to separate EDP departments, now increasingly called computer services departments. Because of their training, interests and peer pressure, Ross suggests, there is a compulsive tendency for computer professionals to generate massive data-bases, to install display devices and glittering data-communications techniques, and to install newer and grander designs. This only serves the purpose of empire building, not improved management. There is a familiar situation where the computer professional is engaged in developing computer-aided decision-making but is not able to communicate with the user-manager. The information needs of the user are called for, but the user cannot adequately express them, not being accustomed to rigorous self-analysis. Thereafter, the computer professional works out a plan based on his or her own understanding of the user's needs, to convert them into flow-charts and programs. In the process, the information needs themselves get altered. When the programmer codifies and implements the system, his or her own interpretation gets incorporated, further changing the user's needs. All this ends up frustrating the user-manager. This can be called the "ten-minute syndrome", where sufficient time has not been spent between the user-manager and the computer professional to get all the needs clearly conveyed and understood. A situation arose during examination processing where grace marks had to be allocated by way of moderation. The computer professionals allocated grace marks to all students, which resulted in glaring anomalies where some top-ranking students secured more than 100 per cent marks by virtue of the additional grace marks. Obviously, the Controller of Examinations had not explained properly the mystique of grace marks to the computer professionals!
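The grace-marks anomaly is worth spelling out in code. The marks, pass mark and grace amount below are hypothetical; the naive rule is what the computer professionals implemented, and the corrected rule is one plausible reading of what the Controller actually wanted (grace only for borderline students, capped at full marks).

```python
# A sketch of the grace-marks anomaly (hypothetical figures). The naive
# rule adds grace marks to every student, pushing toppers past full marks;
# the corrected rule lifts only borderline students and caps the total.

FULL_MARKS, PASS_MARK, GRACE = 100, 33, 5

def naive_moderation(marks):
    return marks + GRACE                       # toppers can exceed 100!

def correct_moderation(marks):
    if PASS_MARK - GRACE <= marks < PASS_MARK:
        return min(marks + GRACE, FULL_MARKS)  # lift borderline cases only
    return marks

anomaly = naive_moderation(98)      # a topper pushed beyond full marks
fixed_top = correct_moderation(98)  # topper left unchanged
passed = correct_moderation(30)     # borderline student lifted to a pass
```

The two functions differ by a single condition; the "ten-minute syndrome" is precisely the failure to convey that condition.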

Reliance on Service Bureaus or Computer Vendors Quite often, the Indian user-manager is approached by a computer vendor who brainwashes the management into buying a system, indicating that the system has all the solutions to the managerial problems. The end-result is that the user either gets a system which is too large, with a lot of computer "fat", or gets inadequate computing power for his or her needs. Ross suggests that there should not be any technical romance with the computer vendor, but a return on investment (ROI) approach to expenditure. Further, the user-manager should operate with a master plan, rather than react to the vendor's suggestions. There have been cases where an organization appointed a service bureau, for a large sum of money, to develop a corporate MIS. After spending a year as well as a couple of lakhs of rupees, the user-organization was thoroughly dissatisfied with the recommendations of the service bureau and did not implement them.

Lack of Master Plan The bulk of computer failures are due to the lack of master plans to which hardware acquisition, software development and individual MIS designs can be related. Without such a plan, "islands of mechanization" result, with little integration between separate systems. We can cite two successful cases in this regard. TISCO studied the interfaces of various systems, such as the production planning and control system, the financial control system, and the sales invoicing and order processing system. It was observed that if individual systems were developed without regard to their mutual interfaces, the result would be an absence of communication between the systems, and the incompatibility of the systems would prevail throughout the company. This was prevented by building up sufficient linkages among these systems and developing an integrated approach according to a master plan. A similar approach was also adopted by TELCO, with encouraging results.

Organization of the MIS Function Since clerical systems came first, involving accounting, pay roll, inventory returns and similar financial jobs, the transaction processing system developed around all of them. Following the normal principle of assigning a service activity by "familiarity", the historical trend in India has been to assign the computer to the Controller of Finance or Chief Accountant. This has been the case in many sectors. Only now is the situation being reversed, with the MIS function being placed under the user-manager. With more distributed processing becoming possible, the trend has been to place computer-aided decision-making where it belongs, mainly under the user-manager with his or her own computing power. Already, personal computers (PCs) have made this trend possible in practice, with individual data-bases available to the users. Similarly, terminals are available to most important users to share central computing power. In both cases, PCs as well as terminals, the control of the computer-aided activity has to remain with the user-manager.

Lack of Good Management System It is imperative for a successful corporate MIS on computer that there is good planning and control within the framework of an efficient organizational structure. No degree of sophistication with computers can cure the basic ill of chaotic data management. There have been many organizations where computerization has not brought any tangible improvements because there has been no systematic handling of data or attention paid to data management. In such cases, there would have been considerable gain in first conducting a good Organization and Method (O&M) study. MIS has to be built on top of a management system which includes the organizational arrangements, the structure and procedures for adequate planning and control, the clear establishment of objectives, and all other manifestations of good organization in management. It is interesting to note that good computer professionals know their craft but are simply not oriented to managerial jobs. In other words, the broad-based skills which are necessary to function both in the computer room and in meetings with the user-manager for the MIS are conspicuous by their absence. This phenomenon has been known globally, and that is why computer professionals are often called "machine-mesmerized", being more loyal to their profession than to their organization.

Managerial Participation The single most critical problem in effective computer utilization is the need for understanding and support from top management. Even after top management support is ensured, it is necessary that there is user participation in the design phase of corporate MIS, so as to avoid subsequent extensive and time-consuming re-work. This can be called the "overnight syndrome", where users spell out their needs and expect the computer professionals to deliver the outputs immediately thereafter. Converting jobs for eventual computerization needs a stabilization period, which is all too easily forgotten. It makes good sense for the user-manager to pick up a minimum familiarity with the MIS at the beginning. From the point of view of the organization, corporate MIS is as much a vital part of the operation as marketing and finance are today. Indian Airlines, too, discovered that managers had to be involved in order to get better and more effective information systems by virtue of their participation. A similar approach is being followed in many other organisations.

Failure to Identify Information Needs A clear identification of information needs is fundamental and necessary before designing a corporate MIS. Recently, a Central Government department spent lavish sums on hardware and software to perpetuate the existing 53 MIS reports and to build a sophisticated data-bank without first determining the real information needs of management. It is often forgotten that only that information should go into the corporate MIS which can increase the perception of managers in critical areas such as problems, alternatives, opportunities and plans. It is the user-manager who is to provide the specification for what he or she wants out of the corporate MIS. If the manager fails to do so, the computer professional by default would provide his or her own objectives and own information needs. These would seldom meet the needs of the user-manager.

Poor Systems Prior to Computerization It has been observed that computerization of a poor system will merely increase inefficiency at an accelerating rate. The user-manager gets irrelevant or bad information faster and the bad decisions are made sooner! Hindustan Zinc Ltd., for instance, planned to upgrade and improve their transaction-processing system in a methodical manner. Such clerical systems as ledger accounting were to be upgraded to financial planning; invoicing to sales analysis; inventory accounting to inventory management; and production reports to production planning and control. Well established procedures helped them to make a smooth transition.

Overlooking Human Acceptance A new MIS quite often meets resistance from the user-organisation, because people do not accept what they do not understand. Such reasons for resistance have to be analysed and a new attitude brought in to overcome them. Ross identified the reasons as: threat to the status of the salesmen; threat to the ego of the managers; economic threat to the clerical staff (fear of job loss); insecurity for managers with personal power and a political base; loss of autonomy and control for the production managers and engineers; and frayed inter-personal relations for all others. A number of public and private sector organisations, such as BHEL, Indian Airlines, ITDC, NTPC, etc., have started a systematic programme of training and user-education. It is imperative that such education begins at the top level with computer appreciation, at the middle management level with specific computer applications in their own domains, and at the working level with direct involvement in input and output quality control. It is good to see the bulk of Indian organisations going through such an elaborate process of computer initiation, as there is no short-cut to it.

MODELS OF COMMUNICATION

The term 'communication' originates from the Latin word communicare, which means to share or impart. When used as per its function, it means a common ground of understanding. Communication is the process of exchanging facts, ideas and opinions, and a means that individuals or organisations use for sharing meaning and understanding with one another. In other words, it is the transmission and interaction of facts, ideas, opinions, feelings or attitudes. Communication is an interdisciplinary concept, as theoretically it is approached from various disciplines such as mathematics, accounting, psychology, ecology, linguistics, systems analysis, etymology, cybernetics, auditing, etc. Communication enables us to do important things: to grow, to learn, to be aware of ourselves and to adjust to our environment.

Meaning and definition of communication

Communication is a process which involves organising, selecting and transmitting symbols in an appropriate way to ensure the listener perceives and recreates in his own mind the intended meaning of the communicator.


Communication involves the initiation of meaning in the listener, the transmission of information and thousands of probable stimuli. Human beings have a compulsive urge to communicate with each other. There can be no mutual understanding without communication; mutual understanding is the core of human relations. Communication is like birth, death, breath and wanting to be loved: a part of life itself. Man is a communicating animal; he alone has the power to express in words. Sight, sound, touch, smell and taste are the modes of exchange of messages. The story of man's progress is the story of his progress in communication skills. The degree to which a civilisation or culture progresses is reflected in the state of its communication process.

Communication is a two-fold process between two parties: the sender and the receiver. It involves an exchange and progression of thoughts, ideas, knowledge and information towards a mutually accepted goal or direction. Here are some definitions by experts:

Some interesting comments about communication

Davis: "Process of passing information and understanding from one person to another... The only way that management can be achieved in an organisation is through the process of communication."

Chester Barnard: In an exhaustive theory of organisation, communication would occupy a central place, because the structure, extensiveness and scope of organisations are almost entirely determined by communication techniques.

Importance of Communication

It is an established fact that the present era is often called the 'Age of Communication and Information.' The importance of communication has been greatly emphasized by all management experts. Communication, like birth, death, growth and decay, is a part of individual life as well as organisational existence. Its importance is self-explanatory and a common experience of all. In recent times, communication has turned into a business; rarely would you find managers, subordinates, salesmen, technicians, foremen, lawyers, auditors, consultants, teachers, doctors or anyone else who is not concerned with the difficulties associated with communication.

Managerial Skill Development

It has been rightly observed that 'the number one management problem today is miscommunication.' Group activities directed at common goals cannot be accomplished without communication. Organisational control, coordination and motivation cannot be accomplished if there are lapses in communication. A common practice among many organisations is moving messages vertically, horizontally and diagonally between various officially designated positions. The modern industrial scenario relies heavily on communication for its augmentation and survival. George R. Terry states: "Communication serves as the lubricant, fostering the smooth operations of the management process." The reasons for the growing significance of communication can be judged from the following paragraphs:

• Coordination: Modern complex organisations are large, consisting of numerous employees working towards accomplishing common goals. The organisational structure has many levels of hierarchy, both horizontal and vertical. More often than not, this leads to problems of coordination. Effectual systems of communication encourage better coordination, which is viewed as a necessity among groups; communication channels are vital for the efficient functioning of the organisation as a whole.

• Smooth Working: The smooth and uninterrupted working of an enterprise largely depends on a good communication network. Accurate decision-making and the efficiency of the organisation are anchored in its information supply. If messages meet obstacles in the course of their flow, it is impossible to bring about the smooth and uninterrupted working of the organisation. According to Herbert G. Hicks, "Communication is basic to an organisation's existence from the birth of the organisation through its continuing life".

• Effective Decision-Making: It is essential to have a record of past and present data for immediate and effective decision-making. Communication is the primary means by which information is supplied to help in making decisions. Defining the problem, identifying alternative courses of action and selecting the best option available are possible only when relevant and adequate information is conveyed to the decision-maker. With inadequate or no information, it would be nearly impossible even for the top management to take important decisions. Conversely, it is unlikely that goals and objectives will be achieved unless the top management has smooth interaction with all levels of the organisation.

• Managerial Efficiency: As George Terry's remark quoted earlier suggests, communication encourages managerial efficiency. Efficiency lies in the manner in which individuals and groups are assigned their respective targets. Managerial functions like planning, control, coordination and motivation cannot be discharged without communication. As management is the art of getting things done in collaboration with other people, communication educates personnel in the organisation about the desires of the management. Management communicates goals, policies and targets by issuing verbal and written orders and instructions. Communication is thus a yardstick for measuring managerial efficiency.

• Co-operation: Co-operation among workers is possible only when there is an exchange of information between individuals and groups, and between the management and the employees. This not only promotes industrial peace but also maximises production. A two-way communication network enhances co-operation between people: with co-operation and confidence, messages flow smoothly and receptively, vertically, horizontally and across the organisation. In short, communication promotes co-operation and understanding among employees.

• Effective Leadership: Leadership implies the presence of a leader and followers, between whom there is a continuous process of communication. Communication is the basis for direction and motivation, as well as for the establishment of effective leadership: the followers have to follow the leader, and the conveying of ideas, opinions and feelings keeps the two in constant touch. Thus transmission and reception ensure a two-way traffic, the sine qua non of effective leadership. A manager with good communication skills can become a successful leader of his subordinates. For example, in 1981 Narayana Murthy, with an investment of Rs. 10,000 ($250 at the time) from his wife, founded Infosys with six other software professionals. Under his leadership, Infosys was listed on NASDAQ in 1999. Today, Infosys is acknowledged by customers, employees, investors and the public as a highly respected, dynamic and innovative company. The Economist ranked Narayana Murthy among the ten most admired global business leaders in 2005.

• Job Satisfaction: Communication is essential for achieving job satisfaction. Management conveys messages which promote mutual understanding, and reception and recognition provide job satisfaction to employees. Two-way communication creates the confidence that leads to job satisfaction; openness and the straightforward expression of opinions are necessary in this direction.

• Increase Productivity: Communication helps the management to achieve maximum productivity at minimum cost and to eliminate waste, which are among the main objectives of management. It has been remarked that the archenemy of communication is the very illusion of it; this illusion can be avoided only with an effective system of communication. It is through communication that workers can be well informed about the process of production, new methods of production and the activities of workers in similar organisations. Inter-firm comparison, too, is not possible without effective communication.

• Morale Building: Morale and good relations in the organisation are essential for achieving its goals and promoting its goodwill in the public. An effective system of communication builds good morale and improves human relations. Participatory communication is the best technique of morale building and motivation. S. Khandwala remarked, "Most of the conflicts in business are not basic but are caused by misunderstood motives and ignorance of facts. Proper communication between the interested parties reduces the points of friction and minimises those that inevitably arise".


• Achieving Managerial Roles: Henry Mintzberg has described a manager's job in terms of three sets of roles, namely interpersonal roles, informational roles and decisional roles, and communication plays a vital part in all three. In the interpersonal roles, a manager has to interact constantly with subordinates. In the informational roles, a manager has to collect information from various people and supply the necessary information to others, both inside and outside the organisation. In the decisional roles, too, it is through oral or written media of communication that the manager acts.

The importance of communication may be concluded with the remark of Chester I. Barnard: "The first executive function is to develop and maintain a system of communication."

ROLE OF COMMUNICATION IN BUSINESS

Communication plays a very important role in an organisation; in fact, it is said to be the lifeline of the organisation. Everything in the universe, human or otherwise, communicates, though the means of communication may differ widely. Communication is crucial and unavoidable, as we have views and opinions which we want to convey to another person, a group, or even the outside world.

Communication in an organisation is inevitable. Departments communicate periodically about daily activities and the organisation's relationship with the external world, through written and unwritten means, planned or impromptu. It may be hierarchical, from top to bottom or vice versa; it may be formal, informal, vertical, horizontal or diagonal; it may even take the form of grapevine or rumour. Whatever the means, modes or types, the occurrence of communication is of prime importance. In totality, communication in an organisation is complex and needs to be correctly managed and monitored to avert chaos, crisis or conflict. The basic functions and roles of management cannot be conducted without it: planning, organising, coordinating, budgeting, monitoring, controlling, staffing and delegation, as well as marketing, production, financing, human resource management, research and development, purchasing and selling, cannot be harnessed and their goals achieved devoid of communication. Communication also plays a key role in meetings, whether annual general meetings, ordinary meetings or urgent meetings. The effectiveness of an organisation depends on the success of its meetings, where the goals to be achieved, the targets to be met and the activities to be carried out are discussed and ironed out. If the ideas are not comprehended at meetings, the workers are bound to mess things up. Thus the chairman of a meeting must be an effective communicator, capable of ensuring that everyone has understood correctly what has been discussed. This helps eradicate rumours and grapevine and eventually achieve the set standards, goals and objectives. In conclusion, everyone in an organisation, not only the boss but also the subordinates, needs good communication skills, for it is through communication that all of us jointly strive to achieve the set goals. Remove communication from an organisation and it becomes a dead entity, good for nothing and worth being shut down. Communication is the backbone of an organisation's success.

OBJECTIVES OF COMMUNICATION

The basic objective of human communication is to elicit a reaction from the person we are communicating with. From a business or commercial angle, if we observe any small or large business around us, we will notice that the success a business achieves relies mainly on its power of communication; communication defines the level of success the company attains. Following are a few of the main objectives of business communication.

1. Information: A core objective of business communication is to convey information and keep individuals up to date. All the advertisement campaigns we notice around us, for example, are attempts to convey information to others; in the case of companies, this information generally concerns the product or services on offer. The method of communication may be verbal, written, visual or any other. All companies thrive on information pertinent to their business activity: they must have excellent knowledge of the market, their competitors, government policies, the types of credit they can avail themselves of, the existing economic situation and so on. Pertinent information is a key ingredient of successful business. In recent times, however, because of the arrival of the World Wide Web, there has been a swift outburst in the quantity of information accessible to a company, and it is becoming gradually more difficult to come across information that is genuine, comprehensive, up to date and new. It has therefore become very important for a company to get hold of such information. This demand for correct information has even given rise to a new class of people, the infomediaries, who do not handle any goods but deal only in information. A company not only acquires information but provides it as well; for example, it has to provide factual information about profitability, the quality of its products, the facilities provided to its workers and the services rendered to the community.

2. Motivation: Communication in business is also essential for boosting the workers' motivation. If communication is carried out correctly and the workers are sufficiently encouraged, the work gets completed easily and proficiently, and the workers carry out their functions by themselves, without supervision. Communication should also be used to build a proper working atmosphere, one in which a healthy competitive spirit exists between workers and accomplishments are acknowledged and rewarded. Employees at lower levels of the organisation's chain of command should be encouraged to offer ideas and inputs on how to improve its functioning; this type of communication brings about a feeling of involvement and creates greater loyalty towards the company.

3. Raising Morale: Another extremely significant objective of (internal) business communication is maintaining a sense of high morale among the workers, so that they perform their tasks with dynamism and resilience as a team. As morale is a psychological state, however, high morale is not a lasting feature: an organisation may have high morale among its workers in one phase and discover that they have lost it in the next. To keep morale high, therefore, an organisation has to put in constant effort in that direction. This can be managed by maintaining an open-door policy, keeping tabs on gossip and not permitting destructive rumours to spread among employees.

4. Order and Instructions: An order is an oral or written directive governing the start, end or adjustment of an activity. This form of communication is internal and is executed within the company. Written orders are given when the job is extremely important or the person who is to carry out the task is far away; care must be taken when handing out written orders, and a copy should always be kept to make follow-up easy. Oral orders come into play when the work is urgent and the person is in close proximity. In both cases, however, it is extremely important to follow up.

5. Education and Training: Communication can also be used in business to enhance the scope of knowledge. The goal of education is attained through business communication at three levels: (a) management, (b) employees and (c) the general public.

a. Education for future managers: Junior personnel in the organisation are taught to deal with important assignments involving responsibility, so that in the long run they can achieve something more than their superiors.

b. Education for new recruits: When new personnel join an organisation, they are inducted by being enlightened about the culture of the company, its code of discipline, its work ethos and so on. This is generally carried out through a training programme to accustom the new recruits to the working style of the organisation.

c. Educating the public: This is carried out through advertising, informative seminars, newspapers and journals, to notify the public about the product, the working style of the company and the different schemes it offers.

Nature of Communication

The nature of communication can be explained using the following characteristics:

• Two-way process: Communication can occur only when there are at least two individuals: as shown in fig 1.1, one person has to convey a message and another has to receive it. The receiver need not be a single individual; information may be conveyed to a group of persons collectively, as when a teacher conveys information to a classroom of students. If the receiver needs any clarification, he can ask the sender immediately, for example in face-to-face or telephonic conversation. Communication may also be carried on by means of letters, circulars and the like; if it is conducted by post or email, the receiver may respond by letter or by whatever mode he or the sender prefers.

• Knowledge of language: For successful communication, it is essential that the receiver thoroughly understand the message. To heighten the chances of effective communication, the sender must use a language the receiver is familiar with; if, for example, the receiver cannot understand English and the sender conveys his ideas in English, the communication will inevitably fail.

• Meeting of minds necessary: The receiver must comprehend the message in the sense the sender intends. A consensus, an identity of minds about the meaning, is essential. If a weekly target declared by a supervisor is misconstrued by a worker as a monthly target, there is no such agreement. Inattention, poor vocabulary, faulty pronunciation and the like may result in this lack of consensus.

• The message must have substance: The message holds the receiver's attention only so long as he is interested in the subject matter; in other words, the sender must have something worthwhile for the receiver. For example, any discussion about cricket will be well received by a cricket fanatic.

• Communication can also be conducted through gestures: Communication need not be verbal or written; certain gestures or actions can also convey an individual's willingness or understanding of a given problem. Nodding the head, rolling the eyes and moving the lips are some of the gestures used to convey certain basic ideas.

• Communication is all-pervasive: Communication is omnipresent and exists at all levels of management. The top management conveys information to the middle management and vice versa; similarly, the middle management conveys information to the supervisory staff and vice versa. Communication flows in all directions in a workplace.

• Communication is a continuous process: In every workplace someone is always conveying or receiving information in some form. Sharing or exchanging information is a continual process; as long as there is work, personal, official or unofficial, communication will exist.

• Communication may be formal or informal: Formal communication follows the official channel established by the hierarchy. When a worker wishes to convey information to the production manager, for example, it can be channelled only through the foreman; he cannot bypass the foreman and approach the production manager directly. Informal communication does not follow the official channel: it gives individuals the liberty to convey information freely to anybody, without regard for the hierarchy, as in a discussion among friends.

Process of Communication

The process of communication, as shown in fig 1.2, involves an exchange of ideas and may be verbal or non-verbal in nature. The prerequisite of communication is a message, which must be conveyed through some medium to the recipient in such a way that the recipient understands it in the same sense as the sender intended. The recipient must then respond within a reasonable period; this response to the sender is called feedback. Communication is therefore a two-way process, incomplete without feedback from the recipient on how well the message has been understood.


Following are the components of the process of communication.

Context: Communication is affected by the context in which it takes place; this context may be physical, social, chronological or cultural. Every communication proceeds within a context, and the sender chooses the message to communicate with that context in mind.

Sender / Encoder: The sender or encoder is the person who sends the message. A sender makes use of symbols (words, graphics or visual aids) to convey the message and produce the required response; for instance, a training manager conducting a session for new joinees. The sender may be an individual, a group or an organisation, and the sender's views, background, approach, skills and competencies all have an impact on the message. The verbal and non-verbal symbols chosen are essential in ensuring that the recipient interprets the message in the same terms as the sender intended.

Message: The message is the key idea that the sender wants to communicate; it is what elicits the response of the recipient. The communication process begins with deciding on the message to be conveyed, and one must ensure that its main objective is clear.

Medium: The medium is the means used to transmit the message. The sender must choose an appropriate medium, or the message may not reach the desired recipients. The choice of an appropriate medium of communication is essential for making the message effective and correctly interpreted by the recipient, and it varies with the features of the communication. A written medium is chosen when a message has to be conveyed to a small group of people, while an oral medium is chosen when spontaneous feedback is required from the recipient and queries can be addressed on the spot.

Recipient / Decoder: The recipient or decoder is the person for whom the message is intended. The degree to which the decoder understands the message depends on various factors, such as the recipient's knowledge, his responsiveness to the message and the reliance of the encoder on the decoder.

Feedback: Feedback is the main component of the communication process, as it permits the sender to analyse the efficacy of the message. It helps the sender confirm that the decoder has interpreted the message correctly. Feedback may be verbal or non-verbal (a smile, a sigh, etc.), or it may be in written form (memos, reports, etc.).
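The components above form a simple loop: the sender encodes a message, chooses a medium, the recipient decodes it, and feedback closes the circuit. The following Python sketch models that loop; the class and function names are purely illustrative, not part of any standard library or framework.

```python
from dataclasses import dataclass


@dataclass
class Message:
    content: str  # the key idea the sender wants to convey
    medium: str   # e.g. "oral", "written", "email"


def send(sender: str, recipient: str, message: Message) -> str:
    """Deliver the message and collect feedback from the recipient.

    In practice the recipient's interpretation depends on knowledge,
    responsiveness and reliance on the encoder; here we simply return
    an acknowledgement as the feedback.
    """
    return f"{recipient} acknowledges '{message.content}' via {message.medium}"


# Communication is complete only when feedback returns to the sender.
fb = send("manager", "worker", Message("weekly target: 500 units", "written"))
print(fb)
```

The point of the sketch is that `send` is not finished until it returns something to the sender: without the returned feedback, the sender cannot know whether the message was interpreted as intended.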

Models of communication are the conceptual models used to explain the human communication process. The first major model of communication was developed in 1949 by Claude Elwood Shannon and Warren Weaver for Bell Laboratories.[1] Following the basic concept, communication is the process of sending and receiving messages, or transferring information from one party (the sender) to another (the receiver).

Models aim to present communication as a process. A model is like a map, representing selected features of a territory; it cannot be comprehensive. We therefore need to be selective, knowing why we are using a model and what we hope to gain from it.

Transmission models - criticism


The Shannon and Weaver and Lasswell models are typical of so-called transmission models of communication. These two models also underlie many others in the American tradition of research, showing Source-Message/Channel-Receiver as the basic process of communication. In such models, communication is reduced to a question of transmitting information.

Although transmission models have been highly influential in the study of human communication, it can be argued that, while Shannon and Weaver's work was very fertile in fields such as information theory and cybernetics, it may actually be misleading in the study of human communication.

Some criticisms which could be made of such models are:

The conduit metaphor

Their model presents us with what has been called the 'conduit metaphor' of communication (Reddy, 1979): the source puts ideas into words and sends the words to the receiver, who thereby receives the ideas. The whole notion of 'sending' and 'receiving' may be misleading, since, after all, once I have 'sent' a message, I still have it. The underlying metaphor is of putting objects into containers and sending them through some sort of conduit to the receiver, who receives the containers and takes the objects out. The important question which is overlooked is: how do the 'objects' get into the 'containers'? In other words, how do we succeed in putting meanings 'into' words, and how does somebody else succeed in taking the meanings 'out of' words? Transmission models do not deal with meaning.

It's probably worth saying that that's not really a criticism of them, since they weren't intended to deal with meaning, but rather a criticism of their (mis)application to human-to-human communication. One might question how useful the application of information theory is. It may be helpful to academics in that it supplies them with an arcane vocabulary which gives them some kind of kudos. It also appears to offer a 'scientific' methodology, but it's worth bearing in mind Cherry's warning (speaking of the relationship between entropy and information):

...when such an important relationship ... has been exhibited, there are two ways in which it may become exploited; precisely and mathematically, taking due care about the validity of applying the methods; or vaguely and descriptively. Since this relationship has been pointed out, we have heard of 'entropies' of languages, of social systems, and economic systems and of its use in various method-starved studies. It is the kind of sweeping generality which people will clutch like a straw.

1950s: Early models

Mass communication research was traditionally concerned with political influence over the mass press, and later with the influences of films and radio. The 1950s was a fertile period for model-building, accompanying the rise of sociology and psychology, and it was in the USA that a science of communication was first discussed.

The earliest model was a simple sender-channel-message-receiver model.

Modifications added the concept of feedback, leading to a loop.

14

Page 15: DocumentIT

The next development was the recognition that receivers normally selectively perceive, interpret and retain messages.

Gerbner is important because he recognises the transactional nature of much communication, i.e. the "intersubjectivity of communication". The result is that communication is always a matter of negotiation and cannot be predicted in advance.

Communication to mass communication

Early on, a sub-set of models began to refer specifically to mass communication. Westley and Maclean were important in this. Their model emphasises the significance of audience demand rather than just the communicator’s purpose.

1960s and 1970s

The attention now moved away from the effects of the mass media on opinions, behaviour and attitudes, and began to focus on their longer-term and socialising effects. The audience were seen less as victims of the media and more as active in adopting or rejecting the guidelines the mass media offered: an emphasis on "an active audience".

Nevertheless a healthy suspicion of the mass media has continued through the 1970s and 1980s, especially in terms of news selection and presentation.

A more recent development is an interest in the 'information society', in which the 'boundary separating mass communication from other communication processes is becoming much less clear'. There has also been an accelerating 'internationalisation' of mass communication.

Basic models include:

Lasswell formula (1948): Useful but too simple. It assumes the communicator wishes to influence the receiver, and therefore sees communication as a persuasive process; it assumes that messages always have effects; it exaggerates the effects of mass communication; and it omits feedback. On the other hand, it was devised in an era of political propaganda, and it remains a useful introductory model. Braddock (1958) modified it to include circumstances, purpose and effect.

Shannon and Weaver (1949): Highly influential, and sometimes described as "the most important" model (Johnson and Klare). Communication is presented as a linear, one-way process; Osgood and Schramm developed it into a more circular model. Shannon and Weaver make a distinction between source and transmitter, and between receiver and destination: there are two functions at the transmitting end and two at the receiving end. The model has been criticised for suggesting a definite start and finish to the communication process, which in fact is often endless.

Gerbner (1956): The special feature of this model is that it can be given different shapes depending on the situation it describes. There is a verbal as well as a visual formula (as with Lasswell):

1 someone
2 perceives an event
3 and reacts
4 in a situation
5 through some means
6 to make available materials
7 in some form
8 and context
9 conveying content
10 with some consequence

The flexible nature of the model makes it useful, and it allows an emphasis on perception. It could explain, for example, the perceptual problems of a witness in court and, in the media, it helps us to explore the connection between reality and the stories given on the news.

Westley & MacLean (1957): Another influential model. The authors were keen to create a model which showed the complexities of mass communication, hence the emphasis on having to interpret a mass of Xs (events which are communicated in the media). It oversimplifies the relationships between participants by not showing power relations between them; it makes the media process seem more integrated than it may actually be; and it does not show the ways in which different media may serve different interests of the state (e.g. the difference between a state broadcaster and a private one).

Linear Model

According to the Encyclopaedia Britannica, one of the most productive schematic models of a communication system proposed as an answer to Lasswell's question (Who says What to Whom in What Channel with What Effect?) emerged in the late 1940s, largely from the speculations of two American mathematicians, Claude Shannon and Warren Weaver. The simplicity, clarity and surface generality of their model proved attractive to many students of communication in various disciplines, although it is neither the only model of the communication process extant nor universally accepted. As originally conceived, the model contained five elements, an information source, a transmitter, a channel of transmission, a receiver and a destination, all arranged in linear order. Messages (electronic messages, initially) were supposed to travel along this course, to be transformed into electric energy by the transmitter and reconstituted into intelligible language by the receiver. In due course, the five elements of the model were renamed to specify components for other types of communication transmitted in various manners, and the information source was split into its components (both source and message) to provide a wider range of applicability. The six constituents of the revised model are:

A source
An encoder
A message
A channel
A decoder
A receiver

Some communication systems have simple components. In a telephone call, for example: the source is a person on a landline telephone; the encoder is the mouthpiece of the telephone; the message is the words spoken; the channel is the electrical wires along which the words (now electrical impulses) travel; the decoder is the earpiece of another telephone; and the receiver is the mind of the listener.

In other communication systems, the components are harder to isolate, e.g. the communication of the emotions of an artist through a painting, to which people may act in response long after the message was created. Begging a multitude of psychological, aesthetic and sociological questions concerning the exact nature of each component, the linear model appeared, from the commonsense perspective at least, to explain in general terms how communication occurred. It did not, however, indicate the reason for the inability of certain communications, obvious in daily life, to fit its neat model.

Aristotle's Model

Aristotle took the first step towards the development of a communication model, devising an easy, simple and elementary model of the communication process. As shown in figure 1.4, a communication event has three main ingredients:

The Speaker
The Speech
The Audience


Shannon and Weaver

The original model was designed to mirror the functioning of radio and telephone technologies and consisted of three primary parts: sender, channel and receiver. The sender was the part of a telephone a person spoke into, the channel was the telephone itself, and the receiver was the part of the phone through which one could hear the other person. Shannon and Weaver also recognised that static often interferes with listening to a telephone conversation; this they deemed noise. Noise may also mean the absence of a signal.[1]

In a simple model, often referred to as the transmission model or standard view of communication, information or content (e.g. a message in natural language) is sent in some form (such as spoken language) from an emitter / sender / encoder to a destination / receiver / decoder. This common conception views communication as a means of sending and receiving information. The strengths of this model are simplicity, generality and quantifiability. Claude Shannon and Warren Weaver structured the model around the following elements:

1. An information source, which produces a message.
2. A transmitter, which encodes the message into signals.
3. A channel, to which signals are adapted for transmission.
4. A receiver, which 'decodes' (reconstructs) the message from the signal.
5. A destination, where the message arrives.
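Because the model is strictly linear, it maps naturally onto a chain of functions. The Python sketch below passes a message through the stages, with the channel optionally corrupting the signal to simulate noise; all names are illustrative, not part of any standard library.

```python
import random


def transmitter(message: str) -> list[int]:
    """Encode the message into a signal (here, a list of character codes)."""
    return [ord(ch) for ch in message]


def channel(signal: list[int], noise_level: float = 0.0) -> list[int]:
    """Carry the signal; with probability noise_level, distort each element."""
    noisy = []
    for code in signal:
        if random.random() < noise_level:
            noisy.append(code + random.randint(1, 3))  # noise: shifted code
        else:
            noisy.append(code)
    return noisy


def receiver(signal: list[int]) -> str:
    """Decode (reconstruct) the message from the signal."""
    return "".join(chr(code) for code in signal)


# With a noiseless channel, the destination gets the source message intact.
source_message = "meet at noon"
destination = receiver(channel(transmitter(source_message), noise_level=0.0))
print(destination)  # meet at noon
```

Raising `noise_level` above zero illustrates Shannon and Weaver's technical problem: the destination may then receive a message that differs from the one the source produced.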

Shannon and Weaver argued that there were three levels of problems for communication within this theory.

The technical problem: how accurately can the message be transmitted?

The semantic problem: how precisely is the meaning 'conveyed'?

The effectiveness problem: how effectively does the received meaning affect behavior?

Daniel Chandler critiques the transmission model by stating:[3]

It assumes communicators are isolated individuals.

No allowance for differing purposes.

No allowance for differing interpretations.

No allowance for unequal power relations.

No allowance for situational contexts.

David Berlo

In 1960, David Berlo expanded on Shannon and Weaver's (1949) linear model of communication and created the SMCR Model of Communication. The Sender-Message-Channel-Receiver model separated the model into clear parts and has been expanded upon by other scholars.


Schramm

Communication is usually described along a few major dimensions: message (what type of things are communicated), source/emitter/sender/encoder (by whom), form (in which form), channel (through which medium), and destination/receiver/target/decoder (to whom). Wilbur Schramm (1954) also indicated that we should examine the impact that a message has (both desired and undesired) on its target.[5] Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, or another entity (such as a corporation or group of beings).

Communication can be seen as processes of information transmission governed by three levels of semiotic rules:

1. Syntactic (formal properties of signs and symbols),
2. Pragmatic (concerned with the relations between signs/expressions and their users), and
3. Semantic (study of relationships between signs and symbols and what they represent).

Therefore, communication is social interaction where at least two interacting agents share a common set of signs and a common set of semiotic rules. This commonly held rule in some sense ignores autocommunication, including intrapersonal communication via diaries or self-talk, both secondary phenomena that followed the primary acquisition of communicative competences within social interactions.

Barnlund

In light of these weaknesses, Barnlund (2008) proposed a transactional model of communication.[6] The basic premise of the transactional model is that individuals are simultaneously engaging in the sending and receiving of messages.

In a slightly more complex form, a sender and a receiver are linked reciprocally. This second view of communication, referred to as the constitutive model or constructionist view, focuses on how an individual communicates as the determining factor of the way the message will be interpreted. Communication is viewed as a conduit: a passage in which information travels from one individual to another, this information becoming separate from the communication itself. A particular instance of communication is called a speech act. The sender's personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures, or gender, which may alter the intended meaning of message contents. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook, and that these two codebooks are, at the very least, similar if not identical. Although something like codebooks is implied by the model, they are nowhere represented in it, which creates many conceptual difficulties.

Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information. Canadian media scholar Harold Innis theorised that people use different types of media to communicate, and that which one they choose offers different possibilities for the shape and durability of society (Wark, McKenzie 1997). His famous example is ancient Egypt, which built itself out of two media with very different properties: stone and papyrus. Papyrus he called 'space binding': it made possible the transmission of written orders across space and empires, enabling the waging of distant military campaigns and colonial administration. Stone he called 'time binding': through the construction of temples and pyramids, authority could be sustained from generation to generation. Through these media, societies could change and shape their communication (Wark, McKenzie 1997).

Ritual models of communication

Early models were based on a transmissive or transportation approach (i.e. assuming that communication is one-way). James Carey in 1975 was the first to challenge this. He suggested an alternative view of communication as ritual, in which communication is "linked to sharing, participation, association, fellowship … the maintenance of society in time; not the act of imparting information but the representation of shared beliefs".

As a result there is more emphasis on signs and symbols, and medium and message are harder to separate. Communication is seen as timeless and unchanging. The Christmas tree illustrates the model: it symbolises ideas and values of friendship and celebration but has no instrumental purpose. The tree is both medium and message.

Communication as display and attention

As well as transmissive and ritual models, there is a third, which aims to catch and hold our attention. The main goal is economic: consumption. This makes sense in terms of a mass media audience who use the media for entertainment and escapism. The media here work like a magnet, attracting the audience temporarily and sometimes repulsing it. The theory is associated with Altheide & Snow (1979) and McQuail (1987).

Communication Network


An organisation is a composite of various individuals working in unison towards a common goal. They are constantly interacting with each other and with people outside the company. The communication network in an organisation is bifurcated into two parts:

INTERNAL COMMUNICATION

Interaction among members of the same organization is termed internal communication. It can be both formal and informal. Large organizations with hundreds of employees cannot communicate with and directly interact with everyone, so they adopt a number of strategies, e.g. newsletters and annual reports, to communicate the essential message. In such large setups, it is impossible and unnecessary to transmit every piece of information to everyone. Informal communication is prevalent in organizations with a small workforce of approximately 20 people, all of whom interact directly with each other every day; almost all messages are communicated back and forth in an informal manner.

The channels of communication may be as follows:

Vertical
Horizontal
Diagonal

VERTICAL COMMUNICATION

Vertical communication is the upward and downward flow of messages: information is transmitted from the top management to the employees working in the organisation, or vice versa. Since direct upward and downward interaction is impossible on all occasions, especially when the number of people involved is high, messages travel with the assistance of mediators or opinion leaders. In such situations, it is highly likely that the message will be distorted as it travels from one person to another. Consider the analogy of the game Chinese Whispers to understand how distortion occurs: the content of the original message changes as it advances from one person to another, with the addition or deletion of words, and by the time it reaches the last person it has often lost its meaning entirely. Translated to the organizational set-up, messages are similarly distorted as they are passed upwards or downwards. Distortion of the original message can be avoided by ensuring that the message passed on is not fragmented and that fewer people pass the information further. Further efforts could be made to ensure that there is direct communication within departments; the heads of the various departments could form a close link and disperse information. Distortions can also be minimised through the use of electronic media and e-mail.
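The distortion described above can be illustrated with a hypothetical simulation. The function name and the word-drop probability are invented for the example; dropping words is a crude stand-in for the many ways a relayed message degrades.

```python
import random

def relay_chain(message: str, hops: int, drop_rate: float, seed: int = 0) -> str:
    """Pass a message through `hops` intermediaries; each hop may drop words."""
    rng = random.Random(seed)
    words = message.split()
    for _ in range(hops):
        # Each intermediary forgets each word independently with probability drop_rate.
        words = [w for w in words if rng.random() > drop_rate]
    return " ".join(words)

order = "ship fifty units to the Pune warehouse by Friday"
print(relay_chain(order, hops=0, drop_rate=0.1))  # direct communication: intact
print(relay_chain(order, hops=8, drop_rate=0.1))  # a long chain loses words
```

The longer the chain of mediators, the more of the original message disappears, which is the point the Chinese Whispers analogy makes.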

LATERAL/HORIZONTAL COMMUNICATION

Interaction with peers or colleagues is called lateral or horizontal communication. This can prove to be the most effective form of communication, as peers are not stalled by 'chain-of-command' procedures. The volume of horizontal communication a company benefits from depends on the interdependence of its departments: if work is conducted with the operations of the various departments in mind, communication is improved and more inclusive. Without lateral communication, there cannot be productive development at the organisational level; there would be a lack of coordination and cooperation, and numerous forced attempts would be needed to amalgamate the activities of various departments. It could also cause repetition of work and poor employee relationships.

DIAGONAL COMMUNICATION

In an organisation, communication does not necessarily move along a specific path; vertical and lateral forms, as well as informal and diagonal forms, of transferring messages are all vital. As shown in figure 1.9, in diagonal communication there is no fixed path for the transmission of information: at certain stages it could take an upward trend, then a lateral direction, and finally move downward, or even skip a few stages. This channel is considered extremely effective because hierarchical bindings are removed and there is a free flow of communication irrespective of position or status. Furthermore, it facilitates relationship-building and bonding between superior and subordinate; in many Western countries, managers are trained to harmonise with employees, which works to eradicate the fear of status and position. Nonetheless, this channel can lead to gossip, grapevine and rumours: since nobody is directly accountable for the flow of information, nobody is prepared to assume responsibility. Only a sensible manager can filter the information amid rumours and gossip, note the aim of the sender, and reach a definite conclusion. This channel can nevertheless prove challenging for managers who aspire to control the flow of information; they might feel that their controlling authority is being undermined. However, this is a temporary phase, and with continuous and mature communication it can be straightened out.

EXTERNAL COMMUNICATION

Communication is an ongoing process that occurs with people both within and outside the organisation. If a company is to survive in a competitive environment, it has to practise the latter form of communication as well, for the image of the company depends on external communication. External communication can take a number of forms:

Advertising
Media interaction
Public relations
Presentations
Negotiations
Mails
Telegrams
Letters

External communication can be oral or written. The first three forms mentioned above, i.e. advertising, media interaction and public relations, fall mainly within the field of corporate communications. Establishing good relations, negotiating or conducting deals, interacting with clients, issuing tenders, soliciting proposals and sending letters are all conducted through external communication. This is a complex job, since interaction varies between people from myriad disciplines, with diverse personalities and varied expectations. When communicating with external customers, nearly all the skills required for proficient communication have to be brought to the forefront in order to avoid any embarrassment or drop in performance. On certain occasions, in the course of internal communication, individuals may seem lackadaisical; the same laid-back behaviour would not be considered appropriate in external communication. As employees are the face of the company, they have to take the image of the organisation into account and generate a positive impression.

Development of Modern Models of Communication and their Role in Business Organizations

Communication experts and scholars have described different models of communication. These models are highlighted below:

1. Aristotle's Model: This model was developed by the Greek philosopher Aristotle. It clearly identifies three elements of communication: (a) Sender (b) Message (c) Receiver

In this model, communication begins when the sender sends a message to the receiver. The response of the receiver is not considered, which is why it is known as a one-way communication model.

2. Newcomb's Communication Model: Newcomb proposed a triangular communication model. According to him, if any third party exists between sender and receiver, that third party will affect the communication. The main theme of this model is that sender and receiver rely upon the third party to communicate rather than communicating directly, so both concerned parties must establish trust in the third party. If any two parties among A, B and C want to communicate with the help of the third, they need harmony with and confidence in each other; otherwise communication will be hampered.

3. Thayer's Organizational Model: Thayer clearly stated in his model that an individual can influence the organizational structure through his communication process. He mentioned four aspects which affect the communication process between sender and receiver:

(a) Individual

(b) Inter personal

(c) Organizational

(d) Technical

4. Circular Model: This model highlights two-way communication; the response of the receiver is given importance, so the sender always anticipates feedback. The receiver reacts in the context of the message forwarded by the sender, and as a result communication is completed.

5. Shannon-Weaver Model: This model shows the transmission of a message from a source to a destination. The sender initiates communication based on thoughts and transmits information through selected media to the receiver; on the basis of the received message, the receiver transmits his emotions and feelings as feedback. During the communication process there is noise which influences and affects the whole communication. This model emphasises the two-way flow of communication and therefore helps to measure its effectiveness.


6. Modern Communication Models: With developments in communication systems, further models have emerged that add important elements. In these models the sender prepares communication in a planned way and sends the message to the receiver through an appropriate platform; when the receiver reacts, the communication process is complete. These models include:

(a) Transactional model,

(b) Interactional model,

(c) Linear model,

(d) Berlo’s model,

(e) Exchange theory model

COMMUNICATION SOFTWARE

Communication software is used to provide remote access to systems and exchange files and messages in text, audio and/or video formats between different computers or users. This includes terminal emulators, file transfer programs, chat and instant messaging programs, as well as similar functionality integrated within MUDs. The term is also applied to software operating a bulletin board system but seldom to that operating a computer network or Stored Program Control exchange.


Section II Computer Networks

A computer network consists of a collection of computers, printers and other equipment connected together so that they can communicate with each other. Fig 1 gives an example of a network in a school, comprising a local area network (LAN) connecting computers with each other, the internet, and various servers.


Broadly speaking, there are two types of network configuration: peer-to-peer networks and client/server networks.

Peer-to-peer networks are more commonly implemented where fewer than ten computers are involved and where strict security is not necessary. All computers have the same status, hence the term 'peer', and they communicate with each other on an equal footing. Files, such as word processing or spreadsheet documents, can be shared across the network, and all the computers on the network can share devices, such as printers or scanners, which are connected to any one computer.

Client/server networks are more suitable for larger networks. A central computer, or 'server', acts as the storage location for files and applications shared on the network. Usually the server is a higher than average performance computer. The server also controls the network access of the other computers which are referred to as the 'client' computers. Typically, teachers and students in a school will use the client computers for their work and only the network administrator (usually a designated staff member) will have access rights to the server.
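The client/server pattern can be sketched with Python's standard socket library. This is a hedged toy illustration, not a real network server: it handles a single client on localhost, and the message text is invented for the example.

```python
import socket
import threading

def run_server(sock: socket.socket) -> None:
    """A minimal 'server': accept one client and echo its message back."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"server received: " + data)

# The server binds to an ephemeral localhost port and listens for clients.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=run_server, args=(server_sock,), daemon=True).start()

# A 'client' workstation requesting a service from the central server.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"store this file")
    reply = client.recv(1024)

server_sock.close()
print(reply.decode())  # prints: server received: store this file
```

The asymmetry in the code mirrors the text: the server is a long-running central process that clients connect to, while each client opens a connection only for the duration of a request.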


Table 1 provides a summary comparison between Peer-to-Peer and Client/Server Networks.

Components of a Network

A computer network comprises the following components:

At least two computers

Cables that connect the computers to each other, although wireless communication is becoming more common (see Advice Sheet 20 for more information)


A network interface device on each computer (this is called a network interface card or NIC)

A switch, used to direct data from one point to another (hubs are outdated and little used in new installations)

Network operating system software

Structured Cabling

The two most popular types of structured network cabling are twisted-pair (used by 10BaseT Ethernet) and thin coax (used by 10Base2). 10BaseT cabling looks like ordinary telephone wire, except that it has 8 wires inside instead of 4. Thin coax looks like the copper coaxial cabling that's often used to connect a video recorder to a TV.

A network consists of two or more computers that are linked in order to share resources (such as printers and CDs), exchange files, or allow electronic communications. The computers on a network may be linked through cables, telephone lines, radio waves, satellites, or infrared light beams.

Two very common types of networks include:

Local Area Network (LAN)
Wide Area Network (WAN)

You may also see references to a Metropolitan Area Network (MAN), a Wireless LAN (WLAN), or a Wireless WAN (WWAN).

Local Area Network

A Local Area Network (LAN) is a network that is confined to a relatively small area. It is generally limited to a geographic area such as a writing lab, school, or building.

Computers connected to a network are broadly categorized as servers or workstations. Servers are generally not used by humans directly, but rather run continuously to provide "services" to the other computers (and their human users) on the network. Services provided can include printing and faxing, software hosting, file storage and sharing, messaging, data storage and retrieval, complete access control (security) for the network's resources, and many others.

Workstations are called such because they typically have a human user who interacts with the network through them. A workstation was traditionally a desktop, consisting of a computer, keyboard, display, and mouse, or a laptop with an integrated keyboard, display, and touchpad. With the advent of the tablet computer and touch-screen devices such as the iPad and iPhone, our definition of workstation is quickly evolving to include those devices, because of their ability to interact with the network and utilize network services.

Servers tend to be more powerful than workstations, although configurations are guided by needs. For example, a group of servers might be located in a secure area, away from humans, and only accessed through the network. In such cases, it would be common for the servers to operate without a dedicated display or keyboard. However, the size and speed of the server's processor(s), hard drive, and main memory might add dramatically to the cost of the system. On the other hand, a workstation might not need as much storage or working memory, but might require an expensive display to accommodate the needs of its user. Every computer on a network should be appropriately configured for its use.


On a single LAN, computers and servers may be connected by cables or wirelessly. Wireless access to a wired network is made possible by wireless access points (WAPs). These WAP devices provide a bridge between computers and networks. A typical WAP might have the theoretical capacity to connect hundreds or even thousands of wireless users to a network, although practical capacity might be far less.

Servers will nearly always be connected to the network by cables, because cable connections remain the fastest. Workstations which are stationary (desktops) are also usually connected by cable, although the cost of wireless adapters has dropped to the point that, when installing workstations in an existing facility with inadequate wiring, it can be easier and less expensive to use wireless for a desktop.

See the Topology, Cabling, and Hardware sections of this tutorial for more information on the configuration of a LAN.

Wide Area Network

Wide Area Networks (WANs) connect networks in larger geographic areas, such as Florida, the United States, or the world. Dedicated transoceanic cabling or satellite uplinks may be used to connect this type of global network.

Using a WAN, schools in Florida can communicate with places like Tokyo in a matter of seconds, without paying enormous phone bills. Two users half a world apart with workstations equipped with microphones and webcams might teleconference in real time. A WAN is complicated: it uses multiplexers, bridges, and routers to connect local and metropolitan networks to global communications networks like the Internet. To users, however, a WAN will not appear much different from a LAN.

Advantages of Installing a School Network

User access control.

Modern networks almost always have one or more servers which allow centralized management of users and of the network resources to which they have access. User credentials on a privately owned and operated network may be as simple as a user name and password, but with ever-increasing attention to computing security issues, these servers are critical to ensuring that sensitive information is only available to authorized users.

Information storing and sharing.

Computers allow users to create and manipulate information. Information takes on a life of its own on a network. The network provides both a place to store the information and mechanisms to share that information with other network users.

Connections.

Administrators, instructors, and even students and guests can be connected using the campus network.

Services.

The school can provide services, such as registration, school directories, course schedules, access to research, and email accounts, and many others. (Remember, network services are generally provided by servers).

Internet.

The school can provide network users with access to the internet, via an internet gateway.


Computing resources.

The school can provide access to special purpose computing devices which individual users would not normally own. For example, a school network might have high-speed high quality printers strategically located around a campus for instructor or student use.

Flexible Access.

School networks allow students to access their information from connected devices throughout the school. Students can begin an assignment in their classroom, save part of it on a public access area of the network, then go to the media center after school to finish their work. Students can also work cooperatively through the network.

Workgroup Computing.

Collaborative software allows many users to work on a document or project concurrently. For example, educators located at various schools within a county could simultaneously contribute their ideas about new curriculum standards to the same document, spreadsheets, or website.

Disadvantages of Installing a School Network

Expensive to Install.

Large campus networks can carry hefty price tags. Cabling, network cards, routers, bridges, firewalls, wireless access points, and software can get expensive, and the installation would certainly require the services of technicians. But, with the ease of setup of home networks, a simple network with internet access can be set up for a small campus in an afternoon.

Requires Administrative Time.

Proper maintenance of a network requires considerable time and expertise. Many schools have installed a network, only to find that they did not budget for the necessary administrative support.

Servers Fail.

Although a network server is no more susceptible to failure than any other computer, when the file server "goes down" the entire network may come to a halt. Good network design practice says that critical network services (provided by servers) should be redundant on the network whenever possible.

Cables May Break.

The Topology chapter presents information about the various configurations of cables. Some of the configurations are designed to minimize the inconvenience of a broken cable; with other configurations, one broken cable can stop the entire network.

Security and compliance.

Network security is expensive. It is also very important. A school network would possibly be subject to more stringent security requirements than a similarly sized corporate network, because of its likelihood of storing personal and confidential information of network users, the danger of which can be compounded if any network users are minors. A great deal of attention must be paid to network services to ensure all network content is appropriate for the network community it serves.

Networking goals and applications

Many organizations use computers for the management of various fields. They may have a number of computers performing different jobs in different departments of the same organization, and many branches in different cities. It sometimes becomes necessary to load the same programs, software, and sometimes the same data files, with the same information, on all the computers. This wastes time as well as memory space. Starting from this basic need, computers were connected to each other, resulting in a computer network; various transmission media were used to build such networks. The main goals of these networks are as follows:

Resource Sharing: This is the main aim of a computer network: to make all programs, peripherals and data on any one computer available to all other computers in the network, without regard to their physical locations. Thus a user at a great distance can share resources or view the data of a computer in the same way a local user does. Another aspect of resource sharing is load sharing: if required, a job can be performed by partitioning it across various computers in the network, which reduces both the time taken and the load on any particular computer.

Cost Reduction: Another goal of networking is the reduction of cost. Resource sharing automatically reduces cost, and hence money can be saved. Moreover, the price of small computers is very low compared to mainframes: although mainframes are roughly ten times faster than microcomputers, the price-to-performance ratio is much better for small computers, and a large computer can cost a thousand times more than a small one. Because of this imbalance, powerful personal computers have been developed that can share data and other resources kept on one or more shared file-server machines. Thus one goal of a network is to do, at minimum cost, the same job that would otherwise be possible only on large, very expensive computers.

Communication Medium: Another goal of a computer network is to provide a powerful communication medium among widely separated people. It is easy for two or more people living far apart to work on the same project by partitioning it using a network: they can write programs, hold discussions or even write a report together while far away from each other. When a change is required in a data file or document and is made on one machine, others can see it immediately, which is possible only through a network; otherwise they would have to wait several days for a letter or some other medium. A network thus makes for speedy cooperation and enhances human-to-human communication.

Improve Performance: A further goal of a network is to improve the accessibility as well as the performance of a system. The performance of a computer can be improved by adding one or more processors to it as its workload grows. For example, if the system is saturated, rather than replacing it with a larger one at great expense, it is better to add more processors at lower cost and with less disruption to users. This improves both the accessibility and the performance of the system.


PRIVATE NETWORK

In the Internet addressing architecture, a private network is a network that uses private IP address space, following the standards set by RFC 1918 and RFC 4193. These addresses are commonly used for home, office, and enterprise local area networks (LANs), when globally routable addresses are not mandatory, or are not available for the intended network applications. Under Internet Protocol version 4 (IPv4), private IP address spaces were originally defined in an effort to delay IPv4 address exhaustion, but they are also a feature of the next-generation Internet Protocol, IPv6.

These addresses are characterized as private because they are not globally delegated, meaning they are not allocated to any specific organization, and IP packets addressed by them cannot be transmitted onto the public Internet. Anyone may use these addresses without approval from a regional Internet registry (RIR). If such a private network needs to connect to the Internet, it must use either a network address translator (NAT) gateway, or a proxy server.
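For example, whether an IPv4 address falls in one of the RFC 1918 private ranges can be checked with Python's standard ipaddress module; the helper function below is a sketch written for this illustration.

```python
import ipaddress

# The IPv4 blocks that RFC 1918 reserves for private use.
private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """True if the IPv4 address lies in one of the RFC 1918 ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in private_blocks)

print(is_rfc1918("192.168.1.10"))  # True  - a typical home LAN address
print(is_rfc1918("8.8.8.8"))       # False - globally routable

# The standard library reaches the same verdict via the is_private flag:
print(ipaddress.ip_address("10.1.2.3").is_private)  # True
```

Packets carrying such source or destination addresses are dropped at the public Internet's edge, which is why a NAT gateway or proxy is required, as described above.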

Public network

A public data network (PDN) is a network established and operated by a telecommunications administration, or a recognized private operating agency, for the specific purpose of providing data transmission services for the public.

In communications, a PDN is a circuit- or packet-switched network that is available to the public and that can transmit data in digital form. A PDN provider is a company that provides access to a PDN and that provides any of X.25, frame relay, or cell relay (ATM) services. Access to a PDN generally includes a guaranteed bandwidth, known as the committed information rate (CIR), and costs for the access depend on the guaranteed rate. PDN providers differ in how they charge for temporary increases in required bandwidth (known as surges): some use the amount of overrun; others use the surge duration.


National network

The National Network (or National Truck Network) is a network of approved state highways and Interstates for commercial truck drivers in the United States. The Surface Transportation Assistance Act of 1982 authorized the establishment of a national network of highways designated for use by large trucks; on these highways, federal width and length limits apply. The National Network (NN) includes almost all of the Interstate Highway System and other specified non-Interstate highways, and comprises more than 200,000 miles of highways.

International network

A global network is any communication network which spans the entire Earth. The term, as used here, refers in a more restricted way to bidirectional, technology-based communication networks. Early networks such as international mail, and unidirectional communication networks such as radio and television, are described elsewhere.

The first global network was established using electrical telegraphy, and global span was achieved in 1899. The telephony network was the second to achieve global status, in the 1950s. More recently, interconnected IP networks (principally the Internet, with an estimated 360 million users worldwide in 2009) and the GSM mobile communication network (with over 3 billion users worldwide in 2009) form the largest global networks of all.

Setting up global networks requires immense, costly and lengthy efforts lasting for decades. Elaborate interconnection, switching and routing devices must be set in operation, and physical carriers of information, such as land and submarine cables and earth stations, must be laid out. In addition, international communication protocols, legislation and agreements are involved.

Networking aspects of video conferencing, imaging and multimedia

Videoconferencing is the conduct of a videoconference (also known as a video conference or videoteleconference) by a set of telecommunication technologies which allow two or more locations to communicate by simultaneous two-way video and audio transmissions. It has also been called 'visual collaboration' and is a type of groupware.

Videoconferencing differs from videophone calls in that it's designed to serve a conference or multiple locations rather than individuals.[1] It is an intermediate form of videotelephony, first deployed commercially in the United States by AT&T Corporation during the early 1970s as part of their development of Picturephone technology.

With the introduction of relatively low cost, high capacity broadband telecommunication services in the late 1990s, coupled with powerful computing processors and video compression techniques, videoconferencing usage has made significant inroads in business, education, medicine and media. Like all long distance communications technologies (such as phone and Internet), by reducing the need to travel to bring people together the technology also contributes to reductions in carbon emissions, thereby helping to reduce global warming.

The core technology used in a videoconferencing system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). The use of audio modems in the transmission line allows for the use of POTS (the Plain Old Telephone System) in some low-speed applications, such as videotelephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
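The subdivision of a compressed stream into labeled packets, and their reassembly at the receiver, can be sketched in a few lines of Python (a simplified model, ignoring loss and retransmission):

```python
def packetize(stream: bytes, payload_size: int) -> list:
    """Split a compressed bitstream into sequence-labeled packets."""
    return [
        {"seq": i, "payload": stream[offset:offset + payload_size]}
        for i, offset in enumerate(range(0, len(stream), payload_size))
    ]

def reassemble(packets: list) -> bytes:
    """Receiver side: order packets by their label and rebuild the stream."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

frame = bytes(range(10)) * 3              # stand-in for codec output
packets = packetize(frame, payload_size=8)
assert reassemble(packets) == frame       # lossless round trip
```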

The other components required for a videoconferencing system include:

Video input: video camera or webcam
Video output: computer monitor, television or projector
Audio input: microphones, CD/DVD player, cassette player, or any other source of preamp audio output
Audio output: usually loudspeakers associated with the display device or telephone
Data transfer: analog or digital telephone network, LAN or Internet
Computer: a data processing unit that ties together the other components, does the compressing and decompressing, and initiates and maintains the data linkage via the network.

There are basically two kinds of videoconferencing systems:

1. Dedicated systems have all required components packaged into a single piece of equipment, usually a console with a high quality remote controlled video camera. These cameras can be controlled at a distance to pan left and right, tilt up and down, and zoom. They became known as PTZ cameras. The console contains all electrical interfaces, the control computer, and the software or hardware-based codec. Omnidirectional microphones are connected to the console, as well as a TV monitor with loudspeakers and/or a video projector. There are several types of dedicated videoconferencing devices:

1. Large group videoconferencing: non-portable, large, more expensive devices used for large rooms and auditoriums.

2. Small group videoconferencing: non-portable or portable, smaller, less expensive devices used for small meeting rooms.

3. Individual videoconferencing: usually portable devices, meant for single users, with fixed cameras, microphones and loudspeakers integrated into the console.

2. Desktop systems are add-ons (hardware boards, usually) to normal PCs, transforming them into videoconferencing devices. A range of different cameras and microphones can be used with the board, which contains the necessary codec and transmission interfaces. Most desktop systems work with the H.323 standard. Videoconferences carried out via dispersed PCs are also known as e-meetings.

Multipoint videoconferencing

Simultaneous videoconferencing among three or more remote points is possible by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call). All parties call the MCU, or the MCU can call the parties which are going to participate, in sequence. There are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software, and others which are a combination of hardware and software. An MCU is characterised according to the number of simultaneous calls it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units.

The MCU consists of two logical components:

1. A single multipoint controller (MC), and

2. Multipoint Processors (MP), sometimes referred to as the mixer.

The MC controls the conferencing while it is active on the signaling plane, which is simply where the system manages conference creation, endpoint signaling and in-conference controls. This component negotiates parameters with every endpoint in the network and controls conferencing resources. While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint. The MP generates output streams from each endpoint and redirects the information to other endpoints in the conference.

Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as "decentralized multipoint", where each station in a multipoint call exchanges video and audio directly with the other stations with no central "manager" or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality because they don't have to be relayed through a central point. Also, users can make ad-hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of some increased network bandwidth, because every station must transmit to every other station directly.[15]

Videoconferencing modes

Videoconferencing systems use several common operating modes:

1. Voice-Activated Switch (VAS);

2. Continuous Presence.

In VAS mode, the MCU switches which endpoint can be seen by the other endpoints based on voice levels. If there are four people in a conference, the only one seen is the site which is talking; the location with the loudest voice is shown to the other participants.
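The selection rule in VAS mode amounts to picking the endpoint with the loudest measured audio level. A minimal sketch (endpoint names and levels are illustrative):

```python
def voice_activated_switch(audio_levels: dict) -> str:
    """Return the endpoint with the loudest audio level; in VAS mode the
    MCU forwards only that site's video to the other participants."""
    return max(audio_levels, key=audio_levels.get)

levels = {"Delhi": 0.12, "Mumbai": 0.71, "Chennai": 0.33, "Kolkata": 0.05}
print(voice_activated_switch(levels))   # Mumbai
```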

Continuous Presence mode displays multiple participants at the same time. The MP in this mode takes the streams from the different endpoints and puts them all together into a single video image. In this mode, the MCU normally sends the same type of images to all participants. Typically these types of images are called "layouts" and can vary depending on the number of participants in a conference.
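One common way a layout varies with the participant count is a near-square grid of video tiles. A sketch of that choice (the grid rule is an illustrative assumption, not a standard):

```python
import math

def layout_grid(participants: int) -> tuple:
    """Pick a near-square (rows, cols) tile layout for the composite image."""
    cols = math.ceil(math.sqrt(participants))
    rows = math.ceil(participants / cols)
    return rows, cols

print(layout_grid(4))   # (2, 2): the classic four-way split
print(layout_grid(7))   # (3, 3): seven tiles in a 3x3 grid with gaps
```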


Introduction to Imaging and Multimedia

What is Multimedia

When different people mention the term multimedia, they often have quite different, or even opposing, viewpoints.

A PC vendor: a PC that has sound capability, a DVD-ROM drive, and perhaps the superiority of multimedia-enabled microprocessors that understand additional multimedia instructions.

A consumer entertainment vendor: interactive cable TV with hundreds of digital channels available, or a cable TV-like service delivered over a high-speed Internet connection.

A Computer Science (CS) student: applications that use multiple modalities, including text, images, drawings (graphics), animation, video, sound including speech, and interactivity.

Multimedia in Computer Science

Digital Multimedia (also called Computational Multimedia) is the field concerned with the computer-controlled integration of text, graphics, images, video, audio, and any other medium where every type of information can be represented, transmitted and processed digitally.


Multimedia involves multiple modalities of text, audio, images, drawings, animation, and video. Examples of how these modalities are put to use:

Video teleconferencing.

Distributed lectures for higher education.

Tele-medicine.

Co-operative work environments.

Searching in (very) large video and image databases for target visual objects.

“Augmented” reality: placing real-appearing computer graphics and video objects into scenes.

Including audio cues for where video-conference participants are located, taking into account gaze direction and attention of participants as well.

Building searchable features into new video, and enabling very high to very low-bit-rate use of new, scalable multimedia products.

Making multimedia components editable: allowing the user side to decide which components (video, graphics, etc.) are actually viewed, and allowing the client to move components around or delete them. Making components distributed.

Building “inverse-Hollywood” applications that can recreate the process by which a video was made. This then allows storyboard pruning and concise video summarization.

Using voice recognition to build an interactive environment, say a kitchen-wall web browser.

Multimedia and Computer Science: Multimedia lies at the intersection of several areas:

Graphics,

HCI,

Visualization,

Computer vision,

Data compression,

Graph theory,

Networking,

Database systems,

Architecture and operating systems.

A Multimedia Application is an application which uses a collection of multiple media sources e.g. text, graphics, images, sound/audio, animation and/or video.

A Multimedia System is a system capable of processing multimedia data and applications.

A Multimedia System is characterized by the processing, storage, generation, manipulation and rendition of Multimedia information.


Components of a Multimedia System

Capture devices

e.g. Video Camera, Microphone, Digitising/Sampling Hardware, etc.

Storage Devices

e.g. Hard disks, CD-ROMs, DVD, Blu-ray, etc.

Communication Networks

Internet, wireless internet, etc.

Computer Systems

e.g. Multimedia Desktop machines, Workstations, smart phones, iPads

Rendering Devices

e.g. CD-quality speakers, HDTV, Hi-Res monitors, Color printers etc.

Challenges for Multimedia Systems

Distributed Networks

Temporal relationship between data

Render different data at the same time, continuously.

Sequencing within the media (e.g. playing frames in correct order/time frame in video)

Synchronization — inter-media scheduling

E.g. Video and Audio — Lip synchronization is clearly important for humans to watch playback of video and audio and even animation and audio.

Multimedia is media and content that uses a combination of different content forms. This contrasts with media that use only rudimentary computer displays such as text-only or traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, or interactivity content forms.


Multimedia is usually recorded and played, displayed, or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia devices are electronic media devices used to store and experience multimedia content. Multimedia is distinguished from mixed media in fine art; by including audio, for example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.

Major characteristics of multimedia

Multimedia presentations may be viewed by a person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed. Streaming multimedia may be live or on-demand.

Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.

The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example to make it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience.

Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization on multiple forms of content over time. Examples of these range from multiple forms of content on Web sites, like photo galleries with both user-updated images (pictures) and titles (text), to simulations whose coefficients, events, illustrations, animations or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming. In addition to seeing and hearing, haptic technology enables virtual objects to be felt. Emerging technology involving illusions of taste and smell may also enhance the multimedia experience.

Multimedia may be broadly divided into linear and non-linear categories. Linear content progresses, often without any navigational control for the viewer, such as a cinema presentation. Non-linear content uses interactivity to control progress, as with a video game or self-paced computer-based training. Hypermedia is an example of non-linear content.

Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation system. A live multimedia presentation may allow interactivity via an interaction with the presenter or performer.


Section III Transmission facilities

Teleprocessing

A teleprocessing monitor (also: Transaction Processing Monitor) is a control program that monitors the transfer of data between multiple local and remote terminals to ensure that the transaction processes completely or, if an error occurs, to take appropriate actions.[1]

It is frequently used for mainframe-based wide area networks, where TP monitors manage the transfer of data between several clients making requests to a server. TP monitors route data smoothly to available servers by detecting hardware failures and switching to another node. Telecommunications monitors were originally developed to allow several clients to connect to one server. However, they developed into what is now known as Transaction Processing Monitors (TPMs). A TPM breaks down applications or code into transactions and ensures that all databases are updated in a single transaction. This is useful for airline reservations, car rentals, hotel accommodations, ATM transactions or other high-volume transaction settings. TP monitors ensure that transactions are not lost or destroyed. Sometimes they are referred to as middleware, because the client sends the data for query or processing to the server database and then it is sent back to the user terminal. This can be accomplished remotely and by multiple users simultaneously. TP monitors are easily scalable, allowing for increases in users and data processed.

Examples include CICS (Customer Information Control System) for IBM mainframes, introduced in July 1969. CICS can perform thousands of transactions per second. Encina and BEA Tuxedo are major TP monitors in the Unix client/server environment.
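The all-or-nothing property a TP monitor enforces — every database update in a transaction commits together, or none does — can be sketched with an in-memory SQLite database. This is an illustration of transaction atomicity in general, not of how CICS or Tuxedo work internally; the flight and table names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (flight TEXT, free INTEGER)")
conn.execute("CREATE TABLE bookings (flight TEXT, passenger TEXT)")
conn.execute("INSERT INTO seats VALUES ('AI101', 1)")

def book(flight: str, passenger: str) -> bool:
    """Reserve a seat and record the booking as one atomic transaction."""
    try:
        with conn:  # commits on success, rolls back every update on error
            cur = conn.execute(
                "UPDATE seats SET free = free - 1 WHERE flight = ? AND free > 0",
                (flight,))
            if cur.rowcount == 0:
                raise RuntimeError("no seat available")
            conn.execute("INSERT INTO bookings VALUES (?, ?)", (flight, passenger))
        return True
    except RuntimeError:
        return False

print(book("AI101", "Rao"))   # True: seat decremented and booking recorded
print(book("AI101", "Sen"))   # False: no seats left, nothing half-updated
```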

Real time control system

A Real-time Control System (RCS) is a reference model architecture suitable for many software-intensive, real-time control problem domains. It defines the types of functions that are required in a real-time intelligent control system, and how these functions are related to each other.


An example RCS-3 application is a machining workstation containing a machine tool, part buffer, and robot with vision system. RCS-3 produces a layered graph of processing nodes, each of which contains a task decomposition (TD), world modeling (WM), and sensory processing (SP) module. These modules are richly interconnected to each other by a communications system.

RCS is not a system design, nor is it a specification of how to implement specific systems. RCS prescribes a hierarchical control model based on a set of well-founded engineering principles to organize system complexity. All the control nodes at all levels share a generic node model.[1]

Also RCS provides a comprehensive methodology for designing, engineering, integrating, and testing control systems. Architects iteratively partition system tasks and information into finer, finite subsets that are controllable and efficient. RCS focuses on intelligent control that adapts to uncertain and unstructured operating environments. The key concerns are sensing, perception, knowledge, costs, learning, planning, and execution.[1]

A reference model architecture is a canonical form, not a system design specification. The RCS reference model architecture combines real-time motion planning and control with high level task planning, problem solving, world modeling, recursive state estimation, tactile and visual image processing, and acoustic signature analysis. In fact, the evolution of the RCS concept has been driven by an effort to include the best properties and capabilities of most, if not all, the intelligent control systems currently known in the literature, from subsumption to SOAR, from blackboards to object-oriented programming.[2]

RCS (Real-time Control System) has developed into an intelligent agent architecture designed to enable any level of intelligent behavior, up to and including human levels of performance. RCS was inspired 30 years ago by a theoretical model of the cerebellum, the portion of the brain responsible for fine motor coordination and control of conscious motions. It was originally designed for sensory-interactive, goal-directed control of laboratory manipulators. Over three decades, it has evolved into a real-time control architecture for intelligent machine tools, factory automation systems, and intelligent autonomous vehicles.[3]

RCS applies to many problem domains including manufacturing examples and vehicle systems examples. Systems based on the RCS architecture have been designed and implemented to varying degrees for a wide variety of


applications that include loading and unloading of parts and tools in machine tools, controlling machining workstations, performing robotic deburring and chamfering, and controlling space station telerobots, multiple autonomous undersea vehicles, unmanned land vehicles, coal mining automation systems, postal service mail handling systems, and submarine operational automation systems.[2]

Message controlled system

A method for controlling messages in a software system. The method activates a report-handling module when a subroutine has a message to send. The subroutine passes an identification to the report-handling module. The subroutine then passes a message and message level to the report-handling module. The report-handling module then determines the message level to be reported for that subroutine, the process from which that subroutine is sending messages, and the message level to be reported for that process. If the message level of the message compares correctly to the message level of the subroutine and the process, the message is reported.
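The comparison logic just described can be sketched in Python. The numeric levels, subroutine names, and threshold rule below are illustrative assumptions:

```python
# Report a message only if its level passes both the per-subroutine
# threshold and the per-process threshold (illustrative values).
PROCESS_LEVEL = 2
SUBROUTINE_LEVELS = {"disk_io": 1, "net_io": 3}

def report(subroutine: str, level: int, message: str) -> bool:
    """Return True and print the message if both thresholds are met."""
    if level >= SUBROUTINE_LEVELS.get(subroutine, 0) and level >= PROCESS_LEVEL:
        print(f"[{subroutine}] {message}")
        return True
    return False

report("disk_io", 2, "retrying read")   # reported: 2 >= 1 and 2 >= 2
report("net_io", 2, "slow link")        # suppressed: 2 < 3
```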

Teleconferencing

A teleconference or teleseminar is the live exchange and mass articulation of information among several persons and machines remote from one another but linked by a telecommunications system. Terms such as audio conferencing, telephone conferencing and phone conferencing are also sometimes used to refer to teleconferencing.

The telecommunications system may support the teleconference by providing one or more of the following: audio, video, and/or data services by one or more means, such as telephone, computer, telegraph, teletypewriter, radio, and television.

Internet teleconferencing includes internet telephone conferencing, videoconferencing, web conferencing, and augmented reality conferencing.

Internet telephony involves conducting a teleconference over the Internet or a Wide Area Network. One key technology in this area is Voice over Internet Protocol (VOIP). Popular software for personal use includes Skype, Google Talk, Windows Live Messenger and Yahoo! Messenger.

A working example of augmented reality conferencing was demonstrated at the Salone del Mobile in Milan by the AR+RFID Lab.[3]

How Teleconferencing Works

In the past few years, corporations have gotten bigger and more spread out. Many American employees -- more than 44 million in 2004 -- also do at least some of their work from home. Since offices and employees can be thousands of miles apart, getting everyone into the same room for meetings and training has become decidedly impractical for a lot of companies.

That's why teleconferencing -- the real-time exchange of information between people who are not in the same physical space -- has become such a big industry. The American audio conferencing industry alone reported $2.25 billion in revenue in 2004. Through teleconferencing, companies can conduct meetings, customer briefings, training, demonstrations and workshops by phone or online instead of in person.

The simplest phone teleconference is a three-way call, available in many homes as a service from the telephone company. Another very simple (but not necessarily effective) method is to have two groups of people talk to one another via speakerphone. The limits of three-way calling and the sound quality of speakerphones make both of these options impractical for most businesses.

Conference calls let groups of people -- from a few to hundreds -- communicate by phone. Banks and brokerages often use conference calls to give status reports to large numbers of listeners. Other businesses use conference calls to help coworkers communicate, plan and brainstorm. To connect to the call, attendees call a designated number (MeetMe conferencing), or an operator or moderator calls each participant (ad hoc conferencing).

Conference calls connect people through a conference bridge, which is essentially a server that acts like a telephone and can answer multiple calls simultaneously. Software plays a large role in the bridge's capabilities beyond simply connecting multiple callers.

A company can have its own bridge or can contract with a service provider for conference call hosting. Providers frequently offer add-on features for conference calls, such as:

Attendee polling

Call recording

In-call operators or attendants

Companies using Voice over IP (VoIP) telephones can also host conference calls themselves if the VoIP software supports them.

Many phone conferencing systems require a login and personal identification number (PIN) to access the system. This helps protect confidential and proprietary information during the call.

Video phones can add a visual element to conference calls, but businesses often need to share other visual information.

FAX

Fax (short for facsimile), sometimes called telecopying, is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system. The receiving fax machine reconverts the coded image, printing a paper copy.[1] Before digital technology became widespread, for many decades, the scanned data was transmitted as analog.
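Fax transmission works because a scanned page is mostly long runs of white pixels with occasional runs of black, which compress well. Real fax machines apply Modified Huffman coding over such runs; the plain run-length form below is a simplified sketch of the idea:

```python
def encode_scanline(pixels: str) -> list:
    """Run-length encode one scan line of a bilevel image ('W'/'B')."""
    runs, count = [], 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((pixels[-1], count))
    return runs

line = "WWWWWWBBWWWWWWWWWB"
print(encode_scanline(line))   # [('W', 6), ('B', 2), ('W', 9), ('B', 1)]
```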

Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. Fax machines still retain some advantages, particularly in the transmission of sensitive material which, if sent over the Internet unencrypted, may be vulnerable to interception without the need for telephone tapping. In some countries, because electronic signatures on contracts are not recognized by law while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business.

In many corporate environments, standalone fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office.

The once ubiquitous fax machine has also begun to disappear from the small office / home office environment. Remotely hosted fax-server services are widely available from VoIP and e-mail providers, allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analogue modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services.

E-MAIL

Electronic mail, also known as email or e-mail, is a method of exchanging digital messages from an author to one or more recipients. Modern email operates across the Internet or other computer networks. Some early email systems required that the author and the recipient both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver and store messages. Neither the users nor their computers are required to be online simultaneously; they need connect only briefly, typically to an email server, for as long as it takes to send or receive messages.

Historically, the term electronic mail was used generically for any electronic document transmission. For example, several writers in the early 1970s used the term to describe fax document transmission.[2][3] As a result, it is difficult to find the first citation for the use of the term with the more specific meaning it has today.

An Internet email message[NB 1] consists of three components, the message envelope, the message header, and the message body. The message header contains control information, including, minimally, an originator's email address and one or more recipient addresses. Usually descriptive information is also added, such as a subject header field and a message submission date/time stamp.

Originally a text-only (7-bit ASCII and others) communications medium, email was extended to carry multimedia content attachments, a process standardized in RFC 2045 through RFC 2049. Collectively, these RFCs have come to be called Multipurpose Internet Mail Extensions (MIME).
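The header/body structure and the MIME extension can both be seen with Python's standard email library. The addresses and attachment bytes below are illustrative placeholders:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "author@example.com"            # originator's address
msg["To"] = "reader@example.com"              # recipient address
msg["Subject"] = "Store-and-forward example"  # descriptive header field
msg.set_content("Neither of us needs to be online at the same time.")

# MIME lets the originally text-only format carry multimedia attachments;
# the payload here is a stand-in, not a real image.
msg.add_attachment(b"placeholder image bytes", maintype="image",
                   subtype="png", filename="scan.png")

print(msg["Subject"])
print(msg.get_content_type())   # becomes multipart/mixed once an attachment is added
```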

Electronic mail predates the inception of the Internet, and was in fact a crucial tool in creating it, [4] but the history of modern, global Internet email services reaches back to the early ARPANET. Standards for encoding email messages were proposed as early as 1973 (RFC 561). Conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today.

Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP), first published as Internet Standard 10 (RFC 821) in 1982. In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself.

VIDEOTEX

Videotex (or "interactive videotex") was one of the earliest implementations of an end-user information system. From the late 1970s to mid-1980s, it was used to deliver information (usually pages of text) to a user in computer-like format, typically to be displayed on a television.


In a strict definition, videotex is any system that provides interactive content and displays it on a television, typically using modems to send data in both directions. A close relative is teletext, which sends data in one direction only, typically encoded in a television signal. All such systems are occasionally referred to as viewdata. Unlike the modern Internet, traditional videotex services were highly centralized.

Videotex in its broader definition can be used to refer to any such service, including the Internet, bulletin board systems, online service providers, and even the arrival/departure displays at an airport. This usage is no longer common.

With the exception of Minitel in France, videotex elsewhere never managed to attract more than a very small percentage of the universal mass market once envisaged. By the end of the 1980s its use was essentially limited to a few niche applications.

Microwave transmission 

Microwave transmission refers to the technology of transmitting information or energy by the use of radio waves whose wavelengths are conveniently measured in small numbers of centimetres; these are called microwaves. This part of the radio spectrum ranges across frequencies of roughly 1.0 gigahertz (GHz) to 30 GHz, corresponding to wavelengths from 30 centimetres down to 1.0 cm.
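The frequency-wavelength correspondence quoted above follows directly from the relation wavelength = c / frequency, which a few lines of Python can verify:

```python
C = 299_792_458  # speed of light in metres per second

def wavelength_cm(frequency_ghz: float) -> float:
    """Wavelength in centimetres for a given frequency in GHz."""
    return C / (frequency_ghz * 1e9) * 100

# The 1-30 GHz microwave band spans wavelengths of roughly 30 cm down to 1 cm:
print(round(wavelength_cm(1.0)))    # 30
print(round(wavelength_cm(30.0)))   # 1
```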

Microwaves are widely used for point-to-point communications because their small wavelength allows conveniently-sized antennas to direct them in narrow beams, which can be pointed directly at the receiving antenna. This allows nearby microwave equipment to use the same frequencies without interfering with each other, as lower frequency radio waves do. Another advantage is that the high frequency of microwaves gives the microwave band a very large information-carrying capacity; the microwave band has a bandwidth 30 times that of all the rest of the radio spectrum below it. A disadvantage is that microwaves are limited to line of sight propagation; they cannot pass around hills or mountains as lower frequency radio waves can.

Microwave radio transmission is commonly used in point-to-point communication systems on the surface of the Earth, in satellite communications, and in deep space radio communications. Other parts of the microwave radio band are used for radars, radio navigation systems, sensor systems, and radio astronomy.

The next higher part of the radio electromagnetic spectrum, where the frequencies are above 30 GHz and below 100 GHz, is called the "millimeter wave" band because the wavelengths are conveniently measured in millimeters, ranging from 10 mm down to 3.0 mm. Radio waves in this band are usually strongly attenuated by the Earth's atmosphere and the particles contained in it, especially during wet weather. Also, in a wide band of frequencies around 60 GHz, the radio waves are strongly attenuated by molecular oxygen in the atmosphere. The electronic technologies needed in the millimeter wave band are also much more difficult to utilize than those of the microwave band.

Data transmission facilities

Sending and receiving data via cables (e.g., telephone lines or fibre optics) or wireless relay systems. Because ordinary telephone circuits pass signals that fall within the frequency range of voice communication (about 300–3,500 hertz), the high frequencies associated with data transmission suffer a loss of amplitude and transmission speed. Data signals must therefore be translated into a format compatible with the signals used in telephone lines. Digital computers use a modem to transform outgoing digital electronic data; a similar system at the receiving end translates the incoming signal back to the original electronic data. Specialized data-transmission links carry signals at frequencies higher than those used by the public telephone network. See also broadband technology; cable modem; DSL; ISDN; fax; radio; teletype; T1; wireless communications.
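The modem translation described above can be illustrated with a minimal frequency-shift keying (FSK) sketch, in which each bit is sent as a burst of one of two audio tones inside the voice band. The tone frequencies, baud rate, and sample rate below are illustrative choices, not taken from any modem standard.

```python
import math

# Minimal FSK "modem" sketch: each bit becomes a burst of one of two
# audio tones chosen to fit inside the telephone voice band.
SAMPLE_RATE = 8000          # samples per second
BAUD = 100                  # bits per second (illustrative)
F0, F1 = 1300, 2100         # tones for bit 0 and bit 1 (Hz)

def modulate(bits):
    """Return a list of audio samples encoding the bit sequence."""
    samples = []
    per_bit = SAMPLE_RATE // BAUD
    for i, bit in enumerate(bits):
        f = F1 if bit else F0
        for n in range(per_bit):
            t = (i * per_bit + n) / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * f * t))
    return samples

def demodulate(samples):
    """Recover bits by correlating each burst against the two tones."""
    per_bit = SAMPLE_RATE // BAUD
    bits = []
    for i in range(0, len(samples), per_bit):
        burst = samples[i:i + per_bit]
        energy = {}
        for f in (F0, F1):
            s = sum(x * math.sin(2 * math.pi * f * (i + n) / SAMPLE_RATE)
                    for n, x in enumerate(burst))
            c = sum(x * math.cos(2 * math.pi * f * (i + n) / SAMPLE_RATE)
                    for n, x in enumerate(burst))
            energy[f] = s * s + c * c   # correlation magnitude squared
        bits.append(1 if energy[F1] > energy[F0] else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
assert demodulate(modulate(message)) == message
```

Real voice-band modems add carrier recovery, filtering, and line equalization; the point here is only the translation of digital data into voice-band signals and back.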

Data transmission, or data communications, is the branch of telecommunications concerned with the transmission of information represented, on the basis of predetermined rules, in a formalized form by symbols or analog signals; the information either is intended for machine processing (for example, by computers) or has already undergone machine processing. The term "data transmission" is also applied to the actual process of transmitting the information. Such information is called data.

The principal difference between data transmission and telegraph, telephone, and other types of communication is that the information, or data, is sent or received by a machine rather than a human being; in data transmission from computer to computer there is no human being on either end of the communication line. Data transmission frequently requires greater reliability, rate, and accuracy of transmission because of the greater importance of the information being transmitted and the impossibility of logical monitoring by human beings during the transmitting and receiving processes. Together with computer technology, data transmission serves as the technical base for information and computing systems, including automatic control systems of various levels of complexity. The use of data transmission facilities speeds up the collection and dissemination of information and permits subscribers with inexpensive terminal equipment to enjoy the services of large computer centers.

Data transmission originated in the United States in the early 1950’s. In the early 1960’s it began developing extensively in many other countries as well. Data transmission systems for space flights have been functioning in the USSR since the early 1960’s. In 1965 a data transmission system was put into operation in the Onega automatic control system for checking money remittances. In this system, data are transmitted along telegraph and telephone channels at speeds of 50 and 600 bits/sec, respectively. Data transmission was later incorporated in the Pogoda system for collecting meteorological data and in many automatic control systems in industry and the state administration. Organization of the National Data Transmission System, which will eventually provide data transmission services to all ministries and departments, began in 1972; the first stage of the system to be set up is a telegraph-type network that transmits information at rates of up to 200 bits/sec. At present, data transmission is one of the most swiftly developing areas of technology. In 1955 there were no more than 1,000 data transmission terminal units in the entire world. The figure rose to 35,000 by 1965 and 150,000 by 1970. The number was expected to exceed 1 million by 1975. In many countries the average annual growth has been 70-100 percent.

Data are transmitted in many countries primarily through switched telegraph or telephone networks. Because these networks are designed basically for transmitting telegrams or telephone messages, special terminal devices are added for data transmission. In addition to the standard telegraph or telephone unit (TU) (Figure 1,a), a data transmission device (DTD), which makes the computer equipment compatible with the communications channel, and a communications channel switch (S) are installed at the subscriber's terminal. A switched connection is made "manually" by means of the TU. At the end of their conversation, or after exchanging telegraph messages, the participants agree to change to the data transmission mode, and they connect the communications channel to the DTD. With the conclusion of data transmission, they return to the initial mode. The sign-off is done in the normal way by means of the TU. The connection can also be made automatically, with computer control. The connection of the DTD to a switched telephone or telegraph network is expedient for small volumes of transmitted data, when the total time the subscriber line will be used for conversation and data transmission will not exceed 6–12 minutes at times of peak load. Telephone networks are used to transmit not only digital data but also analog data, or data in a continuous form, for example, cardiograms. To send large volumes of data—for example, between two computer centers—nonswitched (direct, leased) communications channels are used; information can be transmitted at rates of 2,400 bits/sec or more with non-switched telephone channels.
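To see why large transfers between computer centers called for the faster non-switched channels, a short calculation with the rates quoted above (50, 600, and 2,400 bits/sec) is enough; the 100-kilobyte payload is an illustrative figure, not one from the text.

```python
# Time to send a given volume of data at the channel rates quoted
# in the text: 50 bit/s (telegraph), 600 and 2,400 bit/s (telephone).

def transfer_seconds(n_bytes, rate_bps):
    """Seconds needed to send n_bytes at rate_bps, ignoring overhead."""
    return n_bytes * 8 / rate_bps

payload = 100_000  # 100 kB, an illustrative inter-center transfer
for rate in (50, 600, 2400):
    t = transfer_seconds(payload, rate)
    print(f"{rate:>5} bit/s: {t / 60:.1f} minutes")
```

At 50 bit/s the transfer takes hours; at 2,400 bit/s it takes a few minutes, which is why leased telephone channels were used between computer centers.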

Telephone and telegraph networks cannot satisfy the greatest demands made of data transmission. Beginning with the late 1960’s, therefore, special switched data transmission networks have come into use. They are able to provide higher quality service to subscribers in terms of accuracy and rate of transmission, the possibility of choosing the category of work priority and speed, and the possibility of multiaddress communication, and they offer additional services as well. Two principles of switching are in use: channel switching, where a through channel from subscriber to subscriber is organized for the time of communication, and message switching. In the latter case, the message is
transmitted in full from the sender-subscriber to the nearest switching center, where it is temporarily stored; when a channel in the necessary (assigned) direction becomes free, the message is sent on in stages, from center to center, until it is received by the receiver-subscriber. Computers are being used more and more frequently to control switching at the centers.

Figure 1. Diagrams of data transmission channels: (a) with information input-output by an intermediate carrier, (b) with electrical information input-output; symbols: (IOD) input-output devices, (P/t) punched tape, (ECD) error control device, (SCD) signal conversion device, (TU) telegraph or telephone unit, (S) switch, (SL) subscriber line, (SC) switching center, (DTM) data transmission multiplexer, (CU) control unit, (DTD) data transmission device

A subscriber’s DTD (Figure 1,a) converts data signals into a form suitable for transmission along the communication channel. In the case of telephone channels, for example, it makes use of frequency, phase, and other more complex types of modulation, as well as various forms of signal coding and recoding. Where necessary, the DTD includes a device to protect data against errors that arise in the communications channel because of noise. Since the early 1970’s, channels have provided data transmission with a probability of error of 10⁻³–10⁻⁵; by means of error control devices the probability can be reduced to 10⁻⁶–10⁻⁸. The use of error-correcting codes makes possible the detection of most errors, and the errors are usually corrected by an automatically repeated transmission. Error detection can also be done by noncode methods—through the use of a quality detector, which analyzes the known parameters of the signal, such as amplitude, frequency, and length. If the subscriber has adequate protection against errors in his computer equipment, other protection is not provided in the DTD. The DTD may also contain auxiliary units, such as calling and talk units and monitoring devices. The DTD is linked with the computer equipment through an intermediate information carrier, which usually is punched tape (Figure 1,a), or through electric circuits (Figure 1,b). The latter type of DTD permits subscribers to “communicate” directly with a computer whose software has a set of programs that control the system of remote data processing (the exchange of data with subscriber terminals and with other computers). This type of DTD has no input-output devices. Examples of the first type of DTD are the Akkord-50 standardized DTD, which is used in the USSR with telegraph channels and has a rate of up to 50 bits/sec, and the Akkord-1200, which is used with telephone channels and has rates of 600 or 1,200 bits/sec.
An example of the second type is the general-purpose equipment of the United System of Computers of the socialist countries.
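The improvement achieved by an error control device, from raw channel error rates near one in a thousand down to one in a million or better, can be made concrete with a simplified model: the code detects most corrupted blocks, which are then repeated automatically, so only undetected corruptions reach the subscriber. The probabilities below are illustrative, not measured values.

```python
# Simplified model of an error control device (ECD): the channel
# corrupts a block with probability p_block; the error-detecting code
# catches a corruption with probability p_detect, triggering an
# automatic repeat; only undetected corruptions reach the subscriber.

def residual_error(p_block, p_detect):
    """Probability that a delivered block is silently corrupted."""
    # A delivered block is either clean or corrupted-but-undetected;
    # detected corruptions are retransmitted and never delivered.
    p_undetected = p_block * (1 - p_detect)
    p_clean = 1 - p_block
    return p_undetected / (p_clean + p_undetected)

# A channel with block error rate 1e-3 and a code that detects 99.9%
# of corrupted blocks leaves roughly one bad block in a million:
print(residual_error(1e-3, 0.999))  # ~1e-6
```

This mirrors the figures in the passage: three extra orders of magnitude of reliability bought by detection plus automatic retransmission.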

Data transmission is in a formative stage and is developing in the following main directions: (1) the creation of special data transmission networks that involve both the development of switching centers providing improved service to subscribers and the introduction of digital communications channels formed by systems with time-division line multiplexing; (2) the optimal combination of the development of new networks with the use of existing telephone and telegraph networks; (3) the increase of the efficiency of high-rate communication channels—including the achievement of transmission rates of 4,800 bits/sec or more along telephone channels; (4) the simplification of the DTD for low-rate communication; and (5) the increase of the accuracy and reliability of communications.

Radio and satellite communications

Radio communications is one of Customs' key enabling technologies. All around Australia, and in offshore air and marine environments, Customs relies on an extensive network of radio and satellite communications to fulfil its role in protecting Australia's borders. Customs' extensive high frequency (HF) and ultra high frequency (UHF) networks are vital to maintaining operational communications, both within the organisation and between Customs and its partner agencies.

LONG-RANGE RADIO COMMUNICATIONS

HF radio is the backbone of Customs' long-range communications. HF radio allows transmission over very long distances, but its narrow bandwidth limits transmission largely to voice or low-speed data. Ensuring effective communications coverage around Australia's vast coastline and maritime territories requires a huge network of HF radio infrastructure.

The Customs HF network is one of the largest in the country, second only to that of the Department of Defence. HF radio systems are installed in all Customs vessels, Coastwatch aircraft, regional and district offices and vehicles. Transportable units are also deployed to support operations in remote areas.

LONG-RANGE SATELLITE COMMUNICATIONS

Customs also makes extensive use of satellite equipment to enhance its long-range communications capabilities. As well as standard Iridium and InMarSat systems, Customs has developed and deployed a single box satellite solution called the TacPac (Tactical Pack), which provides encrypted voice and data capabilities.

SHORT-RANGE COMMUNICATIONS

Customs uses digital UHF radios to support operations in local areas within a radius of 40–50 kilometres. The network is constantly being upgraded and expanded to meet the changing needs of Customs. It is fully encrypted to provide secure support to Customs' short-range tactical operations. UHF radios are installed in all vessels, Coastwatch aircraft, regional and district offices and vehicles. Customs officers in airports and wharf/cargo areas also use portable UHF headsets. To support operations that extend beyond a fixed UHF network, Customs has more than 30 portable repeater stations. These units can be deployed to relay signals and expand the communication radius available during an operation. Coastwatch aircraft are also capable of providing an airborne repeater function.

As well as being able to communicate within Customs and with partner agencies, Customs officers need to make radio contact with members of the public, commercial ships, yachts and other recreational craft. Where this need exists, Customs officers are able to use UHF citizen band (CB), 26 MHz CB, and VHF radios. All Customs vessels are fitted with marine VHF radios to converse with civilian vessels.

FUTURE OF RADIO COMMUNICATIONS

As with all of its technologies, Customs works to keep abreast of continuing advances in radio and satellite technology, ensuring that its communication requirements are met in an ever-changing environment.

A communications satellite or comsat is an artificial satellite sent into space for the purpose of telecommunications. Modern communications satellites use a variety of orbits, including geostationary orbits, Molniya orbits, elliptical orbits, and low Earth orbits (polar and non-polar).
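Of the orbits listed, the geostationary orbit is fixed by the requirement that the orbital period equal one sidereal day; Kepler's third law then gives its radius. The sketch below uses standard physical constants and the mean Earth radius, none of which come from the text.

```python
import math

# Geostationary orbit radius from Kepler's third law:
#   r^3 = mu * T^2 / (4 * pi^2)
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86_164.1     # seconds for one rotation of the Earth
EARTH_RADIUS = 6_371_000    # mean Earth radius, metres

r = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (r - EARTH_RADIUS) / 1000
print(f"orbital radius: {r / 1000:.0f} km")    # ~42,164 km
print(f"altitude:       {altitude_km:.0f} km")  # ~35,800 km
```

At that altitude the satellite appears fixed in the sky, which is what lets point-to-point ground stations use stationary dish antennas.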

For fixed (point-to-point) services, communications satellites provide a microwave radio relay technology complementary to that of communication cables. They are also used for mobile applications such as communications to ships, vehicles, planes and hand-held terminals, and for TV and radio broadcasting.

Public message switching system

In telecommunications, message switching was the precursor of packet switching: messages were routed in their entirety, one hop at a time. It was first introduced by Leonard Kleinrock in 1961. Message switching systems are nowadays mostly implemented over packet-switched or circuit-switched data networks. Each message is treated as a separate entity. Each message contains addressing information, and at each switch this information is read and the transfer path to the next switch is decided. Depending on network conditions, a conversation of several messages may not be transferred over the same path. Each message is stored (usually on hard drive, due to RAM limitations) before being transmitted to the next switch; because of this, such a network is also known as a 'store-and-forward' network. Email is a common application for message switching: unlike real-time data transfer between two computers, a delay in delivering email is acceptable.
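The store-and-forward behaviour described above can be sketched in a few lines: each switch stores the complete message, reads its addressing information, and decides the next hop. The topology and node names below are invented for illustration.

```python
# Minimal store-and-forward sketch: each switch holds the complete
# message, inspects its address field, and passes it to the next hop.
ROUTES = {  # next hop toward each destination, for each switch
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def send(message, source, destination):
    """Forward a message hop by hop; return the path it travelled."""
    path = [source]
    node = source
    while node != destination:
        # The switch stores the complete message before forwarding
        # ('store-and-forward'); here the store is just a local copy.
        stored = dict(message)
        node = ROUTES[node][message["to"]]  # read the address field
        path.append(node)
    return path

msg = {"to": "D", "body": "hello"}
print(send(msg, "A", "D"))  # ['A', 'B', 'C', 'D']
```

Because each switch makes its own routing decision per message, two messages of the same conversation may take different paths, exactly as the text notes.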

Telephone network

A telephone network is a telecommunications network used for telephone calls between two or more parties.

There are a number of different types of telephone network:

A fixed line network where the telephones must be directly wired into a single telephone exchange. This is known as the public switched telephone network or PSTN.

A wireless network where the telephones are mobile and can move around anywhere within the coverage area.

A private network where a closed group of telephones are connected primarily to each other and use a gateway to reach the outside world. This is usually used inside companies and call centres and is called a private branch exchange (PBX).

Public telephone operators (PTOs) own and build networks of the first two types and provide services to the public under license from the national government. Virtual Network Operators (VNOs) lease capacity wholesale from the PTOs and sell on telephony service to the public directly.
