
FINAL PROJECT REPORT

On

‘Development Communication in India’

Submitted in partial fulfillment for the requirement of Bachelors in Journalism and Mass

Communication

AFFILIATED TO

Guru Gobind Singh Indraprastha University

Submitted by:-

PAVAN SINGH

Enrollment No. ~ 02114202409

Batch ~ 2009-2012


(Jagannath International Management School)

Vasant Kunj, New Delhi

CERTIFICATE

This is to certify that PAVAN SINGH, a student of Bachelor of Journalism (Mass Communication), Jagannath International Management School, Guru Gobind Singh Indraprastha University, enrolled for the batch 2009-2012 with Enrollment No. 02114202409, has completed the project on the topic “Post-production in Electronic Media” as part of his dissertation under my guidance and supervision. He has accomplished the task by following the parameters of the final project report. This project report is an original document prepared through his own independent efforts.

Dr. Neeru Johri Dr. Ravi K Dhar

(Supervisor) (Director)

Date:


ACKNOWLEDGEMENT

I have put great effort into this project. However, it would not have been possible without the kind support and help of many individuals and organizations. I would like to extend my sincere thanks to all of them. Special thanks go to my helpful supervisor, Mrs. Neeru Johri. The supervision and support that she gave truly helped the progression and smoothness of the project report. Her cooperation is much appreciated.

Finally, yet importantly, I would like to express my heartfelt thanks to my

beloved parents for their blessings, my friends/classmates for their help and

wishes for the successful completion of this project.



CONTENTS

CERTIFICATE

ACKNOWLEDGMENT

CHAPTER I INTRODUCTION

POST-PRODUCTION IN ELECTRONIC MEDIA

CHAPTER II VIDEO EDITING

LINEAR EDITING

NON-LINEAR EDITING

OFFLINE EDITING

ONLINE EDITING

VIDEO MIXER

CHAPTER III GRAPHICS

DRAWING

LINE ART

ILLUSTRATION

GRAPHS

DIAGRAMS

SYMBOLS

GEOMETRIC DESIGN

COMPUTER GRAPHICS


WEB GRAPHICS

PHOTOGRAPHY

CHAPTER IV CONCLUSION

CHAPTER V WORK EXPERIENCE

CHAPTER VI BIBLIOGRAPHY

Chapter I


“Post-production in Electronic Media”

INTRODUCTION

Post-production is part of the filmmaking and video production process. It occurs in the making of motion pictures, television programs, radio programs, advertising, audio recordings, photography, and digital art. It is a term for all stages of production occurring after the actual end of shooting and/or recording the completed work. Post-production is, in fact, many different processes grouped under one name. These typically include:

Editing the picture of the television program using an edit decision list (EDL).

Writing, (re)recording, and editing the soundtrack.

Adding visual special effects, mainly computer-generated imagery (CGI), and making the digital copy from which release prints will be produced (although this may be made obsolete by digital-cinema technologies).

Sound design, sound effects, ADR, Foley and music, culminating in a process known as sound re-recording or mixing with professional audio equipment.

Transfer of color motion picture film to video or DPX with a telecine, and color grading (correction) in a color suite.

Typically, the post-production phase of creating a film takes longer than the actual shooting, and can take several months to complete, because it includes the complete editing, color correction and the addition of music and sound. The process of editing a movie is also seen as a second directing, because through post-production it is possible to change the intention of the movie. Furthermore, through the use of color-correcting tools and the addition of music and sound, the atmosphere of the movie can be heavily influenced; for instance, a blue-tinted movie is associated with a cold atmosphere, and the choice of music and sound increases the effect of the scenes shown on the audience.

Post-production was named one of the 'Dying Industries' by IBISWorld. The once-exclusive service offered by high-end post houses or boutique facilities has been eroded by video editing software that runs on non-linear editing systems (NLE). However, traditional (analogue) post-production services are being surpassed by digital ones, leading to sales of over $6 billion annually.

The digital revolution has made the video editing workflow immeasurably quicker, as practitioners moved from time-consuming linear (tape-to-tape) online editing suites to computer hardware and video editing software such as Adobe Premiere, Final Cut Pro, Avid, Sony Vegas and Lightworks.

Chapter II

Video Editing


Video editing is the process of editing segments of motion video footage, special effects and sound recordings in the post-production process. Motion picture film editing is a predecessor to video editing, and in several ways video editing simulates motion picture film editing, both in theory and in the use of linear video editing and of video editing software on non-linear editing systems (NLE). Using video, a director can communicate non-fictional and fictional events. The goal of editing is to manipulate these events to bring the communication closer to the original goal or target. It is a visual art.

Early video tape recorders (VTRs) were so expensive, and the quality degradation caused by copying was so great, that 2-inch Quadruplex videotape was edited by visualizing the recorded track with ferrofluid, cutting it with a razor blade or guillotine cutter, and splicing it with adhesive tape. The two pieces of tape to be joined were painted with a solution of extremely fine iron filings suspended in carbon tetrachloride, a toxic and carcinogenic compound. This "developed" the magnetic tracks, making them visible when viewed through a microscope so that they could be aligned in a splicer designed for this task.

Improvements in quality and economy, and the invention of the flying erase head, allowed new video and audio material to be recorded over the material already on an existing magnetic tape; this gave rise to the linear editing technique. If a scene closer to the beginning of the video tape needed to be changed in length, all later scenes would need to be recorded onto the tape again in sequence. In addition, sources could be played back simultaneously through a vision mixer (video switcher) to create more complex transitions between scenes.

There was a transitional analog period using multiple source videocassette recorders (VCRs), with the EditDroid using LaserDisc players, but modern NLE systems edit video digitally captured onto a hard drive from an analog video or digital video source. Content is ingested and recorded natively in the appropriate codec, which will be used by video editing software such as Sony Vegas, CyberLink PowerDirector, Avid Technology's Media Composer and Xpress Pro, Apple's Final Cut Pro (FCP), Adobe Systems' Premiere, and EditShare's Lightworks to manipulate the captured footage. High-definition video is becoming more popular and can be readily edited using the same video editing software, along with related motion graphics programs. Video clips are arranged on a timeline; music tracks, titles and digital on-screen graphics are added; special effects can be created; and the finished program is "rendered" into a finished video. The video may then be distributed in a variety of ways, including DVD, web streaming, QuickTime movies, iPod, CD-ROM, or video tape.
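The timeline model described above can be sketched as a small data structure: clips that reference source footage by in- and out-points, arranged in order on a track. This is an illustrative sketch in Python; the class names and fields are invented here and do not come from any real editing package.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: str   # source file the clip points to (hypothetical names)
    t_in: float   # in-point within the source, in seconds
    t_out: float  # out-point within the source, in seconds

    @property
    def duration(self) -> float:
        return self.t_out - self.t_in

class Timeline:
    """A single video track: an ordered list of clips, as on an NLE timeline."""
    def __init__(self):
        self.clips = []

    def append(self, clip: Clip):
        self.clips.append(clip)

    def duration(self) -> float:
        # Total programme length is the sum of the clip durations in order.
        return sum(c.duration for c in self.clips)

# Arrange two shots on the timeline; the source files themselves are untouched.
tl = Timeline()
tl.append(Clip("interview.mov", t_in=10.0, t_out=25.0))  # a 15-second shot
tl.append(Clip("broll.mov", t_in=0.0, t_out=8.0))        # an 8-second shot
print(tl.duration())  # → 23.0
```

Rendering, in this picture, is simply playing out the referenced ranges in timeline order.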

Linear editing


Linear video editing is a post-production process of selecting, arranging and modifying images and sound in a predetermined, ordered sequence. Regardless of whether it was captured by a video camera or tapeless camcorder, or recorded in a television studio on a video tape recorder (VTR), the content must be accessed sequentially. For the most part, video editing software has replaced linear editing.

Until the advent of computer-based random-access non-linear editing systems (NLE) in the early 1990s, "linear video editing" was simply called "video editing".

Live television is still basically produced in the same manner as it was in the 1950s, although transformed by modern technical advances. Before videotape, the only way of airing the same show again was to film it using a kinescope, essentially a video monitor paired with a movie camera. However, kinescopes (the films of television shows) suffered from various sorts of picture degradation, from image distortion and apparent scan lines to artifacts in contrast and loss of detail. Kinescopes had to be processed and printed in a film laboratory, making them unreliable for broadcasts delayed for different time zones. The primary motivation for the development of video tape was as a short- or long-term archival medium. Only after a series of technical advances spanning decades did video tape editing finally become a viable production tool, on a par with film editing.

Early technology

The first widely accepted video tape in the United States was 2-inch Quadruplex videotape, which traveled at 15 inches per second. To gain enough head-to-tape speed, four video recording and playback heads were spun on a head wheel across most of the two-inch width of the tape. (Audio and synchronization tracks were recorded along the sides of the tape with stationary heads.) This system was known as Quad, for quadruplex recording.

The resulting video tracks lay at slightly less than a ninety-degree angle to the edge of the tape (considering the vector addition of the high-speed spinning heads tracing across the tape against its 15-inches-per-second forward motion).
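The "vector addition" mentioned above can be made concrete with rough figures. The head-wheel diameter and rotation rate below are commonly quoted values for NTSC Quad machines, but treat them as illustrative rather than exact specifications:

```python
import math

# Illustrative Quad figures (commonly quoted for NTSC machines; approximate):
tape_speed = 15.0          # linear tape speed, inches per second
head_wheel_rpm = 14_400    # head-wheel rotation rate, revolutions per minute
head_wheel_diameter = 2.0  # head-wheel diameter, inches

# Head-tip writing speed: circumference times revolutions per second.
tip_speed = math.pi * head_wheel_diameter * (head_wheel_rpm / 60)

# The recorded track is the vector sum of the fast transverse head sweep and
# the slow forward tape motion, so it tilts away from perpendicular by only a
# small angle: arctan(tape speed / head-tip speed).
tilt = math.degrees(math.atan(tape_speed / tip_speed))
track_angle = 90.0 - tilt
print(f"tip speed ≈ {tip_speed:.0f} in/s, track angle ≈ {track_angle:.2f}°")
```

With these numbers the head tip moves at roughly 1,500 inches per second, so the track ends up around half a degree off perpendicular, i.e. "slightly less than ninety degrees".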

Originally, video was edited by visualizing the recorded track with ferrofluid, cutting it with a razor blade or guillotine cutter, and splicing it with adhesive tape, in a manner similar to film editing. This was an arduous process, avoided where possible. When it was used, the two pieces of tape to be joined were painted with a solution of extremely fine iron filings suspended in carbon tetrachloride, a toxic and carcinogenic compound. This "developed" the magnetic tracks, making them visible when viewed through a microscope so that they could be aligned in a splicer designed for this task. The tracks had to be cut during a vertical retrace, without disturbing the odd-field/even-field ordering. The cut also had to be at the same angle at which the video tracks were laid down on the tape. Since the video and audio read heads were several inches apart, it was not possible to make a physical edit that would function correctly in both video and audio. The cut was made for the video, and a portion of audio was then re-copied into the correct relationship, the same technique used for editing 16mm film with a combined magnetic audio track.

The disadvantages of physically editing tapes were many. Some broadcasters decreed that edited tapes could not be reused, in an era when the relatively high cost of the machines and tapes was balanced by the savings involved in being able to wipe and reuse the media. Others, such as the BBC, allowed reuse of spliced tape in certain circumstances as long as it conformed to strict criteria about the number of splices in a given duration. The process required great skill, and often resulted in edits that would roll (lose sync), and each edit took several minutes to perform, although this was also initially true of the electronic editing that came later.

In the USA, the television show Rowan & Martin's Laugh-In was the first and possibly only TV show to make extensive use of splice editing of videotape.

Introduction of computerized systems

A system for editing Quad tape "by hand" was developed by the 1960s. It was really just a means of synchronizing the playback of two machines so that the signal of the new shot could be "punched in" with a reasonable chance of success. One problem with this and early computer-controlled systems was that the audio track was prone to suffer artifacts (i.e. a short buzzing sound), because the video of the newly recorded shot would record into the side of the audio track. A commercial solution known as "Buzz Off" was used to minimize this effect.

For more than a decade, computer-controlled Quad editing systems were the standard post-production tool for television. Quad tape involved expensive hardware, time-consuming setup and relatively long rollback times for each edit, and showed misalignment as disagreeable "banding" in the video. That said, Quad tape had better bandwidth than any smaller-format analogue tape and, properly handled, could produce a picture indistinguishable from that of a live camera. When helical scan video recorders became the standard, it was no longer possible to physically cut the tape. At this point video editing became a process of using two video tape machines, playing back the source tape (or "raw footage") from one machine and copying just the portions desired onto a second tape (the "edit master").


The bulk of linear editing is done simply, with two machines and an edit controller device to control them. Many video tape machines are capable of controlling a second machine, eliminating the need for an external edit controller.

This process is "linear", rather than non-linear, as the nature of tape-to-tape copying requires that all shots be laid out in their final edited order. Once a shot is on tape, nothing can be placed ahead of it without overwriting whatever is already there. If absolutely necessary, material can be dubbed by copying the edited content onto another tape; however, as each copy generation degrades the image cumulatively, this is not desirable.
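A toy model makes the constraint concrete. Treating the edit master as a fixed sequence of frame slots shows why lengthening an early shot forces everything after it to be re-recorded; all names and values here are invented purely for illustration:

```python
# A toy model of why tape-to-tape editing is "linear": the edit master is a
# fixed sequence of frames, and recording always overwrites what is there.
# Frame values are just labels standing in for picture content.

def record(tape, start, frames):
    """Record `frames` onto `tape` beginning at slot `start`, overwriting."""
    for i, frame in enumerate(frames):
        tape[start + i] = frame

master = ["-"] * 10           # a blank edit master, 10 frame slots long
record(master, 0, ["A"] * 4)  # lay down shot A
record(master, 4, ["B"] * 4)  # then shot B immediately after it

# Lengthening shot A by one frame overwrites the start of shot B, so B (and
# everything after it) would have to be laid down again in sequence:
record(master, 0, ["A"] * 5)
print("".join(master))  # → AAAAABBB--
```

In a non-linear system the same change is just an edit to a list of references, with nothing overwritten.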

One drawback of the early video editing technique was that it was impractical to produce a rough cut for presentation to an executive producer. Since executive producers are never familiar enough with the material to be able to visualise the finished product from inspection of an edit decision list (EDL), they were deprived of the opportunity to voice their opinions at a time when those opinions could easily be acted upon. Thus, particularly in documentary television, video was resisted for quite a long time.

NON-LINEAR EDITING

A non-linear editing system (NLE) is a video editing (NLVE) or audio editing (NLAE) digital audio workstation (DAW) system which can perform random access non-destructive editing on the source material. It is named in contrast to 20th century methods of linear video editing and film editing.

Non-linear editing is a video editing method which enables direct access to any video frame in a digital video clip, without needing to play or scrub/shuttle through adjacent footage to reach it, as was necessary with historical video tape linear editing systems. It is the most natural approach when all assets are available as files on hard disks rather than recordings on reels or tapes, while linear editing is related to the need to sequentially view a film or read a tape to edit it. On the other hand, the NLE method is similar in concept to the "cut and paste" technique used in film editing. However, with the use of non-linear editing systems, the destructive act of cutting of film negatives is eliminated. Non-linear, non-destructive editing methods began to appear with the introduction of digital video technology. It can also be viewed as the audio/video equivalent of word processing, which is why it is called desktop video editing in the consumer space.[1]


Operation

Video and audio data are first captured to hard disks, a video server, or other digital storage devices. The data are either recorded direct to disk or imported from another source. Once imported, the source material can be edited on a computer using any of a wide range of video editing software. For a comprehensive list of available software, see List of video editing software, whereas Comparison of video editing software gives more detail of features and functionality.

In non-linear editing, the original source files are not lost or modified during editing. Professional editing software records the decisions of the editor in an edit decision list (EDL), which can be interchanged with other editing tools. Many generations and variations of the original source files can exist without the need to store many different copies, allowing for very flexible editing. It also makes it easy to change cuts and undo previous decisions simply by editing the edit decision list (without having to duplicate the actual film data). Generation loss is also controlled, since the data need not be repeatedly re-encoded when different effects are applied.
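The idea can be sketched in a few lines: the edit is a list of decisions, rendering copies only the selected ranges, and undoing a decision is just editing the list. The dictionary-based EDL below is invented for illustration and is not CMX3600 or any other real interchange format:

```python
# A minimal sketch of non-destructive editing: the edit is just a list of
# decisions (source, in-point, out-point, in frames); sources are never modified.

edl = [
    {"source": "tape1.mov", "in": 120, "out": 180},
    {"source": "tape2.mov", "in": 0,   "out": 90},
]

def render(edl, media):
    """Assemble the programme by copying only the selected frame ranges."""
    out = []
    for event in edl:
        out.extend(media[event["source"]][event["in"]:event["out"]])
    return out

# Toy "media": each source is simply a list of frame numbers.
media = {
    "tape1.mov": list(range(1000)),
    "tape2.mov": list(range(2000, 3000)),
}
cut = render(edl, media)
print(len(cut))  # → 150 (60 frames of tape1 + 90 frames of tape2)

# Undoing or changing a decision is just editing the list; the sources are
# untouched, and no generation loss occurs.
edl[1]["out"] = 45
print(len(render(edl, media)))  # → 105
```

A real EDL additionally carries record timecodes, transition types and reel names, but the principle is the same.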

Compared to the linear method of tape-to-tape editing, non-linear editing offers the flexibility of film editing, with random access and easy project organization. With edit decision lists, the editor can work on low-resolution copies of the video. This makes it possible to edit both standard-definition and high-definition broadcast-quality material very quickly on normal PCs that do not have the power to process the huge full-quality high-resolution data in real time.

The costs of editing systems have dropped such that non-linear editing tools are now within the reach of home users. Some editing software can now be accessed free as web applications; some, like Cinelerra (focused on the professional market) and Blender3D, can be downloaded as free software; and some, like Microsoft's Windows Movie Maker or Apple Inc.'s iMovie, come included with the appropriate operating system.

A multimedia computer for non-linear video editing will usually have a video capture card for capturing analog video and/or a FireWire connection for capturing digital video from a DV camera, along with video editing software. Modern web-based editing systems can take video directly from a camera phone over a GPRS or 3G mobile connection, and editing can take place through a web browser interface; so, strictly speaking, a computer for video editing does not require any installed hardware or software beyond a web browser and an internet connection.

Various editing tasks can then be performed on the imported video before it is exported to another medium, or MPEG encoded for transfer to a DVD.

History

The first truly non-linear editor, the CMX 600, was introduced in 1971 by CMX Systems, a joint venture between CBS and Memorex. It recorded and played back black-and-white analog video in "skip-field" mode on modified disk pack drives the size of washing machines, which were commonly used at the time to store about half an hour of data digitally on mainframe computers. The 600 had a console with two monitors built in. The right monitor, which played the preview video, was used by the editor to make cuts and edit decisions using a light pen. The editor selected from options that were superimposed as text over the preview video. The left monitor was used to display the edited video. A Digital PDP-11 computer served as a controller for the whole system. Because the video edited on the 600 was in black and white and in low-resolution "skip-field" mode, the 600 was suitable only for offline editing.

Various approximations of non-linear editing systems were built in the '80s using computers coordinating multiple laser discs, or banks of VCRs. One example of these tape- and disc-based systems was Lucasfilm's EditDroid, which used several laserdiscs of the same raw footage to simulate random-access editing (a compatible system, SoundDroid, was developed by Lucasfilm for sound post-production, one of the earliest digital audio workstations). The LA-based post house Laser Edit (which later merged with Pacific Video as Laser-Pacific) also had an in-house system using recordable random-access laserdiscs. Another non-linear system was Ediflex, which used a bank of multiple Sony Betamax VCRs for offline editing. All were slow and cumbersome, and had problems with the limited computer horsepower of the time, but the mid-to-late 1980s saw a trend towards non-linear editing, moving away from film editing on Moviolas and the linear videotape method (usually employing 3/4" VCRs).

The term "non-linear editing" was formalized in 1991 with the publication of Michael Rubin's Nonlinear: A Guide to Digital Film and Video Editing (Triad, 1991), which popularized this terminology over other language common at the time, including "real time" editing, "random-access" or "RA" editing, "virtual" editing, "electronic film" editing, and so on. The handbook has remained in print since 1991 and is currently in its 4th edition (Triad, 2000).

Computer processing advanced sufficiently by the end of the '80s to enable true digital imagery, and has progressed today to provide this capability in personal desktop computers.

An example of computing power progressing to make non-linear editing possible was demonstrated in the first all-digital non-linear editing system to be released, the "Harry" effects compositing system manufactured by Quantel in 1985. Although it was more of a video effects system, it had some non-linear editing capabilities. Most importantly, it could record (and apply effects to) 80 seconds (due to hard disk space limitations) of broadcast-quality uncompressed digital video encoded in 8-bit CCIR 601 format on its built-in hard disk array.

Non-linear editing with computers as we know it today was first introduced by Editing Machines Corp. in 1989 with the EMC2 editor, a hard-disk-based non-linear offline editing system using half-screen-resolution video at 15 frames per second. A couple of weeks later that same year, Avid introduced the Avid/1, the first in the line of its Media Composer systems. It was based on the Apple Macintosh computer platform (Macintosh II systems were used) with special hardware and software developed and installed by Avid. The Avid/1 was not, however, the first system to introduce modern concepts in non-linear editing such as timeline editing and clip bins; both of these were pioneered in Lucasfilm's EditDroid in the early 1980s.

The video quality of the Avid/1 (and of later Media Composer systems from the late '80s) was somewhat low (about VHS quality), due to the use of a very early version of the Motion JPEG (M-JPEG) codec. But it was enough to make it a very versatile system for offline editing, and to revolutionize video and film editing. The first long-form documentary edited this way was the HBO program Earth and the American Dream, which went on to win a National Primetime Emmy Award for Editing in 1993. Avid quickly became the dominant NLE platform.

The NewTek Video Toaster Flyer included non-linear editing capabilities in addition to processing live video signals. The Flyer used hard drives to store video clips and audio, and allowed complex scripted playback. It was capable of simultaneous dual-channel playback, which let the Toaster's video switcher perform transitions and other effects on video clips without the need for rendering. The Flyer portion of the Video Toaster/Flyer combination was a complete computer of its own, with its own microprocessor and embedded software. Its hardware included three embedded SCSI controllers: two of these SCSI buses were used to store video data, and the third to store audio. The Flyer used a proprietary wavelet compression algorithm known as VTASC, which was well regarded at the time for offering better visual quality than comparable Motion JPEG-based non-linear editing systems.

Until 1993, the Avid Media Composer could be used only for editing commercials or other short content, because the Apple Macintosh computers could access only 50 gigabytes of storage at one time. In 1992, this limitation was overcome by a group of industry experts led by Rick Eye, a Digital Video R&D team at the Disney Channel. By February 1993, this team had integrated a long-form system that gave the Avid Media Composer's Apple Macintosh access to over 7 terabytes of digital video data. With instant access to the shot footage of an entire movie, long-form non-linear editing (motion picture editing) was now possible. The system made its debut at the NAB conference in 1993, in the booths of the three primary sub-system manufacturers: Avid, Silicon Graphics and Sony. Within a year, thousands of these systems had replaced a century of 35mm film editing equipment in major motion picture studios and TV stations worldwide, making Avid the undisputed leader in non-linear editing systems for over a decade.[2]

Although M-JPEG became the standard codec for NLE during the early 1990s, it had drawbacks. Its high computational requirements ruled out software implementations, leading to the extra cost and complexity of hardware compression/playback cards. More importantly, the traditional tape workflow had involved editing from tape, often in a rented facility, and when editors left the edit suite they could take their confidential video tapes with them. But the M-JPEG data rate was too high for systems like Avid on the Mac and Lightworks on the PC to store the video on removable storage, so these systems used fixed hard disks instead. The tape paradigm of keeping confidential content with you was not possible with these fixed disks. Editing machines were often rented from facilities houses on a per-hour basis, and some productions chose to delete their material after each edit session and recapture it the next day, in order to guarantee the security of their content. In addition, each NLE system had storage limited by its hard disk capacity.

These issues were addressed by a small UK company, Eidos plc. Eidos chose the new ARM-based computers from the UK and implemented an editing system, launched in Europe in 1990 at the International Broadcasting Convention. Because it implemented its own compression software designed specifically for non-linear editing, the Eidos system had no requirement for JPEG hardware and was cheap to produce. The software could decode multiple video and audio streams at once for real-time effects at no extra cost. But most significantly, for the first time, it allowed effectively unlimited quantities of cheap removable storage. The Eidos Edit 1, Edit 2, and later Optima systems allowed editors to use any Eidos system, rather than being tied down to a particular one, and still keep their data secure. The Optima software editing system was closely tied to Acorn hardware, so when Acorn stopped manufacturing the Risc PC in the late 1990s, Eidos discontinued the Optima system.

In the early 1990s a small American company called Data Translation took what it knew about coding and decoding pictures for the US military and large corporate clients and threw $12m into developing a desktop editor which would use its proprietary compression algorithms and off-the-shelf parts. Their aim was to 'democratize' the desktop and take some of Avid's market. In August 1993 Media 100 entered the market and thousands of would-be editors had a low-cost, high-quality platform to use.

Around the same period there were two other competitors providing non-linear systems that required special hardware, often cards that had to be added to the computer. Fast Video Machine was a PC-based system that first came out as an offline system and later became more capable of online editing. The Immix Video Cube was also a contender for media production companies; it had a control surface with faders to allow mixing, and shuttle controls, without the purchase of third-party controllers. Data Translation's Media 100 came with three different JPEG codecs for different types of video graphics and many resolutions. The Media 100 system kept increasing its maximum video resolution via software upgrades rather than hardware, because the Media 100 cards had enough processing power to be expanded to resolutions as high as those of Avid systems at the upper end of the Avid product line. Cards at the time had embedded, dedicated CPUs (for example a Motorola 68000 processor), which were as powerful as the processors inside the Macintosh systems that hosted the application. These other companies put tremendous downward market pressure on Avid, which was forced to continually offer lower-priced systems to compete with the Media 100 and other systems.

Inspired by the success of Media 100, members of the Premiere development team left Adobe to start a project called "Keygrip" for Macromedia. Difficulty raising support and money for development led the team to take their non-linear editor to NAB. After various companies made offers, Keygrip was purchased by Apple, as Steve Jobs wanted a product to compete with Adobe Premiere in the desktop video market. At around the same time, Avid, now with Windows versions of its editing software, was considering abandoning the Macintosh platform. Apple released Final Cut Pro in 1999, and despite not being taken seriously by professionals at first, it has evolved into a serious competitor to Avid.

DV

Another leap came in the late 1990s with the launch of DV-based video formats for consumer and professional use. With DV came IEEE 1394 (FireWire/iLink), a simple and inexpensive way of getting video into and out of computers. The video no longer had to be converted from an analog signal to digital data — it was recorded as digital to start with — and FireWire offered a straightforward way of transferring that data without the need for additional hardware or compression. With this innovation, editing became a more realistic proposition for standard computers with software-only packages. It enabled real desktop editing producing high-quality results at a fraction of the cost of other systems.

HD

More recently, the introduction of highly compressed HD formats such as HDV has continued this trend, making it possible to edit HD material on a standard computer running a software-only editing application.

Avid is still considered the industry standard, with the majority of major feature films, television programs, and commercials created with its NLE systems. Final Cut Pro received a Technology & Engineering Emmy Award in 2002 and continues to develop a following.


Avid has held on to its market-leading position despite the advent of cheaper software packages, notably Adobe Premiere in 1992 and Final Cut Pro in 1999. These three competing products by Avid, Adobe, and Apple are the foremost NLEs, often referred to as the A-Team.[3] With advances in raw computer processing power, new products have appeared, including NewTek's software application SpeedEdit.

Since 2000, many personal computers have included basic non-linear video editing software free of charge. This is the case with Apple iMovie for the Macintosh platform, PiTiVi for the Linux platform (installed by default on Ubuntu, the dominant desktop Linux distribution), and Windows Movie Maker for the Windows platform. This phenomenon has brought low-cost non-linear editing to consumers.

Quality

At one time, a primary concern with non-linear editing had been picture and sound quality. Storage limitations at the time required that all material undergo lossy compression techniques to reduce the amount of memory occupied.

Improvements in compression techniques and disk storage capacity have mitigated these concerns, and the migration to high-definition video and audio has virtually eliminated them. Most professional NLEs are also able to edit uncompressed video with the appropriate hardware.
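To see why storage was such a constraint, consider the data rate of uncompressed standard-definition video; a back-of-the-envelope sketch (the format figures chosen here are illustrative assumptions):

```python
# Rough data-rate estimate for uncompressed 8-bit 4:2:2 SD video
# (720x576 at 25 fps, a common PAL studio raster). Illustrative only.

def uncompressed_rate_mb_per_s(width, height, fps, bytes_per_pixel):
    """Return the raw video data rate in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1_000_000

# 4:2:2 sampling averages 2 bytes per pixel (Y every pixel, Cb/Cr shared).
rate = uncompressed_rate_mb_per_s(720, 576, 25, 2)
print(f"{rate:.1f} MB/s")                       # 20.7 MB/s
print(f"{rate * 3600 / 1000:.0f} GB per hour")  # 75 GB per hour
```

At roughly 75 GB per hour of footage, early disk systems plainly could not hold raw material, which is why lossy compression was unavoidable at first.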

Offline editing

Offline editing is part of the post-production process of filmmaking and television production in which raw footage is copied and edited, without affecting the camera original film stock or video tape. Once the project has been completely offline edited, the original media will be assembled in the online editing stage.

The term offline originated in the computing and telecommunications industries, meaning "not under the direct control of another device" (automation).


Modern offline video editing is conducted in a non-linear editing (NLE) suite. The digital revolution made the offline editing workflow immeasurably quicker, as practitioners moved from time-consuming tape-to-tape linear online editing suites to computer hardware and video editing software such as Adobe Premiere, Final Cut Pro, Avid, Sony Vegas and Lightworks. Typically, all the original footage (often tens or hundreds of hours) is digitized into the suite at a low resolution. The editor and director are then free to work with all the options to create the final cut.

History

Film editing used an offline approach almost from the beginning. Film editors worked with a workprint of the original film negative to protect the negative from handling damage. When two-inch quadruplex video tape recording was first introduced by Ampex in 1956, it could not be physically cut and spliced as simply and cleanly as film negatives could be. One error-prone option was to cut the tape with a razor blade. Since there was no visible frame line on the 2-inch-wide (51 mm) tape, a special ferrofluid developing solution was applied to the tape, allowing the editor to view the recorded control track pulse under a microscope and thus determine where one frame ended and the next began. This process was not always exact, and if imperfectly performed would lead to picture breakup when the cut was played. Generally this process was used to assemble scenes together, not for creative editing.

The second option for video editing was to use two tape machines, one playing back the original tapes and the other recording that playback. The original tapes were pre-rolled, manually cued to a few seconds prior to the start of a shot on the player, while the recorder was set to record. Each machine was rolled forward simultaneously, and a punch-in recording, similar to the punch-in/out of early multitrack audio recording, was made at the appropriate moment. Beyond not being very precise, recorders of this era cost much more than a house, making this an expensive use of the machines. This technique of re-recording from source to edit master came to be known as linear video editing.

This was the way things were for television shows shot on tape for the first 15 years. Even such fast-paced shows as Rowan & Martin's Laugh-In continued to use the razor blade technique.


New technological developments

Three developments of the late sixties and early seventies revolutionized video editing, and made it possible for television to have its own version of the film workprint/conform process.

Time code

The first was the invention of time code. Whereas film negative had numbers printed optically along the side of the film, so that every frame could be identified exactly, video tape had no such system: only video, audio, and a control pulse were recorded. Early attempts to rectify this were primitive, to say the least; an announcer reciting the seconds was recorded onto an audio channel on the tape. Time code introduced frame precision by recording a machine-readable signal on an audio channel. A time code reader translated this signal into hours, minutes, seconds and frames, originally displayed on a Nixie tube display and later with LED readouts. This innovation made it possible for the editor to note the exact frames at which to make a cut, and thus be much more precise. The editor could create a paper edit by writing down the numbers of the first and last frames of each shot, and then arranging them in order on paper prior to the actual edit session with the expensive VTRs.
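The arithmetic a timecode reader performs is straightforward; a sketch assuming a 25 fps (PAL) system, where an absolute frame count maps directly to hours, minutes, seconds and frames with no drop-frame correction:

```python
def frames_to_timecode(frame_count, fps=25):
    """Convert an absolute frame count to HH:MM:SS:FF non-drop-frame timecode."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(0))             # 00:00:00:00
print(frames_to_timecode(25 * 90 + 7))   # 00:01:30:07
```

NTSC's 29.97 fps rate needs drop-frame correction on top of this, which is why real readers are slightly more involved.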

Cheaper video recorders

The second development was cheaper video recorders. Though not suitable for broadcast use directly, these provided a way to make a copy of the master, with its time code visibly inserted into a small box or 'time code window' in the picture. This tape could then be played in an office or at home on a video recorder costing only as much as a used car. The editor would note down the numbers of the shots and decide the order. They might simply write them in a list, or they might dub from one of these small machines to another to create a rough cut edit, and note the necessary frame numbers by watching this tape.

Exact editing

Though both of these developments helped greatly, effectively creating the offline editing method, they didn't solve the problem of precisely controlling the video recorder for frame-accurate editing. That required precise control of the tape transport mechanism, using a dedicated edit controller that could read the time code and perform an edit exactly on cue.

That innovation came about as a result of research conducted by CMX, a joint venture of the CBS and Memorex corporations. The intent was to create a much less haphazard method of editing video directly, one with all of the creative control of traditional film editing. The result, the CMX 600, accomplished this goal with a two-part process. Camera master tapes were dubbed as black-and-white analog video to very large computer memory discs. The editor could access any shot exactly, and quickly edit a precise black-and-white, low-quality version of the program. More importantly, re-editing was trivial, as no cuts were actually performed: the shots were simply accessed and played in sequence from the disc in real time. The computer kept track of all the numbers in this offline stage of the process and, when the editor was satisfied, output them as an edit decision list (EDL). This EDL was used in the final stage of the process, the online edit. To make it work, special computer-to-video tape recorder (VTR) edit interfaces, called I-Squareds, had to be developed. Under the control of a computer reading back the EDL, these I-Squareds shuttled the broadcast-quality VTRs exactly to the points necessary to record an edit master with exact edits from the source tapes.
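At its core, an EDL is just an ordered list of events, each pairing a source reel's in/out points with record in/out points on the master. A minimal sketch of the idea (the event data and exact layout here are invented for illustration; the CMX 600's own format differed):

```python
# Minimal sketch of an edit decision list: each event maps a source
# reel's in/out timecodes onto the record (master) timeline.

def format_event(num, reel, s_in, s_out, r_in, r_out):
    """Render one EDL event line: video track (V), cut transition (C)."""
    return f"{num:03d}  {reel}  V  C  {s_in} {s_out}  {r_in} {r_out}"

events = [
    (1, "REEL01", "01:00:10:00", "01:00:14:00", "00:00:00:00", "00:00:04:00"),
    (2, "REEL02", "02:03:00:00", "02:03:02:12", "00:00:04:00", "00:00:06:12"),
]

for event in events:
    print(format_event(*event))
```

The point is that the list alone carries enough information for a later online session to reproduce every edit frame-accurately from the original tapes.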

Though this first attempt at non-linear video editing, recording to computer disc packs, was abandoned as too expensive, the rest of the hardware was recycled into the offline/online edit process that remained dominant in television production for the next 20 years or more.

Although tape formats changed from open reels to videocassettes, and all the equipment rapidly became much cheaper, the basics of the process remained the same: an editor would cut offline on a less expensive, lower-quality format before entering the online editing suite with an EDL and the master source tapes to finish the broadcast-quality version of the television show.


Online editing

Online editing is an older post-production linear video editing process performed in the final stage of a video production, after offline editing. For the most part, online editing has been replaced by video editing software that operates on non-linear editing systems (NLE).

The term online originated in the telecommunication industry, meaning "under the direct control of another device" (automation). Online edit controllers such as the Sony BVE-9000 used the RS-422 remote control 9-pin protocol to allow their computer interface to control video tape recorders (VTR) via a series of commands. The protocol supports a variety of devices, including one-inch reel-to-reel Type C videotape machines as well as videocassette recorders (VCR), allowing them to fast-forward, rewind, play and record based on SMPTE timecode. The controllers can also interface with professional audio equipment such as audio mixers with console automation.

Early non-linear systems such as Avid's Media Composer, introduced in 1989, were incapable of producing broadcast-quality images due to computer processing limitations. The term 'online' thus shifted from its original meaning to describe the stage where the pictures are re-assembled at full, or 'online', resolution. An edit decision list (EDL) or equivalent created during the offline edit is used to carry over the cuts and dissolves from the offline edit. This conform is checked against a video copy of the offline edit to verify that the edits are correct and frame-accurate. This workprint (cutting copy in the UK) also provides a reference for any digital video effects that need to be added.

After conforming the project, the online editor will add visual effects, lower-third titles, and apply color correction. This process is typically supervised by the client(s). The editor will also ensure that the program meets the broadcaster's technical delivery and broadcast-safe specifications, ensuring proper video levels, aspect ratio, and blanking width.

Sometimes the online editor will package the show, putting together each version. Each version may have different requirements for formatting (e.g. closed blacks), bumper music, use of a commercial bumper, different closing credits, etc.


Projects may be re-captured at the lowest level of video compression possible, ideally with no compression at all.

Vision mixer

A vision mixer (also called video switcher, video mixer or production switcher) is a device used to select between several different video sources and, in some cases, composite (mix) video sources together to create special effects. This is similar to what a mixing console does for audio.

The terms vision mixer and video mixer, describing both the equipment and its operator, are used almost exclusively in Europe. In the United States, the equipment is called a video production switcher, and its operator is known as the technical director (TD), part of the television crew.

Term usage

Typically a vision mixer would be found in a video production environment such as a television studio, production truck, OB Van or linear video editing bay of a post-production facility.

Capabilities and usage in TV production

Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations and generate color signals (called mattes in this context). Most vision mixers are targeted at the professional market, with newer analog models having component video connections and digital ones using Serial Digital Interface (SDI). They are used in live television with video tape recording (VTR) and video servers for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer based Non-linear editing systems (NLE).

Older professional mixers worked with composite video, analog signal inputs. There are still a number of consumer video switchers with composite video, S-Video or even FireWire available. These are often used for VJing, presentations, and small multi-camera productions.

Operation


A Sony BVS-3200CP vision mixer.

The main concept of a professional vision mixer is the bus, basically a row of buttons with each button representing a video source. Pressing such a button selects the video output of that bus. Older video mixers had two equivalent buses (called the A and B bus; such a mixer is known as an A/B mixer). One of these buses could be selected as the main out (or program) bus. Most modern mixers, however, have one bus that is always the program bus, the second main bus being the preview (sometimes called preset) bus. These mixers are called flip-flop mixers, since the selected sources of the preview and program buses can be exchanged. Both the preview and program buses usually have their own video monitor.
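The flip-flop arrangement described above is easy to model; a toy sketch (the class and source names are invented for illustration):

```python
class FlipFlopMixer:
    """Toy model of a flip-flop vision mixer: the program bus is always
    hot, and a cut instantly swaps the preview and program selections."""

    def __init__(self, sources):
        self.sources = list(sources)
        self.program = sources[0]   # the hot (on-air) bus
        self.preview = sources[1]   # the preset bus

    def select_preview(self, source):
        """Punch up a source on the preview bus."""
        assert source in self.sources
        self.preview = source

    def cut(self):
        """Swap preview and program, as the 'cut'/'take' button does."""
        self.program, self.preview = self.preview, self.program

mixer = FlipFlopMixer(["CAM1", "CAM2", "VTR"])
mixer.select_preview("VTR")
mixer.cut()
print(mixer.program)  # VTR
print(mixer.preview)  # CAM1
```

Note how, after the cut, the old program source lands on preview, ready to be taken back, which is exactly the behaviour the flip-flop name describes.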

Another main feature of a vision mixer is the transition lever, also called a T-bar or Fader Bar. This lever, similar to an audio fader, creates a smooth transition between two buses. Note that in a flip-flop mixer, the position of the main transition lever does not indicate which bus is active, since the program bus is always the active or hot bus. Instead of moving the lever by hand, a button (commonly labeled "mix", "auto" or "auto trans") can be used, which performs the transition over a user-defined period of time. Another button, usually labeled "cut" or "take", directly swaps the preview to the program without any transition. The type of transition used can be selected in the transition section. Common transitions include dissolves (similar to an audio crossfade) and pattern wipes.
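Mathematically, a dissolve is the same linear crossfade used in audio: at lever position t, each output pixel is a weighted mix of the two buses. A per-pixel sketch:

```python
def dissolve(pixel_a, pixel_b, t):
    """Linear crossfade between two pixel values; t=0 is all A, t=1 is all B."""
    return round((1 - t) * pixel_a + t * pixel_b)

# Halfway through the transition, a black pixel (0) dissolving to white (255):
print(dissolve(0, 255, 0.5))  # 128
```

The "auto" button simply sweeps t from 0 to 1 over the chosen duration instead of the operator moving the lever by hand; a wipe replaces the per-pixel weight with a pattern that decides, per pixel, which bus is shown.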

The third bus, used for compositing, is the key bus. A mixer can have more than one key bus, but they usually share one set of buttons. Here, one signal can be selected for keying over the program bus. The digital on-screen graphic image that will be seen in the program is called the fill, while the mask used to cut the key's translucence is called the source. This source, e.g. chrominance, luminance, pattern (using the internal pattern generator) or split (using an additional video signal similar to an alpha channel), can be selected in the keying section of the mixer. Note that instead of the key bus, other video sources can be selected for the fill signal, but the key bus is usually the most convenient method for selecting a key fill. Usually, a key is turned on and off the same way a transition is. For this, the transition section can be switched from program (or background) mode to key mode. Often, the transition section allows background video and one or more keyers to be transitioned separately or in any combination with one push of the "auto" button.
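As an illustration of the keying idea, a luminance key can be sketched as: derive a mask from the key source's brightness, then choose between fill and background per pixel (a hard threshold is a simplification of the soft clip/gain controls on real keyers):

```python
def luma_key(background, fill, key_source, threshold=128):
    """Where the key source is brighter than the threshold, show the fill;
    elsewhere, show the background. Inputs are flat lists of luma values."""
    return [f if k > threshold else b
            for b, f, k in zip(background, fill, key_source)]

bg   = [10, 10, 10, 10]       # program (background) video
fill = [200, 200, 200, 200]   # graphic to be keyed in
key  = [0, 255, 255, 0]       # white where the graphic should appear
print(luma_key(bg, fill, key))  # [10, 200, 200, 10]
```

A chrominance key works the same way, except the mask is derived from how close each pixel's color is to a chosen hue rather than from its brightness.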

These three main buses together form the basic mixer section called Program/Preset or P/P. Bigger production mixers may have a number of additional sections of this type, which are called Mix/Effects (M/E for short) and numbered. Any M/E section can be selected as a source in the P/P stage, making the mixer operations much more versatile, since effects or keys can be composed "offline" in an M/E and then go "live" at the push of one button.

After the P/P section, there is another keying stage called the downstream keyer (DSK). It is mostly used for keying text or graphics, and has its own "Cut" and "Mix" buttons. The signal before the DSK keyer is called clean feed. After the DSK is one last stage that overrides any signal with black, usually called Fade To Black or FTB.

Modern vision mixers may also have additional functions, such as serial communications with the ability to use proprietary communications protocols, control aux channels for routing video signals to other sources than the program out, macro programming, and DVE (Digital Video Effects) capabilities. Mixers are often equipped with effects memory registers, which can store a snapshot of any part of a complex mixer configuration and then recall the setup with one button press.

Setup

Rear connection panel of a Sony DVS-7000 vision mixer main unit. Some of the BNC connectors accept source inputs, while others output video from the various buses and aux channels. The D-subminiature ports interface with other equipment such as the keyer, tally, and control panel.


Since vision mixers combine various video signals such as VTRs and professional video cameras, it is very important that all these sources are in proper synchronization with one another. In professional facilities all the equipment is Genlocked with colorburst from a video-signal generator. The signals which cannot be synchronized (either because they originate outside the facility or because the particular equipment doesn’t accept external sync) must go through a frame synchronizer. Some vision mixers have internal “frame-syncs” or they can be a separate piece of equipment, such as a "time base corrector". If the mixer is used for video editing, the editing console (which usually controls the vision mixer remotely) must also be synched. Most larger vision mixers divide the control panel from the actual circuitry because of noise, temperature and cable length considerations. The control panel is located in the production control room, while the main unit, to which all cables are connected, is located in a machine room alongside the other hardware.

Chapter III

Graphics

Graphics are visual presentations on some surface, such as a wall, canvas, screen, paper, or stone to brand, inform, illustrate, or entertain. Examples are photographs, drawings, Line Art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, or other images. Graphics often combine text, illustration, and color. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, web site, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely, the creation of a distinctive style.

Graphics can be functional or artistic. The latter can be a recorded version, such as a photograph, or an interpretation by a scientist to highlight essential features, or by an artist, in which case the distinction from imaginary graphics may become blurred.


Drawing

Drawing generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface. Common tools are graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoals, pastels, and markers. Digital tools which simulate the effects of these are also used. The main techniques used in drawing are line drawing, hatching, crosshatching, random hatching, scribbling, stippling, blending, and shading.

Drawing is generally considered distinct from painting, in which colored pigments are suspended in a liquid medium and usually applied with a brush. Artists noted for their drawing include Michelangelo, Rembrandt, Raphael and Leonardo da Vinci.

Many people choose drawing as their main art form, or they may use it to make sketches for paintings, sculptures and other types of art. A related term is engineering graphics, the language engineers use to represent three-dimensional ideas when planning and implementing their designs. It comprises projection, development, perspective, section, intersection, and isometric ideation.

Line art

Line art is a rather non-specific term sometimes used for any image that consists of distinct straight and curved lines placed against a (usually plain) background, without gradations in shade (darkness) or hue (color) to represent two-dimensional or three-dimensional objects. Line art is usually monochromatic, although lines may be of different colors.

Illustration


An illustration is a visual representation such as a drawing, painting, photograph or other work of art that stresses subject more than form. The aim of an illustration is to elucidate or decorate a story, poem or piece of textual information (such as a newspaper article), traditionally by providing a visual representation of something described in the text. The editorial cartoon, also known as a political cartoon, is an illustration containing a political or social message.

Illustrations can be used to display a wide range of subject matter and serve a variety of functions, such as:

giving faces to characters in a story

displaying a number of examples of an item described in an academic textbook (e.g. a typology)

visualising step-wise sets of instructions in a technical manual

communicating subtle thematic tone in a narrative

linking brands to the ideas of human expression, individuality and creativity

making a reader laugh or smile

Graphs

A graph or chart is a type of information graphic that represents tabular, numeric data. Charts are often used to make it easier to understand large quantities of data and the relationships between different parts of the data.
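As a toy illustration of turning tabular numbers into a graphic, a few lines of Python can render a horizontal bar chart in plain text (the labels and values are made up):

```python
def bar_chart(data, width=20):
    """Render a label-to-value table as horizontal text bars scaled to `width`."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:>6} | {bar} {value}")
    return "\n".join(lines)

print(bar_chart({"2009": 5, "2010": 12, "2011": 20}))
```

Even this crude rendering makes the relative magnitudes visible at a glance, which is exactly what a chart adds over the raw numbers.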


Diagrams

A diagram is a simplified and structured visual representation of concepts, ideas, constructions, relations, statistical data, etc., used to visualize and clarify the topic.

Symbols

A symbol, in its basic sense, is a representation of a concept or quantity; i.e., an idea, object, concept, quality, etc. In more psychological and philosophical terms, all concepts are symbolic in nature, and representations for these concepts are simply token artifacts that are allegorical to (but do not directly codify) a symbolic meaning, or symbolism.

Maps

A map is a simplified depiction of a space, a navigational aid which highlights relations between objects within that space. Usually, a map is a two-dimensional, geometrically accurate representation of a three-dimensional space.

One of the first 'modern' maps was made by Waldseemüller.

Photography

One difference between photography and other forms of graphics is that a photographer, in principle, just records a single moment in reality, with seemingly no interpretation. However, a photographer can choose the field of view and angle, and may also use other techniques, such as various lenses to distort the view or filters to change the colors. In recent times, digital photography has opened the way to an infinite number of fast, but strong, manipulations. Even in the early days of photography, there was controversy over photographs of enacted scenes that were presented as 'real life' (especially in war photography, where it can be very difficult to record the original events). Shifting the viewer's eyes ever so slightly with simple pinpricks in the negative could have a dramatic effect.

The choice of the field of view can have a strong effect, effectively 'censoring out' other parts of the scene, accomplished by cropping them out or simply not including them in the photograph. This even touches on the philosophical question of what reality is. The human brain processes information based on previous experience, making us see what we want to see or what we were taught to see. Photography does the same, although the photographer interprets the scene for their viewer.

Computer graphics

There are two types of computer graphics: raster graphics, where each pixel is separately defined (as in a digital photograph), and vector graphics, where mathematical formulas are used to draw lines and shapes, which are then interpreted at the viewer's end to produce the graphic. Using vectors results in infinitely sharp graphics and often smaller files but, when complex, vectors take time to render and may produce larger files than a raster equivalent.
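The trade-off is easy to quantify for a simple shape; a sketch comparing a filled circle stored as raw pixels with the same circle as a single drawing command (the SVG markup is standard; the sizes are illustrative):

```python
# A 1000x1000 monochrome raster of a circle: one byte per pixel,
# and the cost grows with resolution.
raster_bytes = 1000 * 1000  # 1,000,000 bytes at this fixed resolution

# The same circle as a vector drawing command (standard SVG markup):
# a few dozen bytes, and sharp at every resolution.
vector = '<circle cx="500" cy="500" r="400" fill="black"/>'
vector_bytes = len(vector)

print(raster_bytes, vector_bytes)
```

The comparison flips for complex imagery such as a photograph, which would need millions of vector primitives but fits naturally in a raster grid.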

In 1950, the first computer-driven display was attached to MIT's Whirlwind I computer to generate simple pictures. This was followed by MIT's TX-0 and TX-2, interactive systems that increased interest in computer graphics during the late 1950s. In 1962, Ivan Sutherland invented Sketchpad, an innovative program that influenced alternative forms of interaction with computers.

In the mid-1960s, large computer graphics research projects were begun at MIT, General Motors, Bell Labs, and Lockheed Corporation. Douglas T. Ross of MIT developed an advanced compiler language for graphics programming. S. A. Coons, also at MIT, and J. C. Ferguson at Boeing began work on sculptured surfaces. GM developed its DAC-1 system, and other companies, such as Douglas, Lockheed, and McDonnell, also made significant developments. In 1968, ray tracing was first described by Arthur Appel of the IBM Research Center, Yorktown Heights, N.Y.

During the late 1970s, personal computers became more powerful, capable of drawing both basic and complex shapes and designs. In the 1980s, artists and graphic designers began to see the personal computer, particularly the Commodore Amiga and Macintosh, as a serious design tool, one that could save time and draw more accurately than other methods. 3D computer graphics became possible in the late 1980s with the powerful SGI computers, which were later used to create some of the first fully computer-generated short films at Pixar. The Macintosh remains one of the most popular tools for computer graphics in graphic design studios and businesses.

Modern computer systems, dating from the 1980s and onwards, often use a graphical user interface (GUI) to present data and information with symbols, icons and pictures, rather than text. Graphics are one of the five key elements of multimedia technology.

3D graphics became more popular in the 1990s in gaming, multimedia and animation. In 1996, Quake, one of the first fully 3D games, was released. In 1995, Toy Story, the first full-length computer-generated animation film, was released in cinemas. Since then, computer graphics have become more accurate and detailed, due to more advanced computers and better 3D modeling software applications, such as Maya, 3D Studio Max, and Cinema 4D.

Another use of computer graphics is screensavers, originally intended to prevent the layout of much-used GUIs from 'burning into' the computer screen. They have since evolved into true pieces of art, their practical purpose obsolete; modern screens are not susceptible to such burn-in artifacts.

Web graphics

Signature art used on web forums

In the 1990s, Internet speeds increased, and Internet browsers capable of viewing images were released, the first being Mosaic. Websites began to use the GIF format to display small graphics, such as banners, advertisements and navigation buttons, on web pages. Modern web browsers can now display JPEG, PNG and, increasingly, SVG images in addition to GIFs. Support for SVG, and to some extent VML, in some modern web browsers has made it possible to display vector graphics that are clear at any size. Plug-ins expand the web browser's functions to display animated, interactive and 3-D graphics contained within file formats such as SWF and X3D.

Modern web graphics can be made with software such as Adobe Photoshop, the GIMP, or Corel Paint Shop Pro. Users of Microsoft Windows have MS Paint, which many find to be lacking in features. This is because MS Paint is a drawing package and not a graphics package.

Numerous platforms and websites such as ForumFanatics have been created to cater to web graphics artists and host their communities. A growing number of people create internet forum signatures—generally appearing after a user's post—and other digital artwork, such as photo manipulations and large graphics. With computer game developers creating their own communities around their products, many more websites are being developed to offer graphics for fans and to enable them to show their appreciation of such games in their own gaming profiles. ForzaMotorsport.net, the official site for the Forza Motorsport series, offers information on the games and the cars available in each one, and also hosts a forum (sign-in uses the Xbox user's login).

Chapter IV

CONCLUSION

Film editing is part of the creative post-production process of filmmaking. It involves the selection and combining of shots into sequences, and ultimately creating a finished motion picture. It is an art of storytelling. Film editing is the only art that is unique to cinema, separating film-making from other art forms that preceded it (such as photography, theater, dance, writing, and directing), although there are close parallels to the editing process in other art forms like poetry or novel writing. Film editing is often referred to as the "invisible art"[1] because when it is well-practiced, the viewer can become so engaged that he or she is not even aware of the editor's work.


On its most fundamental level, film editing is the art, technique, and practice of assembling shots into a coherent whole. A film editor is a person who practices film editing by assembling the footage. However, the job of an editor isn’t simply to mechanically put pieces of a film together, cut off film slates, or edit dialogue scenes. A film editor must creatively work with the layers of images, story, dialogue, music, pacing, as well as the actors' performances to effectively "re-imagine" and even rewrite the film to craft a cohesive whole. Editors usually play a dynamic role in the making of a film.

With the advent of digital editing, film editors and their assistants have become responsible for many areas of filmmaking that used to be the responsibility of others. For instance, in past years, picture editors dealt only with just that—picture. Sound, music, and (more recently) visual effects editors dealt with the practicalities of other aspects of the editing process, usually under the direction of the picture editor and director. However, digital systems have increasingly put these responsibilities on the picture editor. It is common, especially on lower budget films, for the assistant editors or even the editor to cut in music, mock up visual effects, and add sound effects or other sound replacements. These temporary elements are usually replaced with more refined final elements by the sound, music, and visual effects teams hired to complete the picture.

Film editing is an art that can be used in diverse ways. It can create sensually provocative montages; become a laboratory for experimental cinema; bring out the emotional truth in an actor's performance; create a point of view on otherwise obtuse events; guide the telling and pace of a story; create an illusion of danger where there is none; give emphasis to things that would not have otherwise been noted; and even create a vital subconscious emotional connection to the viewer, among many other possibilities.


Chapter V

Work Experience

I have been working as a video editor at SUDERSHAN TV for more than 8 months. I have edited packages on various topics, from politics to Bollywood. Some of them are:

Shahrukh Khan IIFA pkg

Hrithik Roshan pkg

Katrina Kaif pkg

A.R. Rahman pkg

Bollywood pkg

Mulayam Yadav pkg

Mayawati pkg

Rahul Dravid pkg

Mahendra S. Dhoni pkg

Anna Hazare biography

Mumbai bomb blast pkg

CONCLUSION

Working as a video editor at 'SUDERSHAN TV' brought some major changes to my personality that are necessary for a media professional. This internship provided relevant experience in the field to include in my resume, as well as professional references and networking contacts. The following characteristics were added to my personality:

Patience.

Politeness.

Working together with different people as a team.

Working under pressure.

Meeting deadlines.

Punctuality.

Responsibility.

Chapter VI

BIBLIOGRAPHY

Websites:

download.nos.org/srsec335new/ch4.pdf

en.wikipedia.org/wiki/postproduction

www.caluniv.ac.in/Global%20mdia%20journal/.../C-2%20Kaul.pdf

Dying Industries, blogs.wsj.com/economics

Pell Research Report on Video Postproduction Services
