Editing: evolution, principles, functions, practices, and techniques


Editing is where the material that has been shot is blended together to form a
convincing, persuasive presentation. However, editing has a much more subtle role to play than simple piecing together. It is the technique of selecting and arranging shots: choosing their order, their duration, and the ways in
which they are to be joined together. Editing is where graphics, music, sound
effects, and special effects are added to the footage shot earlier. It has a significant influence on the viewers’ reactions to what they see and hear. Skilled editing makes a major contribution to the effectiveness of any production. Poor
editing can leave the audience confused and bored. The mechanics of editing
are simple enough, but the subtle effects of the editor's choices are a study apart.
Evolution of editing

As with most other film techniques, editing has evolved over time as technology and audience expectations have changed. The following is a brief history of this technique.
Before Editing: Like almost every basic idea about movies, the idea of editing has its precursors. Flashbacks had existed in novels; scene changes were already part of live theater; even narrated sequences had been a part of visual culture from medieval altar triptychs to late nineteenth-century comic strips. But the very earliest filmmakers were afraid to edit film shots together because they assumed that splicing together different shots of different things from different positions would simply confuse audiences.
Early days of editing: However, filmmakers quickly discovered that editing shots into a sequence not only contributed to the audience's sense of story, but also enabled them to tell more complex stories. You can see primitive instances of editing in films like Rescued by Rover (Great Britain, 1904) and The Great Train Robbery (1903). Early on, cuts were made in the camera: the cameraman would simply stop cranking at the exact end of a shot and begin cranking again when the camera was moved somewhere else, or when something else was put in front of it. This kind of editing allowed for some early special effects. In films made around the turn of the century, Georges Méliès stops the camera after detonating a magic puff of smoke in front of his actor, then starts it again after the actor has left the stage, making it seem as if the actor has magically vanished.
D.W. Griffith was the first film director to become a superstar in the world of cinema in his day. Although he may not have invented devices like the close-up and certain concepts of editing, he was able to exploit and reinvent them into something that today is prevalent in every film, short or feature. Griffith's work nevertheless transformed the narrative system of film from its primitive to its classical mode. He was the first filmmaker to realize that the motion-picture medium, properly vested with technical vitality and seriousness of theme, could exercise enormous persuasive power over an audience, or even a nation, without recourse to print or human speech.
Griffith experimented with all the narrative techniques of his day. In his long career he crafted over 450 films and caused quite a bit of controversy. Today he is seen as one of the most hated and most revered men in cinematic history; however, there is no denying that he forever shaped the way we view cinema. Griffith's contributions to cinema are immense, and today we applaud him for shaping filmmaking into a superior art form.
He influenced the art of editing worldwide. The Moscow Film School of the 1920s, for example, played his Intolerance (1916) over and over again in order to use Griffith's techniques in the films of its students. [The Moscow Film School, the first real film school in the world, was founded as a propaganda device. Lenin knew early on that the cinema was going to be an important ideological tool for communicating ways of seeing the world.] Though the idea of putting together shots to forward theme as well as action—one way of seeing montage—had occurred to other filmmakers before Griffith and the early Soviets, Griffith made it a regular practice and the Russian filmmakers theorized its meaning. The first rigorous use of the term is by Soviet filmmakers like V.I. Pudovkin and Sergei Eisenstein, who saw montage principally as a useful tool for propaganda films. Montage was a way to put together a number of shots, more or less quickly, in a manner that pointed out a moral or an idea. In Charlie Chaplin's Modern Times (1936), a shot of a faceless, crowded group of men emerging from a subway on their way to work is followed by a shot of a herd of sheep being led to slaughter. There is one black ram in the middle of the herd. We immediately cut back to Charlie emerging in the midst of the crowd: the one black sheep in the fold.
One of the most notable of the Soviet directors of this era was Sergei Eisenstein, who transformed the principles of classical editing into something more consciously intellectualized that he called montage.
But even in feature filmmaking some directors chose to avoid the manipulation of reality that montage and heavy editing seemed to imply. In the silent era, some American comics such as Buster Keaton and Charlie Chaplin often relied on long takes in order to demonstrate that no special effects had been used and the acrobatics of the comedian were not camera tricks but dangerously real events.
In the 1930s, Jean Renoir's films were filled with shots of long duration. The best examples are probably Grand Illusion (La Grande Illusion, France, 1937) and Boudu Saved from Drowning (Boudu Sauvé des Eaux, France, 1932). The subsequent movements most associated with less emphasis on montage are Italian neorealism, the French Nouvelle Vague (New Wave), and cinéma vérité.
Editing today: Even in an era of incredibly advanced special effects, some filmmakers are still enamored of the photographic realism in sustained shots.
But the past 20 or so years have also seen the rise of "digital editing" (also called nonlinear editing), which makes any kind of editing easier. The notion of editing film on video originated when films were transferred to video for television viewing. Filmmakers then used video to edit their work more quickly and less expensively than they could on film. The task of cleanly splicing together video clips was later taken over by computers using advanced graphics programs that could also perform various special-effects functions. Finally, computers convert the digital images back into film or video. These digital cuts are a very far cry from Méliès's editing in the camera.
Editing in the digital era
Digital Video Is Lossless: First of all, no matter how often a digital file is copied, the copy is always identical to the original. This is not the case when you copy analog materials, such as VHS tapes. When you duplicate an analog tape, the copy has a slightly lower quality. If you repeat this process several times, and use the duplicate to make more copies, your videotape may become unwatchable. Repeated playback also increases the wear and tear on your original source tapes. Successive copies are called generations and the change in image quality is called generational loss.
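To make the generational contrast concrete, here is a minimal Python sketch. The 10 percent per-generation quality loss is an invented figure for illustration only, not a property of any real tape format; the point is simply that the analog number decays with each dub while the digital copy stays byte-for-byte identical.

    # Illustrative only: assume each analog dub retains ~90% of the
    # previous generation's quality, while a digital copy is exact.
    ANALOG_RETENTION = 0.90

    def analog_quality(generations: int) -> float:
        """Quality remaining after N analog tape-to-tape dubs (1.0 = original)."""
        return ANALOG_RETENTION ** generations

    def digital_copy(data: bytes) -> bytes:
        """A digital copy duplicates the source bytes exactly."""
        return bytes(data)

    original = b"stand-in for a video file"
    assert digital_copy(original) == original  # identical at any generation

    for gen in (1, 3, 5, 10):
        print(f"analog generation {gen}: {analog_quality(gen):.0%} of original quality")
    # generation 10 is down to about 35% -- the dub may be unwatchable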
Digital Video Editing Is Non-linear: Film editing began as a non-linear medium. In the early days of cinema, a film editor would cut and splice her film to edit it (most early editors were women), and therefore she could work in any order she wanted. Like today's digital editors, she had what is called random access to her material. She could always go back to any scene and change its length. Or if she wanted to rearrange a section in the middle, she could do it by splicing and taping the film back together, without affecting the rest.
Digital Video Editing Is Non-Destructive: What you are actually creating in your editing software is a virtual assembly. When you edit, you are not disturbing the video files themselves. You are only giving the computer instructions for what to do with the media files stored on your hard drive. This is a fundamental difference from analog systems, which produce what is, rather dramatically termed, a destructive assembly of film.
Every time that you look at a sequence in your video editing system, the images are instantaneously assembled for you as you watch. If you want to do something completely different with a scene, your edits will only change the instructions. In a mechanical or an analog system, you have to undo version A before you can create version B (destroying version A in the process).
In non-destructive editing, the clips within your video project are pointers to your captured source files, not the actual source files themselves. A video timeline is comparable to a musical score. Just as the sheet music refers to instruments and indicates when they should play, the project refers to media files and when they should play.
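A minimal sketch of this idea in Python (the class and file names are hypothetical, invented for illustration): each clip is only a pointer, a file path plus in- and out-points, and re-editing changes the instructions rather than the media.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        """A pointer into a captured source file -- not the media itself."""
        source_path: str   # where the untouched media file lives on disk
        in_point: float    # seconds into the source where the clip starts
        out_point: float   # seconds into the source where the clip ends

    # The "virtual assembly": an ordered list of instructions, like a score.
    timeline = [
        Clip("media/interview_take2.mov", in_point=12.0, out_point=19.5),
        Clip("media/broll_street.mov",    in_point=3.0,  out_point=8.0),
    ]

    # Re-editing is non-destructive: only the instructions change.
    timeline[0].out_point = 17.0  # trim the first shot
    timeline.insert(1, Clip("media/cutaway_hands.mov", 1.0, 3.5))

    for clip in timeline:
        print(f"play {clip.source_path} from {clip.in_point}s to {clip.out_point}s")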
The Digital Editor's Workflow: Besides cutting or trimming clips, editing involves importing (or capturing) files and exporting them (which most of the time involves mastering to tape). The editing workflow can be summarized in these three basic steps (a short command-line sketch follows the list):
1. Input. This usually consists of capturing or digitizing, in other words loading video material from source tapes into the computer and/or importing material (such as already digitized files, like sound files or graphics made in other programs) into a project.
2. Editing. This is where the material is categorized. The shots are cleaned up and organized to create sequences to be used as building blocks and assembled in a final video sequence. Also, in this stage, sound will be edited and mixed, and transitions and titles will be added to create a finalized video piece.
3. Output. After the edit is done, you have a number of options for outputting your final sequence. These range from transferring to tape, to exporting a QuickTime movie for uploading to the Web or for burning onto a DVD.
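One possible way to mirror these three steps from the command line is sketched below in Python, driving the free ffmpeg tool (assumed to be installed). The file names and timings are invented placeholders, and a real project would normally use an editing application rather than hand-written commands.

    # A compressed sketch of input -> edit -> output using the ffmpeg CLI.
    import subprocess

    # 1. Input: source material already digitized as files on disk.
    sources = ["capture/tape01.mov", "capture/tape02.mov"]

    # 2. Editing: trim each source to the wanted section (stream copy, no re-encode,
    #    so cuts land on keyframes -- good enough for a rough assembly).
    subprocess.run(["ffmpeg", "-y", "-i", sources[0], "-ss", "5", "-to", "12",
                    "-c", "copy", "clip1.mov"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", sources[1], "-ss", "40", "-to", "48",
                    "-c", "copy", "clip2.mov"], check=True)

    # 3. Output: assemble the clips in running order and export one file.
    with open("list.txt", "w") as f:
        f.write("file 'clip1.mov'\nfile 'clip2.mov'\n")
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "list.txt",
                    "-c", "copy", "final.mov"], check=True)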
To summarise, digital editing makes the process faster, less expensive, and less risky (mistakes can be undone), and it invites experimentation with visual effects and CGI.

Editing goals
Basically, editing or postproduction is the process of combining individual shots in a specific order. It has several purposes:
  • To assemble material in a sequential fashion. The shooting order may differ from the running order.

  • To correct mistakes by editing them out or by covering them with other footage. 

  • To create, enhance, embellish, and bring to life images and events that were once captured live. Tools such as visual effects, sound effects, and music can give the story more drama, thus more impact on the audience. 

Shooting order vs. running order
During the production process, when possible, events are usually shot in the order that is most convenient or practical, and then the takes are joined together during the editing process so that they appear consecutive. The eventual "running order" may be very different from the order in which the scenes were shot (the "shooting order"). Some of the various shooting situations follow:
  • Sometimes the action is shot from start to finish, such as might occur if you are shooting someone who is blowing a glass vase.
  • Only sections of the total action may be deliberately shot, omitting unwanted action. 

  • The action may be repeated so that it can be shot from various positions. 

  • All of the action at one location may be shot before going on to the next 
location, although the script may cut between them. 

  • A series of similar subjects may be shot that have reached different 
stages. For example, shots of various newborn foals, yearlings, colts, and aging horses can be edited together to imply the life cycle of a specific horse. 

Editing video and audio
  • Splicing - The original video edit technique included cutting and splicing segments of the videotape together. However, the edits were physically hard on the VCR’s delicate heads and did not look good on the television screen. This method was short-lived.
  • Linear editing - Next, editing moved on to the process of linear "dubbing" or copying the master tape to another tape in a sequential order. This worked well for editors until the director or client wanted significant changes to be made in the middle of a tape. With a linear tape, that usually meant that the whole project had to be entirely reedited, which was incredibly time-consuming and frustrating. Linear editing also did not work well if multiple generations (copies of copies) of the tape had to be made, because each generation deteriorated a little more. Linear systems are generally made up of a "player" and a "record" VCR along with a control console. The original footage is placed into the player and then is edited to the recorder. Although some segments of the television industry are still using linear editing, the majority of programming today is edited on a nonlinear editor.
  • Non-linear editing: Today almost all video and television programs are edited on a nonlinear editor. Nonlinear editing is the process whereby the recorded video is digitized (copied) onto a computer. Then the footage can be arranged or rearranged, special effects can be added, and the audio and graphics can be adjusted using editing software. Nonlinear editing systems make it easy to make changes, moving clips around until the director or client is happy. Hard disk and memory card cameras have allowed editors to begin editing much faster because they do not need to digitize all of the footage. Nonlinear systems cost a fraction of the price of a professional linear editing system. Once the edited project is complete, it can be output to whatever medium is desired: tape, Internet, iPod, CD, DVD, and so on.
Editing modes: Online and Offline editing
Offline editing is a rough or draft cut of the project, made by editing low-quality footage together so that the main editor, and possibly the director, can get ideas for the final cut. Another role of the offline editor is to create an edit decision list (EDL), which is similar to a log sheet (a list of shots).
This is very important because once the offline editor has produced a list of the shots used in the rough cut, the online editor can follow it and make changes to produce the final cut. Offline editors can also make creative decisions: shots, cuts, dissolves, fades, and so on.
Online editing produces the final cut of the project by editing the high-quality footage together. Online editors reconstruct the final cut based on the EDL created by the offline editors. They also add visual effects and lower-third titles and apply colour correction.

The reason offline editing is done first is that it is much cheaper to work in over a long period than online editing.
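At its core an EDL is a plain-text list of events, each mapping a source in/out point on a reel to a record in/out point on the master. The Python sketch below prints lines loosely modeled on the common CMX 3600 layout; the frame rate, reel names, and exact column spacing are assumptions for illustration, not a spec.

    FPS = 25  # assumed frame rate for the timecode arithmetic

    def tc(total_frames: int) -> str:
        """Frames -> HH:MM:SS:FF timecode string."""
        ff = total_frames % FPS
        ss = (total_frames // FPS) % 60
        mm = (total_frames // (FPS * 60)) % 60
        hh = total_frames // (FPS * 3600)
        return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

    # (reel, source-in, source-out) in frames, decided during the offline edit
    cuts = [("TAPE01", 125, 300), ("TAPE02", 1000, 1175), ("TAPE01", 450, 500)]

    record_in = 0
    for event, (reel, src_in, src_out) in enumerate(cuts, start=1):
        duration = src_out - src_in
        print(f"{event:03d}  {reel:8s} V     C        "
              f"{tc(src_in)} {tc(src_out)} {tc(record_in)} {tc(record_in + duration)}")
        record_in += duration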
  • Linear off- and on-line editing: Linear off-line editing is done to give you a rough idea of how the intended shot sequence looks and feels. It is a sketch, not the final painting. Even skilled editors like to do an off-line edit to check the rhythm of the shot sequence, decide on various transitions and effects, and get some idea of the audio requirements. Linear off-line editing is usually done with low-end equipment. You could even use two VHS recorders for an off-line rough-cut: one feeds the source tapes, the other records the selected shots in the desired sequence (see figure 13.2). Never mind the sloppy transitions or audio—all you want to see is whether the sequences make sense, that is, tell the intended story. If you do a preliminary edit for a client, of course, the off-line edit should look as good as you can possibly make it, so VHS machines will no longer suffice. The most valuable by-product of off-line editing is a final edit decision list (EDL) that you can then use for on-line editing.
  • Non-linear off- and on-line editing: In nonlinear editing off-line means that you capture the selected shots in low-resolution video and use them for your rough-cut. The reason for importing the video in low-resolution is to save storage space and processing time. Even though you can run the edited low-resolution version from beginning to end, your final aim is actually an accurate EDL. When editing the on-line version, you redigitize the selected clips in high resolution and sequence them according to the EDL. This procedure makes little sense if you're editing a relatively short piece. If you kept a fairly accurate VTR log, you can capture the selected clips in high-resolution without straining your hard drive. Then every time you try out a particular editing sequence, your editing is on-line even though your intentions may be to do just a rough-cut. As you can see, this is one of the huge advantages of nonlinear editing.

When Would I Use Offline Editing?
Offline editing is used when you're working with large, high-resolution video files. The file size of raw (uncompressed) footage can be astronomical, and it will tax your computer system if it isn't equipped to handle such a workload. Transcoding that footage down into a lower-resolution format can speed up the editing process considerably.
The essence of offline film/video editing has been around nearly since the dawn of film. In the days of celluloid film stock and videotape, editors would make copies of the originals, called work prints. These prints were then used to develop the edit of the film, preserving the original print.
As we moved into the digital age of editing in the early-to-mid-1990s, offline editing gained traction, and editors began making duplicate copies of the master files for editing, which once again preserved the originals. As film resolutions and file sizes grew and transcoding became easier, editors would duplicate the footage at a lower resolution to speed up the actual editing process, bringing us to the current practice.
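That duplication step is now usually a batch transcode. Here is a hedged sketch in Python, again assuming the ffmpeg tool is installed: every camera master in a folder is copied down to a small proxy file for offline editing. The folder names, codec, and quality settings are arbitrary examples, not recommendations.

    # Make low-resolution proxies of camera masters for offline editing.
    import pathlib
    import subprocess

    masters = pathlib.Path("masters")
    proxies = pathlib.Path("proxies")
    proxies.mkdir(exist_ok=True)

    for src in masters.glob("*.mov"):
        dst = proxies / (src.stem + "_proxy.mp4")
        subprocess.run([
            "ffmpeg", "-y", "-i", str(src),
            "-vf", "scale=960:-2",            # shrink to 960 px wide, keep aspect
            "-c:v", "libx264", "-crf", "28",  # much heavier compression than master
            "-c:a", "aac",
            str(dst),
        ], check=True)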
Logging: An important but often neglected aspect of the production process is logging the recorded material. Logging saves time during the actual editing because it can be completed before the edit session.
Basic Editing Systems
As explained earlier, all tape-based recording systems are linear, and all disk-based systems are nonlinear. Similarly, all editing systems using videotape are linear, regardless of whether the information recorded on the tape is analog or digital, and all editing systems that are disk-based are nonlinear. What exactly does this mean from a production point of view? Let's look at how recorded information is retrieved.
  • Linear systems: To locate shot 25 on a videotape, you need to roll through the previous twenty-four shots before reaching it; you cannot simply jump to shot 25, skipping all preceding shots. Having to roll through all the preceding shots is a linear—one-after-the-other—process. All tape-based editing systems are therefore called linear, regardless of whether the tapes contain analog or digital signals.
  • Non-linear systems: When information is stored on a disk-based editing system, you can jump to shot 25 directly without rolling through the preceding twenty-four shots. Being able to access any specific shot or frame in random order is a nonlinear process. All disk-based systems are, therefore, called nonlinear. Because they are computer-driven, they can operate only with digital signals. In effect, the nonlinear editing system operates like a large ESS (electronic still store) system that allows you to identify and access each frame or frame sequence in a fraction of a second. Because the system is nonlinear, it can display any two or more frames side-by-side on a single computer screen so you can see how well the shots will edit together.
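The operational difference fits in a few lines of Python. The shot list below is purely illustrative: a tape behaves like a stream you must read through in order, while a disk behaves like a list you can index directly.

    shots = [f"shot_{n:02d}" for n in range(1, 31)]  # 30 recorded shots

    def fetch_linear(tape, wanted_index):
        """Tape-style access: roll past every preceding shot first."""
        rolled_past = 0
        for i, shot in enumerate(tape):
            if i == wanted_index:
                return shot, rolled_past
            rolled_past += 1

    def fetch_nonlinear(disk, wanted_index):
        """Disk-style access: jump straight to the shot."""
        return disk[wanted_index]

    shot, rolled = fetch_linear(shots, 24)
    print(f"linear: reached {shot} after rolling past {rolled} shots")
    print(f"nonlinear: jumped straight to {fetch_nonlinear(shots, 24)}")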
Editing principles
This big operational difference between the two systems has changed the fundamental concept of how editing works. Linear editing is basically selecting shots from one tape and copying them in a specific order onto another tape. The operational principle of linear editing is copying.
Nonlinear editing (NLE) allows you to select and rearrange frames and shots. Rather than copy certain images (as in linear editing), you sort through the image files and mark them to play back in a specific order. The operational principle of nonlinear editing is selecting video and audio data files and making the computer play them back as a specific sequence.
To give your editing direction and make your sequencing choices less arbitrary, you need to know the purpose of the show and the specific context of the event you are to re-create through editing.
Nonlinear editing acts to supply structure to a number of shots, initially represented as individual stills, which you then run as clips. You probably noticed that this is the exact opposite of linear editing, where you start out with running sequences and then freeze particular frames that mark the edit points.
The shot selection is based primarily on story continuity. Equally important aspects of editing concern complexity editing and preserving context. You should realize that all editing principles—including aesthetic ones—are conventions and not absolutes. They work well under most circumstances and are a basic part of the visual literacy of most television viewers and production personnel. Depending on the event context and the communication aim, some of the "do's" of editing may easily become the "don'ts" and vice versa.
Continuity editing
Continuity editing refers to the achievement of story continuity despite the fact that great chunks of the story are actually missing, and to assembling the shots in such a way that viewers are largely unaware of the edits. Specifically, you need to observe these aesthetic factors: (1) subject identification, (2) the mental map, (3) vectors, (4) movement, (5) colour, and (6) sound.
  1. Subject Identification: The viewer should be able to recognize a subject or an object from one shot to the next. Therefore, avoid editing between shots with extreme changes in distance (see figure 13.24). If you cannot maintain visual continuity for identification, bridge the gap by telling the viewer that the shot is, indeed, the same person or thing.
    Despite what was just noted, trying to edit together shots that are too similar can lead to even worse trouble—the jump cut. This occurs when you edit shots that are identical in subject yet slightly different in screen location; the subject seems to jerk from one screen location to another as if pushed by an unseen force. To avoid a jump cut, try to find a succeeding shot that shows the object from a different angle or field of view, or insert a cutaway shot.
  2. Mental map: Because television has a relatively small screen, we normally see little of a total scene in the on-screen space. Rather, the many close-ups suggest, or should suggest, that the event continues in the off-screen space. What you show in the on-screen space defines the off-screen space as well. For example, if you show person A looking screen-right in a close-up, obviously talking to an off-screen person (B), the viewer would expect person B to look screen-left in a subsequent close-up. What you have done—quite unconsciously—is help the viewer construct a mental map that puts people and places in a logical position, regardless of whether they are in on-screen or off-screen space.
    Continuity editing is little more than using graphic, index, and motion vectors in the source material to establish or maintain the viewer's mental map. If you were to apply the vectors to the example of on-screen person A talking to off-screen person B, the screen-right index vector of A needs to be edited to the screen-left index vector of B. Although the index vectors of the two persons are converging in off-screen space, they indicate that A and B are talking with each other rather than away from each other. Maintaining screen positions is especially important in over-the-shoulder shots. If, for example, you show a reporter interviewing somebody in an over-the-shoulder two-shot, the viewer's mental map expects the two people to remain in their relative screen positions and not switch places during a reverse-angle shot.
    One important aid in maintaining the viewer's mental map and keeping the subjects in the expected screen space in reverse-angle shooting is the vector line. The vector line (also called the line, the line of conversation and action, or the one-eighty) is an extension of converging index vectors or of a motion vector in the direction of object travel.
    When doing reverse-angle switching from camera 1 to camera 2, you need to position the cameras on the same side of the vector line. Crossing the line with one of the two cameras will switch the subjects' screen positions and make them appear to be playing musical chairs, thus upsetting the mental map.
  3. Vector line: The vector line is formed by extending converging index vectors or a motion vector. Crossing the motion vector line with cameras (placing cameras on opposite sides of a moving object) will reverse the direction of object motion every time you cut, and you will also see the opposite camera in the background. To continue a screen-left or screen-right object motion, you must keep both cameras on the same side of the vector line; a small geometric sketch of this check appears after this list.
  4. Movement: When editing, or cutting an action with a switcher, try to continue the action as much as possible from shot to shot. The following discussion covers some of the major points to keep in mind. To preserve motion continuity, cut during the motion of the subject, not before or after it. For example, if you have a close-up of a man preparing to rise from a chair, cut to a wider shot just after he has started to rise but before he finishes the movement. Or, if you have the choice, you can let him almost finish the action on the close-up (even if he goes out of the frame temporarily) before cutting to the wider shot. But do not wait until he has finished getting up before going to the wider shot.
    If one shot contains a moving object, do not follow it with a shot that shows the object stationary. Similarly, if you follow a moving object in one shot with a camera pan, do not cut to a stationary camera in the next shot. Equally jarring would be a cut from a stationary object to a moving one. You need to have the subject or camera move in both the preceding and the subsequent shots.
  5. Colour: One of the most serious continuity problems occurs when colours in the same scene don't match. For example, if the script for an EFP calls for an exterior MS (medium shot) of a white building followed by an MS of somebody walking to the front of the same building, the building should not suddenly turn blue. As obvious as such a discrepancy may be, colour continuity is not always easy to maintain, even if you are careful to white-balance the cameras for each new location and lighting situation. What can throw you off are lighting changes you may not notice in the fervour of production. For example, the temporary blocking of the sun by some clouds can drastically influence the colour temperature, as can the highly polished red paint of a car reflecting onto the white shirt of a person standing next to it.
    The more attention you pay to white-balancing the camera to the prevailing colour temperature of the lighting, the easier it is to maintain colour continuity in postproduction. As mentioned before, any type of colour correction in postproduction is difficult and time-consuming.
  6. Sound: When editing dialogue or commentary, take extra care to preserve the general rhythm of the speech. The pauses between shots of a continuing conversation should be neither much shorter nor much longer than the ones in the unedited version. In an interview the cut (edit or switcher-activated) usually occurs at the end of a question or an answer. Reaction shots, however, are often smoother when they occur during, rather than at the end of, phrases or sentences. But note that action is generally a stronger motivation for a cut than dialogue. If somebody moves during the conversation, you must cut on the move, even if the other person is still in the middle of a statement.
    Ambient (background) sounds are very important in maintaining editing continuity. If the background noise acts as environmental sound, giving clues to where the event takes place, you need to maintain these sounds throughout the scene, even if it was built from shots actually taken from different angles and at different times. You may have to supply this continuity by mixing in additional sounds in the postproduction sweetening sessions.
    When editing video to music, try to cut with the beat. Cuts determine the beat of the visual sequence and keep the action rhythmically tight, much as the bars measure divisions in music. If the general rhythm of the music is casual or flowing, dissolves are usually more appropriate than hard cuts. But do not be a slave to this convention. Cutting "around the beat" (slightly earlier or later than the beat) on occasion can make the cutting rhythm less mechanical and intensify the scene.
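The vector line lends itself to the small geometric check promised in item 3. In this hypothetical Python sketch, the line runs through subjects A and B on a floor plan, and the sign of a 2D cross product tells you which side of that line each camera occupies; cutting between cameras on opposite sides is what flip-flops the screen positions.

    # Check whether two cameras sit on the same side of the vector line
    # through subjects A and B (positions in arbitrary floor-plan units).

    def side_of_line(a, b, p):
        """Sign of the cross product AB x AP: >0 one side, <0 the other."""
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def safe_to_cut(a, b, cam1, cam2):
        """True if both cameras are on the same side of the vector line."""
        return side_of_line(a, b, cam1) * side_of_line(a, b, cam2) > 0

    A, B = (0.0, 0.0), (4.0, 0.0)        # two people talking to each other
    cam1, cam2 = (1.0, 3.0), (3.0, 2.0)  # same side of the line: safe
    cam3 = (2.0, -2.0)                   # crosses the line

    print(safe_to_cut(A, B, cam1, cam2))  # True
    print(safe_to_cut(A, B, cam1, cam3))  # False: would upset the mental map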
Complexity editing
Complexity editing is a deliberate break with editing conventions to increase the complexity and intensity of a scene. Your selection and sequencing of shots is no longer guided by the need to maintain visual and aural continuity but by ways of getting and keeping the viewers' attention and increasing their emotional involvement. Complexity editing does not mean that you should flout the rules of continuity editing but rather that you may deliberately break some of them to intensify your communication intent.
Many commercials use complexity editing to make us sit up and take notice. Even the jump cut has gained prominence as an aesthetic intensifier. You have undoubtedly seen the erratic editing that makes a person jump from one screen location to the next, even when he is only talking about the virtues of a credit card. Much of music television (MTV) editing is based on the complexity principle. Although hardly necessary, the jarring discontinuity of shots further intensifies the high energy of the music.
Complexity editing is also an effective intensification device in television plays. For example, to capture the extreme confusion of a person driven to the point of a breakdown, you may want to cross the vector line with the cameras to show the person in a quick series of flip-flop shots.
Context
In all types of editing, but especially when editing EFP news stories and documentaries, you must preserve the true context in which the main event took place. Assume that the news footage of a speech by a local political candidate contains a funny close-up of an audience member sound asleep. But when you screen the rest of the footage, you discover that all the other audience members were not only wide awake but quite stimulated by the candidate's remarks. Are you going to use the close-up? Of course not. The person asleep was in no way representative of the overall context in which the event—the speech—took place.
You must be especially careful when using stock shots in editing. A stock shot depicts a common occurrence—clouds, beach scenes, snow falling, traffic, crowds—that can be applied in a variety of contexts because its qualities are typical. Some television stations either subscribe to a stock-shot library or maintain their own collections.
Here are two examples of using stock shots in editing: When editing the speech by the political candidate, you find that you need a cutaway to maintain continuity during a change in screen direction. You have a stock shot of a news photographer. Can you use it? Yes, because a news photographer certainly fits into the actual event context. But should you use a stock shot of the audience happily clapping after the candidate reads the grim statistics of Labor Day traffic accidents, just to preserve visual continuity? Definitely not. The smiling faces of the audience are certainly out of place in this context.
Transition Devices
Whenever you put two shots together, you need a transition—a device that implies that the two shots are related. There are four basic transition devices: (1) the cut, (2) the dissolve, (3) the wipe, and (4) the fade. In addition, there are countless special effects available that can serve as transitions. Examples are flips, page turns, or fly effects. Although they all have the same basic purpose—to provide an acceptable link between shots—they differ in function, that is, how we are to perceive the transition in a shot sequence.
  • Cut: The cut is the most common type of video transition. It simply means replacing one shot instantly with the next. When you shoot video footage on your camera, there is a cut between each shot, i.e. between when you stop recording and start recording the next shot. Although some cameras do offer built-in transitions, most recorded footage is separated by cuts. In video editing and live switching, cuts are fast and efficient. Once a scene has been established, cuts are the best way to keep the action rolling at a good pace. Other types of transition can slow the pace or even be distracting. Of course there are some situations where fancier transitions are in order. Certain genres of television, for example, rely on a variety of transitions. Even in these productions though, notice how many transitions are still simple cuts. A common mistake amongst amateurs is to shun the cut in favour of showiness, adding wipes and effects between every shot. Learn to avoid temptation and stick to the basics. The video shots are what the audience wants to see, not how many transitions your editing program can do.
  • Dissolve: A dissolve overlaps two shots or scenes, gradually transitioning from one to the other. It's usually used at the end of one scene and the beginning of the next and can show that two narratives or scenes are linked. Dissolves can be used to show time passing, or to move from one location to another. A quick dissolve might show that the next scene occurs a few minutes or hours later, while a long dissolve might signal a gap of months or years between the scenes. (A numeric sketch of a cross-dissolve follows this list.)
  • Wipes: A wipe is when a shot travels from one side of the frame to the other, replacing the previous scene. Wipes are often used to transition between storylines taking place in different locations, and/or to establish tension or conflict.
  • Fade In/Out: A fade is when the scene gradually turns to a single color — usually black or white — or when a scene gradually appears on screen. Fade-ins occur at the beginning of a film or scene, while fade-outs are at the end. A fade to black — the most common transition type — is a dramatic transition that often symbolizes the passage of time or signifies completion. Fading to black is used to move from a dramatic or emotional scene into another scene, or to the credits at the end of a film.
  • Digital Effect Transitions: Most editing applications offer a large selection of digital transitions with various effects. There are too many to list here, but these effects include colour replacement, animated effects, pixelization, focus drops, lighting effects, etc. Many cameras also include digital effects, but if possible it is better to add these in post-production.
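Arithmetically, the dissolve promised above is just a weighted average of the outgoing and incoming frames. Here is a minimal NumPy sketch, with frame sizes and the one-second duration invented for illustration:

    import numpy as np

    def dissolve(frame_a, frame_b, t):
        """Blend outgoing frame_a into incoming frame_b; t runs 0.0 -> 1.0."""
        mix = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
        return mix.astype(np.uint8)

    h, w = 480, 640
    shot_a = np.full((h, w, 3), 200, dtype=np.uint8)  # stand-in: bright shot
    shot_b = np.full((h, w, 3), 30, dtype=np.uint8)   # stand-in: dark shot

    frames = 30  # a one-second dissolve at 30 fps
    sequence = [dissolve(shot_a, shot_b, i / (frames - 1)) for i in range(frames)]
    print(sequence[0][0, 0], sequence[-1][0, 0])  # [200 200 200] -> [30 30 30]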

Editing Different Genres
  • Editing drama: When editing a drama, you're likely to be working from a script and sometimes a storyboard. The scenes will have a purpose, with fairly fixed action or dialogue. Your job is to look at what was actually filmed and see if you can make it work in the way it was intended. If it doesn't, forget about the script and make the best of what you have.
  • Editing documentary: Usually with a documentary, the structure is not so fixed. Though there may be a basic structure, a voice-over or an outline, there will be lots of sequences that couldn't be planned. Don’t try to start at the beginning and continue to the end. Make up stand-alone sequences. This is where your scene cards really come into play. Then gradually put your separate sequences together to make sense of your story.
    You will also have to go through a lot of interviews. These are sometimes transcribed and printed for ease of reference. This allows you to do a paper edit, a little like using scene cards. Any tools like these can prove invaluable. Quite often you will use interviews to tell your story, but eventually will cover the interviewee shots with other footage that shows the subject better. But look out for any time that the interviewee expresses real emotion, as sometimes having these moments on screen can be very powerful, adding the ‘human’ element to the film.
    Sometimes there is up to one hundred times more footage than will appear in the final film. The key to working successfully on documentary is in the management of the footage. For ease of filing and finding, clips can be duplicated and live in more than one folder. For instance, a clip might be in the exterior house folder, but also be attached to a character’s folder. With a documentary film the editor functions as the shaper of the story as well as its visual style and sound design.
  • Editing animation: Animation works in the opposite way to documentary: because it takes so long to create, very little will have been shot that doesn't end up in the film. Indeed, the problem with animation can be not having enough of anything. Be inventive, learn about loops, holds, and ping pongs, which can save your film. Frequently, the entire soundtrack is recorded before the animation, and the pictures are then created (and timed) to fit. The challenge is then to get the pictures perfectly aligned with the sound. If the pictures are recorded first, putting them together shouldn't take too long. The main challenge in this case is to fit the voices and effects to all the actions.
  • Cutting commercials, promos, trailers, and PSAs: Commercial editors work under pressure, usually with about a day to cut a 30- to 60-second spot. The goal is to put together compelling images, text, and story in a very small amount of time, and to convince viewers to spend money or time on a product.
  • Reality TV: In reality TV there is a lot of useless footage, which becomes an issue because editors have to look through hours of material for the good moments. There is no room for slacking, and editors also have to cut out bleeps. There is also cheating on these shows in the form of what are called "frankenbites" (new lines pieced together from several separate statements). Why does this work? Because the viewer is paying attention to the continuity of the story. In short, a frankenbite condenses statements to make them quicker, cleaner, and more concise. Editors can also do a lot to dress up a scene with fancy wipes, split screens, and speed-ups; these are the toys that keep a viewer's attention.
    • A frankenbite allows editors to manufacture "story" efficiently and dramatically by extracting the salient elements of a lengthy, nuanced interview or exchange into a seemingly blunt, revealing confession or argument.
  • Comedy: A typical example is cutting to reactions after the joke. When editing multi-camera comedy, use the longest continuous piece of audio; with synced camera footage, you cut on the fly from one camera to the next. Remember to cut in laughs from the audience. When editing single-camera comedy, the editor has the ability to create timing and build character. Remember: if you can cut comedy, you can cut anything, because you understand timing, characters, reaction shots, and how to start, build, sustain, and end laughs and scenes.
  • Cutting news: Download the material (titles, footage, and so on) for each story and cut it to the running order of the show. With news, stories depend on framing from the camera and catching the atmosphere. The goal is to edit in a way that brings understanding to a community.
  • Music videos: The goal when editing a music video is to make a promotional documentary of a song; everything comes down to rhythm, beat, and timing.

Computer graphics and animation techniques
  • A superimposition, or super for short, is a form of double exposure. The picture from one video source is electronically superimposed over the picture from another. More often supers are used for creating the effects of inner events—thoughts, dreams, or processes of imagination. The traditional (albeit overused) super of a dream sequence shows a close-up of a sleeping person, with images superimposed over his or her face. Sometimes supers are used to make an event more complex. For example, you may want to super a close-up of a dancer over a long shot of the same dancer. If the effect is done properly, we are given new insight into the dance. You are no longer photographing a dance but helping create it.
  • Keying means using an electronic signal to cut out portions of a television picture and fill them in with various colors or portions of another image. The basic purpose of a key is to add titles to a background picture or to cut another picture (the image of a weathercaster) into the background picture (the satellite weather map). Lettering of the title is generally supplied by a character generator (CG).
  • Chroma keying is a special effect that uses a specific color (chroma), usually blue or green, as the backdrop for the person or object that is to appear in front of the background scene. During the key the blue or green backdrop is replaced by the background video source without affecting the foreground object. A typical example is the weathercaster standing in front of a weather map or a satellite picture. During the chroma key, the computer-generated weather map or satellite image replaces all blue or green areas—but not the weathercaster. The key effect makes the weathercaster appear to be standing in front of the weather map or satellite image. (A minimal array sketch of this appears after this list.)
  • In a wipe, a second image in some geometric shape gradually replaces parts or all of the first (on-air) image. Although, technically, the second picture gradually overlaps the first in some geometric fashion, perceptually it looks as though the second image wipes the first image off the screen. The two simplest wipes are the vertical and the horizontal. A vertical wipe gives the same effect as pulling down a window shade over the screen.
  • The more common DVE (digital video effects) used in production are prerecorded manipulations of image size, shape, light, and color (shrinking and expanding, stretching, positioning and point of view, perspective, mosaic, and posterization and solarization); motion (slide and peel effects, snapshots, and rotation, bounce, fly, and cube-spin effects); and multi-images (secondary frame and echo effects).
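A crude chroma key, as promised above, reduces to simple array operations: build a mask of pixels close to the key colour and substitute the background there. The NumPy sketch below uses a naive colour-distance threshold on invented test images; real keyers work in smarter colour spaces and soften the matte edges.

    import numpy as np

    def chroma_key(foreground, background, key=(0, 255, 0), threshold=90.0):
        """Replace pixels near the key colour with the background image."""
        diff = foreground.astype(np.float32) - np.array(key, dtype=np.float32)
        distance = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel colour distance
        mask = distance < threshold                   # True where backdrop shows
        out = foreground.copy()
        out[mask] = background[mask]                  # key in the weather map
        return out

    h, w = 480, 640
    studio = np.zeros((h, w, 3), dtype=np.uint8)
    studio[:] = (0, 255, 0)                       # green backdrop everywhere...
    studio[100:380, 200:440] = (180, 150, 120)    # ...except the weathercaster
    weather_map = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)

    composite = chroma_key(studio, weather_map)   # presenter over the map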