The promise of fully automated video production is achieving mesmerizing shots and sequences simply by identifying the task at hand and placing cameras in the right spots for it. The technology is progressing so rapidly that this vision no longer has to be a lazy cameraman’s daydream. Preparation and documentation are half the work, and they provide the best starting point for developing a smart shot composer.
Much of a cameraman’s work is waiting around, highly alert, for the perfect moment. Often, once the footage is in the can, you are already rushing to the next location. Planning eases the pain, but most of these activities have become second nature. The problem with habits is not just that they are hard to get rid of; they are also hard to notice and describe. Much of the challenge of automated shot composition lies in fully understanding how a great cameraman works.
The basics of shot composition are covered in every art school. Well-documented aesthetic principles like the rule of thirds form the foundation our visual environment has been built upon. Rules that are more technical than artistic are also taught: give lead room for action, leave headroom above the subject, and leave space for an object’s eyes to wander. All sound advice and a great help for a beginner, but also quite feasible to implement with object recognition algorithms.
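As a rough illustration of how mechanical these rules are, the sketch below computes where a detected subject should sit in the frame. The function names and the choice of power points are my own assumptions, not a standard API; a real system would feed in bounding boxes from an object detector.

```python
# A minimal sketch (hypothetical names) of applying the rule of thirds,
# headroom, and lead room to a detected subject's bounding-box centre.

def thirds_target(frame_w, frame_h, facing_right):
    """Return the (x, y) power point where the subject should sit.

    Places the subject on the third opposite its facing direction, so the
    lead room opens up in front of it; eyes near the upper third keeps
    comfortable headroom.
    """
    x = frame_w / 3 if facing_right else 2 * frame_w / 3
    y = frame_h / 3
    return x, y

def reframe_offset(subject_cx, subject_cy, frame_w, frame_h, facing_right):
    """Pan/tilt offset (in pixels) that moves the subject onto the target."""
    tx, ty = thirds_target(frame_w, frame_h, facing_right)
    return tx - subject_cx, ty - subject_cy

# Example: a subject dead centre in a 1920x1080 frame, facing right.
dx, dy = reframe_offset(960, 540, 1920, 1080, facing_right=True)
# Negative dx, dy: shift the framing left and up to open lead room.
```

The same offsets could drive a pan-tilt head directly or a digital crop in post.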
Video’s linear dimension, time, adds further possibilities for playing with rhythm within the picture. Each moving visual element evokes a different response in the viewer; the hard part is knowing which effect is worth achieving. The big question is whether the camera crew should be concerned with these larger questions or only with the composition of the picture. Small productions do not have the luxury of a director of photography, and even in large ones the flow of production rests mostly on trust between the director and the cameraman.
Solving these conundrums is the tricky bit. The best bet is to look around at other industries and benchmark, especially creative industries that have built well-tuned production flows. One very lucrative example is video games: thousands of hours have been poured into building the game engines that tune up those productions.
A game engine essentially describes what the player sees as the script progresses. The most exciting part is the mathematically coded rule set: when, how, and what kinds of camera angles are used. The end results have become quite cinematic, with beautiful effects used to achieve stunning pictures.
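A toy version of such a rule set might map scene state to a shot choice, the way a game’s virtual camera director does. The event names and shot vocabulary below are invented for illustration; real engines encode far richer state and smooth transitions between shots.

```python
# A toy sketch of a game-engine-style camera rule set: scene state in,
# shot choice out. First matching rule wins; rules are ordered from most
# specific to most general. All names here are illustrative assumptions.

SHOT_RULES = [
    (lambda s: s["action"] == "dialogue" and s.get("speakers") == 2,
     "over-the-shoulder"),
    (lambda s: s["action"] == "dialogue",
     "medium close-up"),
    (lambda s: s["action"] == "combat" and s.get("distance", 0) > 20,
     "wide tracking"),
    (lambda s: s["action"] == "combat",
     "handheld close"),
]

def pick_shot(state, default="establishing wide"):
    """Return the first shot whose rule matches the scene state."""
    for condition, shot in SHOT_RULES:
        if condition(state):
            return shot
    return default

print(pick_shot({"action": "dialogue", "speakers": 2}))  # over-the-shoulder
print(pick_shot({"action": "combat", "distance": 35}))   # wide tracking
```

Ordering the rules from specific to general keeps the logic readable and makes it easy to add new shot types without touching existing ones.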
From an automated shot composition point of view, game engines are a great starting point for study, but some adjustments have to be made, since games are fully controlled environments. Rehearsed shoots are already planned, and even partly produced, using 3D techniques. Motion-controlled camera rigs and the merging of bluescreen footage with virtual studios are in use today; action sequences with special effects no longer have alien ships strapped to wires.
The technology has already come a long way, but the unplanned obstacles of real life remain. Live shoots have quite different requirements, so more adaptive logic is needed. Since the necessary real-time computation capacity is not yet available, offline solutions have to be built. Camera sensors are getting better, so the composition can soon be done, or at least perfected, afterwards. Start by shooting at the highest possible resolution with a wide lens, from as far away as possible.
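The "compose later" workflow above amounts to cropping a delivery frame out of an oversampled wide shot. A minimal sketch, assuming a 4K source and HD delivery, with the crop centred on the subject and clamped so it never leaves the source frame:

```python
# A sketch of offline reframing: pick a delivery-sized crop out of a
# high-resolution wide shot, centred on the detected subject. Function
# and parameter names are illustrative assumptions.

def crop_window(src_w, src_h, out_w, out_h, subject_cx, subject_cy):
    """Top-left corner of an out_w x out_h crop centred on the subject,
    clamped to stay inside the src_w x src_h source frame."""
    x = min(max(subject_cx - out_w // 2, 0), src_w - out_w)
    y = min(max(subject_cy - out_h // 2, 0), src_h - out_h)
    return x, y

# 4K source, HD delivery, subject near the right edge of the frame:
x, y = crop_window(3840, 2160, 1920, 1080, 3700, 1000)
# The crop is clamped at the right edge so it stays inside the source.
```

Running the same computation per frame, with some smoothing on the crop position, turns a static wide shot into a virtual pan that can be refined long after the shoot.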