I feel silly typing the word "manifesto," but it's as close as I can get to what I'm writing here, in response to the "Poll" comments ...
In about 1989 I wrote a draft of the following; it was published, somewhat modified, as the guest editorial in New Media Magazine in roughly 1990-1992, somewhere in there. (I no longer have a copy of the issue it appeared in; the chaos of the last 6 or 7 years of my life has resulted in me losing a bunch of things, and this was one of them.)
The House Un-American Activities Committee didn't put a dent in it. The Hays Code didn't slow it down. It even survived The Sting, Part II. Computers, however, are going to destroy the motion picture industry as it's existed for close to a century.
There's a numbers game I play. It goes like this:
                  (NTSC Video)    (Digital film)
    16.7M colors  24 bits         24 bits
Digitized in true color, a single second of NTSC video (your tv signal is NTSC) requires roughly 29MB storage uncompressed. With MPEG or similar compression (i.e., lossy moving-image compression standards, meaning they result in some loss of detail), you get good quality at 10:1 or 15:1 compression. With these numbers, a 22-minute sitcom, for example, requires 36.52GB storage before compression, or 3.5GB after compression. Once digitized, this video can be edited: titles and overlays can be added, color values adjusted, wipes and transitions added, filters applied, special effects (FX) inserted, zits removed--
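The arithmetic behind those figures can be sketched in a few lines. The frame size, pixel depth, and frame rate here are my assumptions (the essay doesn't state them), chosen so the results land near its numbers:

```python
# Back-of-envelope storage math for uncompressed true-color NTSC video.
# Assumed (not stated in the essay): 640x480 frames, 3 bytes/pixel
# (24-bit color), 30 frames/sec, decimal MB/GB.
WIDTH, HEIGHT, BYTES_PER_PIXEL, FPS = 640, 480, 3, 30

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL         # 921,600 bytes
bytes_per_second = bytes_per_frame * FPS                   # ~27.6 MB/sec

sitcom_seconds = 22 * 60                                   # a 22-minute sitcom
uncompressed_gb = bytes_per_second * sitcom_seconds / 1e9  # ~36.5 GB
compressed_gb = uncompressed_gb / 10                       # ~3.6 GB at 10:1

print(f"{bytes_per_second / 1e6:.1f} MB/sec uncompressed")
print(f"{uncompressed_gb:.1f} GB per sitcom, {compressed_gb:.1f} GB at 10:1")
```

Under these assumptions the sitcom comes out at about 36.5GB uncompressed and 3.6GB at 10:1 -- the same ballpark as the essay's 36.52GB and 3.5GB, which evidently used slightly different rounding.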
Right now it's expensive to do this. All of the hardware involved is expensive. But things get cheaper in this industry....fast.
When you do the numbers for film, as opposed to videotape, they get real hairy. We're, at best, several years away from creating fully digital films, except as a stunt.
Back to our chart; one minute of digital film equals roughly 17GB. Star Wars, at two hours, requires 2073GB, or two terabytes uncompressed. With compression, we'll guess (conservatively) that 200GB will hold one high-quality copy of Star Wars. (With mature fractal compression algorithms, it will likely be less.) That seems like a lot, but my guess is that if we want to, we can have a terabyte on our desktops in fifteen years, and at reasonable cost.
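A quick sanity check of those film figures. The frame geometry behind the per-minute number isn't given in the text, so this sketch only scales it up to feature length; the 17.28GB/minute constant is back-derived from the stated 2073GB for two hours:

```python
# Scaling the essay's digital-film figures up to feature length.
GB_PER_MINUTE = 17.28     # back-derived: 120 minutes -> 2073.6 GB
RUNTIME_MIN = 120         # Star Wars, at roughly two hours

uncompressed_gb = GB_PER_MINUTE * RUNTIME_MIN  # ~2074 GB, i.e. ~2 TB
compressed_gb = uncompressed_gb / 10           # ~207 GB at a 10:1 ratio

print(f"{uncompressed_gb:.0f} GB uncompressed, ~{compressed_gb:.0f} GB compressed")
```

A conservative 10:1 ratio lands at roughly 207GB, in line with the essay's guess that 200GB will hold one high-quality copy.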
What we call multimedia is, as they say, the bastard child of a thousand maniacs --except that instead of growing up into Freddy Krueger, it will grow into digital film. When, for $10-20K, an artist can create the sorts of FX that enlivened Star Wars, you'll see a lot more movies being made, for a lot less. Within the next 20 years, I fully expect to see several thousand movies released each year, digitally rendered, edited, and mastered, with only final distribution existing in analog form -- if that.
Regardless of your familiarity with movie production, you have in your life seen many movies, and seen the concluding, lengthy credit crawl. This is one reason movies cost as much as they do. (Others are inflated actor, producer, director, and, sometimes, writer salaries, the "above the line" costs; and large-studio inefficiency.) It takes anywhere from dozens to thousands of people to make a modern motion picture -- but this will change.
With digital movies, the crew necessary to make a motion picture will shrink dramatically. Envision this scenario:
A group of actors assembles on an empty sound stage. They're not costumed, not made up, and the stage is not dressed. There are few or no props. Markers on floors and walls indicate the perimeters of the virtual set. Doorways, chairs, tables, etc., are in place to help the actors orient themselves, to keep them from walking through walls accidentally, or falling down when they're supposed to sit.
The actors act out their scene, saying their lines, while the digital cameras record. Possibly they're recording live sound; likelier they're just recording movements and facial expressions. Total crew: a few camera operators, sound (maybe), director, some stage hands, a very few support persons.
The digital film is taken into the studio, and a rough cut assembled. On top of this rough cut, the editor -- who combines the modern roles of editor, FX expert, makeup, set designer, etc. -- lays in sets, backgrounds, costumes, any FX, using techniques like those employed by film colorizers today, but on a platform orders of magnitude faster, and with far better software. (The director, editor, and possibly a camera operator, might be the same person.)
Once the visuals are completed, sound is laid in. This process would not be much different from the one engaged in today: a director and musician going through the film second by second, the director explaining the effect he wants, and trusting the musician to score it correctly. A sound editor (the musician or director, or a third person) would go through it as well, laying in the sounds -- voices, footsteps, doors, vehicles -- that make a film sound "real."
It would not surprise me to see a good movie made by a crew of as few as six people.
This is not going to happen all at once, of course, but some of the pieces are appearing now. FX was the first field to be impacted by computers; set design and art direction will be next. Editing might go fully digital around 2000; and the editing/composition tools might drop into the $10-$20K range by 2005.
But it's starting now, and don't think it's not.
A friend who works for Amblin Entertainment, Spielberg's company, recently demonstrated for a Universal executive some neat FX that were done with a Video Toaster and Iris workstation. In essence, they had just tossed down work that would have required modelmakers, set designers, art directors and an FX house weeks or months by the old techniques.
The exec watched with pleasure, and when they were done, said, "How long before you can do this with actors?"
The "friend" mentioned here is Karl Martin, my partner in Queen of Angels.
The guess about editing going fully digital around 2000 is pretty much on target -- there are still analog components in most edit suites, but the switchover is very much under way and nobody is buying much in the way of new analog editing equipment.
What does this have to do with anything?
The novel is a 98% complete art form. (Moran's Rule of 90: 90% competency is the place where professionalism appears. A 90% competent writer can produce fiction worth reading -- no glaring errors in form, characterization, or dialog. Somebody, somewhere, will probably publish it. You can reach 90% with a couple of years' hard work in most artistic fields, if you have any native talent at all. You will then spend the rest of your life mastering the remaining 10%.)
The novel as a field was running around the 90% mark for much of the 19th century, until Mark Twain wrote "Huckleberry Finn." As Hemingway noted, "There was nothing before it. There has been nothing as good since." It was the first masterpiece of American fiction, and you can still read it today -- I've heard the argument that the English language will never change to the point where Shakespeare can't be read by a native English speaker. I'd borrow that argument and say that American culture will never change to the point where a native American won't be able to read "Huckleberry Finn" and understand it.
The last serious assault upon the structure of the novel took place in 1922. That was the year "Ulysses" was published by James Joyce. I have to concede that I've never managed to finish "Ulysses," though I've made my way halfway through it on a couple of occasions. It's not the sort of work that I enjoy, for the most part -- but it's hard to argue that the work as a whole is not a success at what it sets out to do.
There have been great novels published since 1922, certainly. "Catch-22." "Lonesome Dove," in a completely traditional format. The U.S.A. Trilogy, by dos Passos; "Lolita," "Under the Volcano," "The World According to Garp," "Another Roadside Attraction" -- from my own field I'd include "Childhood's End" and "The Demolished Man." But the definition of masterpiece does, I think, include that the work must redefine the boundaries of the form in which it takes place. By this definition there are fewer masterpieces written today. A novel, no matter how brilliantly executed, lacks the shock of the new that greeted the reading public when John dos Passos included sections called "The Camera Eye," and others that were textual newsreels in "The 42nd Parallel." Others were there before us, charting out the land.
Science fiction, like all hardcore genres, presents a special case. Most early science fiction novels were badly written. Asimov had a tin ear; Heinlein was not much better. Only Clarke of the "Big Three" who defined the middle of this century wrote with anything that approached grace. The field ran two generations or more behind the mainstream of fiction in the United States -- the "New Wave" of the 60s was in many ways a science fictional recapitulation of work done by James Joyce and others early in the century. The expansion of SF writing to include sex, or scenes written in present tense or second person, or lengthy interior monologues or any of a dozen other devices ... was a retracing of steps. Others had been there before them.
The modern field of SF is only a few years behind the mainstream field, partially because we've been catching up, but mostly because mainstream work hasn't been progressing at the same rate. SF is still not as well written as the best mainstream fiction, which is hardly surprising: there are fewer practitioners of SF and it's harder to write. As Larry Niven observed, a writer of detective mysteries has only to tell you that his hero has walked into a seedy bar and the reader has already done most of his work for him in establishing the ambience of the scene. If Niven or I write such a scene, we have to tell you everything about that bar that differs from the bars we've all walked into. And we have to do it in a seamless, unobtrusive fashion. (One of the few areas where SF writers as a group are better than mainstream writers is just that job of presenting background information without completely destroying the flow of a story. It's a skill we've had to learn to do our jobs with even 90% competency.)
But the gap isn't what it was. New work in the novel doesn't, because it can't, break the new ground established early in this century. We've been in the bedroom, we've even been in the bathroom. We've been in the morgue, the cemetery, the hospitals, the bars and restaurants and offices and schools, we've seen murders and rape and incest, child molestation and war and police brutality, we've seen homosexuals and lesbians, abused wives and serial killers, Noble and Not-So Noble immigrants, death squads and freedom fighters and bribery and assassination -- we've even seen normalcy rendered in such a fashion as to be interesting, high school and college graduations, elections and pregnancies and tough, hard working, decent people doing their best in trying circumstances. We've seen it in first person, second person, third person, in omniscient author, in camera-eye view, in present tense and past tense and future tense and mixtures of the lot, told forward in time, backward in time, and bouncing around sideways as it suited the author -- and the authors can't confuse us any more because no matter how far off the track you go, we've been there and done that.
People still write great novels. The world around us changes and the circumstances of our lives change in ways that are worth examining, and we all still want to be entertained, and many of us still read. But it's been 76 years since "Ulysses" was published, and the novel is grown up -- middle-aged, even -- and that includes the science fiction novel, and the mystery novel, and the western, and all the other genres. I won't be Robert Heinlein when I die, even if I wanted to be, even if I was good enough. He was here already once.
In some of my novels I refer to "sensables."
In my imagination, a sensable is a story you live, from the viewpoint of any person in the story. You put the traceset on your temples, and it directly stimulates your auditory and visual senses, tactile and smell and taste. It's a recreation of reality at sufficient resolution to pass for reality --
-- and it's coming. It may not arrive in my lifetime, but I never expected affordable 3D animation within my lifetime. I saw 3D coming when I was still a kid -- there's a scene in "The Moon is a Harsh Mistress" where Mike, the computer protagonist, creates "Adam Selene," a virtual human being, in realtime on a video monitor. It's portrayed as being an extremely difficult task for a very bright AI ... in 2076. At the time I accepted the date -- it seemed about right from what I knew of the computing power necessary, of the video rates that would have to be sustained for a simulation of "reality."
We're going to beat that one by two or three generations at least.
We don't actually perceive reality, which is one of the reasons Virtual Reality is possible. We already live in a Virtual Reality Of The Senses. Our eyes are low-resolution, low-bandwidth cameras. Our senses of smell and taste are crude mechanisms for analyzing chemicals. You can tell by touch which side of a coin you're touching, but that's about the limit of your sense of touch's resolution. We already make hearing devices that are far better than those evolution put in our ears. So it seems possible to me to create plausible virtual reality -- perhaps within my lifetime.
There are more great novels than great movies. Several hundred movies are released each year, and with the proliferation and declining cost of the tools of production, the number is rising. But there are thousands (tens of thousands?) of new works of fiction published every year. (Over a thousand SF novels are published each year.) It's much easier to create a great novel than a great movie -- because only one person has to work at the height of his abilities. Film, being collaborative, requires that large numbers of people must do good-to-excellent work, before a great movie can occur. This is why Steven Spielberg, 50 years after the end of World War II, long after the deaths of most of the people who fought in that war, can create a movie called "Saving Private Ryan" that is being hailed as the definitive movie on combat in World War II -- "Saving Private Ryan" is an extraordinary piece of work and there haven't been that many World War II movies. (A few hundred, at a wild guess.)
Movies are harder than novels. Bigger challenges.
I started playing with 3D software back in '91 or '92. I had a 25-MHz 386 computer and it used to take me most of the night to render a single frame in 640x480 resolution. Intellectually I knew the machines were going to get faster -- but I'd have been stunned to learn that in 1998 I could render scenes far more complex than the ones I was working on in '91, in two to three minutes per frame without raytracing. (I.e., shadow maps and Phong shading, for those of you who know what I'm talking about.) With raytracing one of those all-night frames climbs up to 8 or 10 minutes, on current hardware -- and if reports of the rendering power of the hardware raytracer I recently ran across are true, that 8 or 10 minutes drops down to 5 or 6 seconds, with the purchase of about $60,000 worth of equipment, which is trivial in terms of film and video production.
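As a rough sanity check on those speedups -- taking "most of the night" as eight hours, which is my assumption, and the rest from the figures above:

```python
# Rough speedup ratios for the render times mentioned above.
night_1991_s = 8 * 3600      # one frame on a 25-MHz 386; 8 hrs is assumed
scanline_1998_s = 2.5 * 60   # shadow maps + Phong shading, ~2-3 min/frame
raytrace_1998_s = 9 * 60     # software raytracing, ~8-10 min/frame
hw_raytrace_s = 5.5          # reported hardware raytracer, ~5-6 sec/frame

print(f"{night_1991_s / scanline_1998_s:.0f}x faster than 1991")  # ~192x
print(f"{raytrace_1998_s / hw_raytrace_s:.0f}x from hardware")    # ~98x
```

Call it two orders of magnitude per jump -- which is the point: each generation of hardware doesn't shave a little off, it changes what's feasible.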
Film isn't yet a mature field. It's young by comparison to the novel, and it's a far more difficult discipline under the best of conditions. There's a history to it, and you can make some good guesses about where it's going as a storytelling medium based on its history to date -- but if film were a child, it would be a ten-year old.
3D, by comparison, is a toddler. Virtual Reality is a newborn. In a hundred years people will look back on this field, and among its practitioners they will find a Spielberg, a Scorsese, a Lean, a Hitchcock and a DeMille, a Wilder and a Chaplin; a Dos Passos, a Joyce, Nabokov, a Faulkner or Hemingway or ... Twain. With whom I share a birthday, November 30th.
I'd really intended to spend my life writing the Tales of the Continuing Time. I'm never going to stop writing those stories, and if I live long enough, I still intend to finish them. But, when I finished ... I'd be Greg Bear. Or Benford, or Niven, or, if I were luckier than I deserve, Le Guin. I doubt that it's possible for me to have the place Heinlein has in SF, or Tolkien in fantasy, or L'Amour in Westerns, simply because those spots are taken.
You know who's working right now who I really envy? Joe Straczynski. Over the last five years he's written a story of such depth and sweep that no SF writer has ever really equaled it. There's probably not a person on this list who doesn't know what story I'm talking about, either. It's not an accident that his show relies heavily upon 3D technology; he couldn't have done those stories without it. He's working at a much higher level than I am right now, with much better resources -- but he doesn't look so far ahead of me, from where I'm standing now, that I don't think I can catch him.
But probably not with live action. 3D is improving at such a breakneck pace that I think we'll see believable humans -- I don't mean "plausible" humans, I mean believable -- within the next few years. Maya, a new platform that's just shipping (and costs a ridiculous amount) has character tools built into it that have to be seen to be believed. To quote myself in 1989 ...
"Right now it's expensive to do this. All of the hardware involved is expensive. But things get cheaper in this industry....fast."
Today I can take Trent the Uncatchable, put him on a rooftop, and make him run across it and jump to the other side ... and I can animate the entire thing within about 20 minutes. I can make him talk -- or at least, open and close his mouth without making the mesh composing his face break up. I can't make him smile believably yet, but I'm working on that. (The mesh deforms in odd ways ... but other people have solved this problem, and I'm going to.) I can't make his eyes crinkle as though he's amused, and trying to make wrinkles appear in his forehead when he frowns is tough.
But I'm going to solve those things, too.
In 2005, I'm going to load a file and open up Trent -- or Camber Tremodian, or Ola Blue, if Trent and cast are off-stage by then -- and put on a VR suit with gloves and boots, sensors scattered across my body, other sensors attached to the muscles on my face -- and say the dialog I've written for them. And MY IMAGE will act out, in a 3D world, my motions in the real world. Trent will smile when I smile, frown when I frown. (Of course, I'll still animate jumping-off-roofs by hand.)
And then one by one, I (or I and other actors) will walk through the other roles, saying their dialog and making their motions --
By 2010 I'll be wearing a bodysuit that gives me resistance when I pick up a virtual cup of coffee, and that plays back, in real time, the scene my Image is running inside, on a small LCD panel that's floating in front of my eyes. (No goggles ... they'd interfere with the sensors catching the muscles on my face.) And I'll be able to animate only a little more slowly than I can write --
I'm the luckiest man alive. I'm going to live to see these things happen, to have my best shot at being the Twain of this field, or if not Twain, the Hemingway, or the Dos Passos ... or the Moran.
I'm never going to stop writing novels, as far as I can see; I enjoy it too much and there are things you can do with text that you cannot do in 3D. (Though it seems to me that there's no reason a channel with text in it couldn't be poured through your traceset, at odd moments ...) But neither am I going to stop animating. Ever --
There is no one in the world I would trade places with, and that's the truth.