It Came From the Computer. It Ate the Movies.
There might have been a moment of late when you sat in a movie theater struggling to remain awake as some entirely convincing space battle between two starships a long time ago in a galaxy far, far away took place in front of you. (It was so convincing, in fact, that you forgot it wasn’t real, which didn’t make it any more interesting.) Then you realized that you had just seen a few other entirely convincing and equally boring space battles happen in some movie about superheroes a month earlier, and would likely find yourself watching another in a month or two. If I am describing your reality – and if you have kids who like the movies or are a habitual moviegoer yourself, this is your reality, my friends – you have every right to stop and ask yourself this question:
What hath the Stained-Glass Man wrought?
There’s a scene in an otherwise forgettable 1985 film called Young Sherlock Holmes during which a figure pops out of a stained-glass window and walks down a church aisle brandishing a sword. It lasts all of 30 seconds. As it turned out, those were the most revolutionary 30 seconds of cinema since Al Jolson spoke the words “you ain’t heard nothin’ yet” in 1927’s The Jazz Singer and ushered in the age of the talking picture.
The Stained-Glass Man was the first wholly digital human character ever created, by which I mean he was not photographed at all. Rather, he was designed on a computer, and that computer then directed a laser beam to etch the Stained-Glass Man’s image onto film, frame by frame, 24 frames for every second of screen time. He moved and the camera moved with him, at one point circling him to show that, though flat, he occupied three-dimensional space. Those 30 seconds took a team of Oscar-winning special-effects wizards six months to complete. The Stained-Glass Man was the work of a new division of George Lucas’ company, Industrial Light & Magic. The division was called Pixar. In a decade’s time, Pixar would begin to make a string of wildly successful animated films in whose cinematic preparation or execution no pen ever touched paper.
Jolson had made movies speak. The computer made movies visually limitless. But what was different about “computer-generated imagery” was that, in the words of the film historian Stephen Prince, it made possible the creation of “credible photographic images of things which cannot be photographed.” Steven Spielberg describes seeing the raw CGI footage of dinosaurs running across a monitor for his 1993 film, Jurassic Park, as the moment he knew movies had changed forever, precisely because you could make a dinosaur look as if it existed in real life. But the true visionary of the new technology was James Cameron, who had used CGI to make it appear as though a column of water had consciousness in his 1989 picture, The Abyss.
Cameron saw how he could use these techniques to make non-existent settings seem real. As Prince writes of a later Cameron movie, “One of the more spectacular digital images in True Lies is a long shot of a chateau nestled beside a lake and surrounded by the Swiss Alps. The image is a digital composite, blending a mansion from Newport, Rhode Island, water shot in Nevada, and a digital matte painting of the Alps.” Cameron would take this magician’s trick to the limit with his recreation of the RMS Titanic in a 1997 movie that featured 500 different effects shots and became the box-office champion of all time. That is, until it was bested by Cameron’s own Avatar in 2009, which is largely set in a non-existent world called Pandora, and into which the faces and bodies of the actors were digitized and then inserted. Avatar made $2.8 billion.
What Cameron did with Titanic suggested there might be a glorious future for digital filmmaking: a future in which the past could be resurrected in a way it had never been before, and in which moviemakers could work almost like novelists in the sense that they could simply conjure any reality they chose and put it onto the screen. But things haven’t turned out that way, as Cameron’s later triumph with Avatar proved. The computer’s dominance of big-budget filmmaking has not led to a new kind of hyper-realism that would give writers and directors new freedom. Instead, it has addicted Hollywood to unreality of an entirely new sort.
The best example of this is the rise of the comic-book superhero movie. For decades, both on television and in the movies, this stuff was scraping-the-bottom-of-the-barrel entertainment. Special effects were so primitive that the sight of a man trying to fly, or someone using some form of superpower, was more apt to be risible than exciting. (This is why the genre tended to degenerate into camp. It was so self-defeating that it had to make fun of itself before you made fun of it.) But in 2002 came Spider-Man, the first superhero movie of the digital age. And it was a smash. It was soon joined not only by its own sequels but by a continuing series of successful movies based on the X-Men comic books. And then, in 2008, Marvel itself made the leap into film production with Iron Man. This was, as the younglings say, “the game-changer,” as the comic-book picture joined with science fiction and animated features to serve as the mainstays of Hollywood.
If you go through the list of the top-grossing movies of all time in the U.S., you literally have to travel down to No. 35 – thirty-five! – to find one that isn’t dominated by special effects or digital work. It’s Mel Gibson’s The Passion of the Christ. And it’s not until you get to No. 43 that you find one set in the present day (American Sniper). What this tells you is that for most moviegoers under the age of 40, going to the movies is about seeing special effects, and it has been for nearly 20 years.
Now, switch over to the chart that shows you the most successful movies of all time adjusted for inflation and the story is radically different. The top 25 on that list include Gone with the Wind (at No. 1), The Sound of Music, Dr. Zhivago, The Godfather, The Sting, and The Graduate. These movies conjure up a different kind of moviegoing experience – one that’s story-based and involves recognizable people in recognizable settings facing problems in the real world. This was what the movies were for – to reconfigure reality in a highly dramatic (or comic) way, with the goal of entertaining people by making them feel as though they might live through what the characters are living through. That was the nature of the regular fare produced by Hollywood that didn’t make it to the all-time charts – the singles and doubles and triples, if you will, rather than the home runs.
The breakup of the studio system half a century ago and the subsequent takeover of the entertainment business by corporations that sought to impose a rational financial framework on a creative medium that is – due to its reliance on crazy people – irrational by definition, changed the moviemaking game. It became a wiser and more prudent play to swing for the fences and strike out than to make a decent pile of cash over a long period of time. There was a time when a relatively small, relatively realistic, even relatively downbeat movie could catch a cultural wave and become an object of intense discussion and interest. I think of late-1960s fare like Midnight Cowboy, or Bob and Carol and Ted and Alice, or Easy Rider. Each of these movies became a cultural sensation and, relatively speaking, made a huge amount of money. What they had in common was that they dipped into controversial subject matter and were made for adults. Controversy does nothing for a movie now. In fact, controversy probably hurts. And movies aren’t made for adults, because the ideal movie viewer is a kid between 12 and 30 who might be induced to see the thing again and again.
Studios could make those smaller movies – indeed, mostly made those smaller movies – because if they failed, the damage was limited. But in the corporate era, the opportunity costs of producing more modest works that will merely double their small investments have come to seem too high to Hollywood’s present-day panjandra – the producers and executives who used to live only in terror of being ousted by a Machiavellian underling, but who are now in terror of being held to account for their many molestations.
If you think about every movie as an individual start-up, you can see why. What’s better? To make the rounds of venture capitalists with an interesting app for which you only need $1 million in seed money, or to have a wildly ambitious product with potential global appeal for which your number is $100 million? For one thing, if you can raise the $100 million, you can pay yourself a lot more and spread a lot more money around to your friends and others with whom you might be in business at a later date if the start-up fails. For another, people might take your bigger number more seriously than your smaller number.
So the institutional and personal bias is toward huge projects that shower money all over the industry, because you make money while they’re being made and your company will make a huge amount of money if one hits big. And the big movies are the digital movies, the CGI movies, the movies that don’t tell you a story but take you on a ride.
Truth to tell, if CGI and all the tools of digital filmmaking had been available as the motion picture became the dominant medium of the first half of the 20th century, realistic cinematic storytelling might never have evolved at all. The ability to thrill and captivate through the creation of alternate worlds and alternate realities is so seductive, both for audiences and moviemakers, that it would have been hard to resist. Indeed, some of the earliest surviving films, made by the French director Georges Méliès beginning in the 1890s, are dominated not by story but by visual and cinematic tricks.
Look. I’m 56. I’ve been going to the movies for 50 years now. And I don’t need a medium that has returned to its infancy, especially since there’s a chance I might be returned to my own infancy soon enough. I need a plot. (No, not a cemetery plot.)