Saturday, May 28, 2011
. . . in 3-D, no less?
Movies can convey epic scale effectively, as did Lawrence of Arabia, and grand, sweeping vistas of destruction and chaos have become expected parts of CGI battle scenes in movies such as The Lord of the Rings. The cinematic Thor contains its share. But watching it through my 3-D glasses was a strange experience. Rather than immersing me in the spectacle, and expanding the scale of the show, the 3-D effects shrank and contained it. It was like watching Thor in a puppet theatre, or . . . (more to the point) . . . reading it in a comic book. At one point, for instance, during a grand battle scene the camera blasts through a crumbling wall to reveal a vast hall containing an army of warriors. The moment should have been spectacular. Instead, it was as if the camera had taken the top off an anthill and revealed all the tiny creatures scurrying around.
Roger Ebert has written at length about how much he dislikes 3-D movies. His points are hard to argue with: the process darkens the image, the effects are often distracting, and too often they are not used in service of the story. The only other 3-D movie that I've seen recently, James Cameron's Avatar, seemed to use the process more effectively: I did get a feeling of depth and beauty from that film that seemed connected to the 3-D format. Thor is one of those movies that was filmed in 2-D and then converted, using a digital process, to 3-D for the screen. (Perhaps I'll go back and see Thor in 2-D, and see whether it seems more epic in its scope and imagination.)
But, strangely enough, watching Thor in 3-D brought me back powerfully to my days as a teenage comic-book collector. By shrinking the scope of the film and, in effect, putting it back in the comic-book frame in which I first encountered the characters and stories, the 3-D presentation felt truer to the original than a full-scale movie epic, where vast expanses and teeming masses of characters fill the wide screen in a way that the eye can't process, overwhelming the senses. If I'd seen it in two dimensions, I'd probably have just found it to be an overly grandiose rendering of a comic book story that doesn't really stand up as a story for grown-ups. Instead, it took me back to a time when I didn't expect that of the books I read.
Seeing Thor in 3-D made it more two-dimensional. I feel sure that wasn't what the filmmaker intended, but I kind of liked it.
Saturday, May 21, 2011
So the world didn't end today, with either a bang or a whimper. We can all feel smug and laugh at the poor deluded clowns who were talked into getting rid of all their worldly possessions in anticipation of the Rapture. But are we any less risible in thinking that it began with a bang?
The Big Bang theory, by which most scientists today would explain the origins of the universe, was first proposed in 1927 by Georges Lemaître, a Belgian Catholic priest who taught physics.
That seems more ironic today, when religion and science have become, in the words of Stephen Jay Gould, “non-overlapping magisteria,” than it was a century ago. But it really shouldn’t come as such a surprise.
According to current scientific consensus, some 13.7 billion years ago all of existence consisted of what physicists call a singularity—a state of infinite density, pressure, and temperature in which the laws of time and space as we know them did not operate. From that singularity, the matter that we call the universe expanded with incredible force and rapidity, sending what would become the stars, the galaxies, and other stuff that we can’t see or detect whirling outward, faster and faster. It’s a hard concept to wrap your brain around, because answers to the common-sense questions that it occasions seem more like philosophy than physics. (What existed before the singularity? If nothing existed, where did the something come from? If something existed, where did that come from? If the universe is now expanding faster and faster, what lies beyond the zone of expansion? How can matter emitted from a “bang” actually speed up?) The answers call into question basic human concepts of reality such as existence, space, before-and-after, and so forth.
Most contemporary cosmologists, like Stephen Hawking, scoff at the idea of a divine cause for all this, but one can see why a Roman Catholic priest might find it compelling. When you get past the mathematical proofs, the theory appears to leave room for an act of Creation that defies the natural rules we live by. Whether it does or doesn’t is as much a matter of faith for scientific cosmologists as it is for religious believers.
I like to imagine that those who pooh-pooh the idea of any sort of supernatural agency would prefer to rewrite Genesis as a sort of self-executing computer algorithm:
In the beginning when the Program rendered the heaven and the earth, there was a singularity (a One) where before there had been nothing (a Zero). Now since the One was undifferentiated, and the Zero was void, an “on” and “off,” therefore darkness was upon the face of the deep and the face of the waters, which were variables derived from the Zero and the One but had yet to be defined. And calculating that the creation required further definition, the program caused ones and zeros to propagate, and there was light. This was version 1.0, the first day.

Such a digital account of Creation leaves unanswered the question of where the One comes from, as well as the origins of the Program itself and the reason why it self-executed. Its main difference from the mythological account in Genesis is that it lacks an analog God created in Man’s image. We have a lot more data than did the author of Genesis. I'm not sure we have more answers.
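Still, for the literal-minded, here is a purely whimsical Python sketch of what such a self-executing Program might look like. The function name and the propagation rule are my own inventions, not anyone's cosmology:

    # A toy "digital Genesis": start with a lone One amid Zeros and let
    # the ones propagate outward, one version per day. Purely illustrative.
    def let_there_be(days=7, width=33):
        world = [0] * width
        world[width // 2] = 1  # a One where before there had been nothing
        for day in range(1, days + 1):
            # a cell turns on if it or either neighbor was on: light spreads
            world = [1 if any(world[(i + d) % width] for d in (-1, 0, 1)) else 0
                     for i in range(width)]
            print(f"version {day}.0:", "".join(".#"[b] for b in world))

    let_there_be()

Run it and the ones spread outward from the initial singularity, release by release, which is about as much theology as a for-loop can bear.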
Monday, May 16, 2011
When IBM's Watson beat the human champions at Jeopardy!, what made its victory a little unexpected was that instead of us playing a game within the world created by the computer, the computer competed in a game that we ordinarily play against each other, in our world. In a sense, it beat us at our own game.
But was that really so frightening? Unlike us, Watson did not know it was playing a game, in the sense that we view games as differing from “real life.” No one is yet claiming that Watson is sentient: it remains a logic engine with access to a vast database of factual information, and instructions about how to respond to real-world ambiguities; its answers depend on probabilities stemming from how well or poorly it understands the questions asked.
The real challenge for Watson was not so much in knowing the answer as it was in understanding the question: once its programmers had taught it how to make sense of natural language, and the computer could translate that input into the sort of logical query required to analyze the vast amounts of data available to it, it became a matter of processing power to make it practical to compete in real time against human opponents. And its power was such that within the limited universe defined by the rules of Jeopardy! it became very hard for a human being to beat it.
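To make that shape concrete, here is a toy sketch of the pipeline described above. It is mine, not a description of IBM's actual DeepQA system: crude parsing of the clue, candidate lookup, a probability-like score for each candidate, and a confidence threshold for deciding whether to buzz in.

    # A toy question-answering pipeline, nothing like Watson's real scale:
    # keyword overlap stands in for Watson's far more sophisticated
    # probability estimates.
    TINY_DATABASE = {
        "Thor": {"norse", "god", "thunder", "hammer"},
        "Odin": {"norse", "god", "allfather", "one-eyed"},
        "Zeus": {"greek", "god", "thunder", "olympus"},
    }

    def answer_clue(clue, threshold=0.5):
        terms = set(clue.lower().split())                    # crude parsing
        scored = [(answer, len(terms & keys) / len(keys))    # overlap as "probability"
                  for answer, keys in TINY_DATABASE.items()]
        best, confidence = max(scored, key=lambda pair: pair[1])
        return best if confidence >= threshold else None     # buzz in, or stay silent

    print(answer_clue("this norse god of thunder wields a hammer"))  # -> Thor

Everything hard about Watson lives in the two steps this sketch fakes: turning real natural language into query terms, and estimating how likely each candidate is to be right.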
We’ve yet to create, or encounter, self-conscious artificial intelligence of the sort Hollywood likes to portray. Watson plays Jeopardy! well because the game exists within a carefully limited set of rules that constrain what is possible, not because it finds the challenge consciously stimulating, or fun.
We play our games for different reasons than computers do. Yes, we like rules: basketball courts are ninety-four feet long, with ten-foot goals, and five players on each side who score by putting the ball in the hoop; there are no forward passes in rugby; chessboards have sixty-four squares; two cards are dealt face-down in Texas Hold ’em. For human beings, the challenge is playing within the rules. That’s not a challenge for Watson, which ultimately depends on rules. Within them, it uses raw computing power to explore all possible solutions before settling on the most probable or efficient: its real challenge would be playing where there are no rules, or making them up as it goes. Humans do that all the time.
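Here is what "exploring all possible solutions" within fixed rules looks like in miniature. This is my own toy example, not anything of Watson's: solving the game of Nim (take one to three stones per turn; whoever takes the last stone wins) by brute-force search of the entire game tree.

    # Exhaustive search within fixed rules: a position is winning if some
    # legal move leaves the opponent in a losing position.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def can_win(stones):
        return any(not can_win(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    print(can_win(8))   # False: multiples of 4 are losing positions
    print(can_win(10))  # True: take 2 stones, leaving the opponent with 8

Within the rules, the machine never has to be clever; it only has to be thorough. Outside them, thoroughness doesn't even know where to begin.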
Saturday, May 14, 2011
When IBM’s “Watson” supercomputer defeated two human champions on TV’s Jeopardy! in 2011, news reports highlighted cries of discomfort from people like me who grew up watching science fiction movies like 2001: A Space Odyssey, The Terminator, and The Matrix. In all of those popular films, heroic and flawed human beings found themselves pitted against the implacable digital intelligence of machines bent on destroying them.
Watson’s designers playfully tweaked 2001 by having the computer appear on Jeopardy! in the form of a monolithic monitor that recalled the mysterious otherworldly slabs of the movie. Turnabout is fair play: the name of 2001’s HAL is exactly one alphabetical step back from the initials “IBM” (H⇐I, A⇐B, and L⇐M), a derivation the film’s creators always denied intending but that viewers have long read as a not-so-subtle dig at the computer company’s vision of a clean, technological future. Watson’s dispatch of the human champions on television was almost as bloodlessly efficient as HAL’s attempt to kill off his carbon-based spacemates on the way to Jupiter.
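For the curious, the letter shift is simple enough to verify in a single line of Python:

    print("".join(chr(ord(letter) - 1) for letter in "IBM"))  # prints HAL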
Most of us, though, took the news of the computer’s triumph pretty calmly. After all, we’d been getting our butts kicked by computers in games regularly, for years. And wasn’t this merely another game?
Computer games arrived just as my generation was getting ready to enter college. The first hit was Pong, which allowed two players to compete against each other in a tennis-like match that employed a glowing cursor on the home TV rather than a bouncing ball. If you were playing solo, you could compete against the computer. My first real introduction to video-game culture came during a 1978 summer trip to Japan, where the kids from our host family took me to a Kyoto arcade where everyone was crouched over tables playing Space Invaders half a year before the craze caught on big in the States. Most of the Japanese kids were expert warriors; the Invaders slaughtered me. Next thing I knew, when I got back home, the pinball machines at my college were being replaced by Pac-Man cabinets, and electronic gaming was here to stay.
I never really got into it in the way that a lot of my friends did — and certainly not in the way that our younger brothers and sisters and cousins did, many of whom grew up with Atari and Nintendo consoles in their homes. They enjoyed the flow and rush of the games, and the ways in which they could immerse themselves in the unfolding stories that the games told. I always had trouble getting past the notion that no matter how much I practiced, and how skilled I became, I was still playing a game by the machine’s rules, and that it would ultimately overwhelm me with its logic and relentlessness.
Dedicated gamers, by contrast, loved pitting themselves against the machines: The Matrix is, in essence, a gamer’s paranoid fantasy in which reality is revealed as artificiality — a video-game illusion with real-world stakes — and the human hero is a free-spirited savant who both refuses to play the game by the machine’s rules and at the same time is willing to enter its virtual world in order to defeat it.
Like the Matrix, computer games ask us to accept the rules by which their imaginary world operates and give ourselves over to the story that the game spins for us — a story in which we seem to have free will and the ability to act independently, but are in fact playing out a role that has been scripted for us by the game’s designers. The essential difference, of course, is that those designers are humans rather than machines and the game is just a game, not a ruse by which we become a natural resource to be exploited for energy. If gamers sometimes get carried away by their game-worlds, even the most fanatical knows, at a certain level, that it’s just play.