CNN has an amazing photo "spread" of the inauguration. They used a new photo stitching system called Photosynth to turn hundreds of shots from all over the mall into a virtual 3D space you can walk through. People sent in pictures from their cell phones and digital cameras, making this a very cool piece of collaborative user-generated content. As more photos come in, the "synth" will get bigger and richer.
Here's a screen shot from CNN's site.
To see the whole thing go to The Moment on CNN's site. It's easy to get "lost" just wandering around the crowd. I showed it to my artist wife. She usually isn't impressed when geeky little me says "Want to see something really cool?" But not this time.
You'll have to install the Photosynth viewer. It's a beta/1.0 release from Microsoft, so it's currently only available on Windows XP and Vista, plus Macs set up to run Windows in a VM or dual-boot configuration. You need a decent graphics card to get a good experience. It also has to move a lot of pixels, so don't try it on dial-up.
More And Better History
Just the walk-throughs are cool enough, but imagine what we could do with metadata from the photos! As you walk through the space, you could see who shot each picture and read the blogs or tweets they posted when they took it. You could create a 3D photo blog capturing the impressions and thoughts of hundreds of people.
Or maybe they could add a time dimension and soundtrack to the synth. Imagine being able to watch the Mall fill up and wander through it before, during and after the event. All while listening to Obama's inaugural address and the reactions of the people watching.
Before his death, Douglas Adams, author of The Hitchhiker's Guide To The Galaxy, was exploring ideas for creating interactive, user-generated, real-time travel guides and history books. He couldn't do it then because the technology wasn't there.
Now with cell phone cameras, 3G wireless Internet and Web services like this, we could actually build the Guide.
How Does It Work?
If this were an episode of Mythbusters, they'd include a "Warning: Science Content" disclaimer here. BTW, Mythbusters is my favorite show. Told you I was a geek.
Imagine you took a bunch of photos inside a room, shuffled them and gave them to someone. That person would be able to compare the photos to each other, figure out what part of the room each one represents and build a 3D model of the room. Just from the 2D photos.
Photosynth does the same thing, only using hundreds of photos taken by hundreds of different people. When you "move" through the synth, you're actually changing your point of view in the virtual 3D space to wherever the photographer was standing when they took the shot. If you have enough photos, it can recreate a space as large as the National Mall, which is over 2 miles long.
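For the extra-geeky: the heart of this trick is triangulation. Once the software has matched the same feature (say, a corner of the Washington Monument) in two photos and figured out where each camera was, it can intersect the two viewing rays to place that feature in 3D. Here's a toy sketch of just that step, with made-up camera positions and ray directions; real systems like Photosynth first have to estimate all of this from the image features themselves.

```python
# Toy sketch of the triangulation step behind photo-based 3D reconstruction.
# We assume two camera positions and viewing rays are already known
# (hypothetical numbers); real pipelines estimate these from matched features.

def triangulate(c1, d1, c2, d2):
    """Return the point closest to both rays (camera center c, direction d).

    Solves the two-ray least-squares problem: find t1, t2 minimizing
    |(c1 + t1*d1) - (c2 + t2*d2)|, then take the midpoint of the two
    closest points.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    r = [c1[i] - c2[i] for i in range(3)]
    a = dot(d1, d1)
    b = dot(d1, d2)
    c = dot(d2, d2)
    d = dot(d1, r)
    e = dot(d2, r)
    denom = a * c - b * b  # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [c1[i] + t1 * d1[i] for i in range(3)]
    p2 = [c2[i] + t2 * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]

# Two photographers at different spots both see the same landmark point.
point = triangulate(
    [0.0, 0.0, 0.0], [1.0, 1.0, 0.0],   # camera 1 and its viewing ray
    [4.0, 0.0, 0.0], [-1.0, 1.0, 0.0],  # camera 2 and its viewing ray
)
print(point)  # → [2.0, 2.0, 0.0], where the two rays cross
```

Do that for thousands of matched features across hundreds of photos and you get the "point cloud" that the synth is built on.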
Check out the Point Cloud view of this Golden Retriever's nose. It will show you how the 3D space is recreated.