Thanks to an introduction from Debbie Millman, I got to have lunch today with Josh Liberson. Josh is the cofounder of Helicopter. (For more on Helicopter, see the paragraph below, courtesy of Debbie's post on Air America.)
Somewhere in the middle of the conversation we found ourselves talking about the social intelligence we would like to have built into our phones. We decided we wanted the social equivalent of Shazam, that wonderful program for the iPhone that allows us to point the phone at a sound and await identification.
What Josh and I wanted was a program that takes a photo of a person, building, or event and searches for a match. Josh and I had lunch in a crowded restaurant. We felt certain that a social Shazam would make fast work of the faces, tailoring, and speech patterns of the guys at the table behind us. This sounds fantastic, implausible. But don't forget, Shazam performs an astonishing act of pattern recognition. It appears to "know" every song in the Western catalog and to be able to find a match in something like 14 seconds.
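For readers curious how that kind of matching could possibly be fast, here is a toy sketch of the general fingerprint-and-hash idea often described for audio identification. This is my assumption about the broad approach, not Shazam's actual algorithm, and the songs, peak landmarks, and function names are all invented for illustration: index each song as a set of hashes built from pairs of spectral peaks, then match a clip by counting hash collisions.

```python
# Toy audio-fingerprint sketch (an illustration, not Shazam's real method).
# Each "peak" is a (time, frequency) landmark; hashes use frequency pairs
# plus the time gap between them, so they survive a time-shifted clip.

def fingerprint(peaks, fan_out=3):
    """peaks: list of (time, frequency) landmarks, sorted by time."""
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.add((f1, f2, t2 - t1))  # time delta, not absolute time
    return hashes

def best_match(database, query_peaks):
    """Return the song whose hash set overlaps the query's the most."""
    q = fingerprint(query_peaks)
    return max(database, key=lambda name: len(q & database[name]))

# Two hypothetical songs, reduced to made-up peak landmarks:
songs = {
    "song_a": fingerprint([(0, 100), (1, 200), (2, 150), (3, 300)]),
    "song_b": fingerprint([(0, 400), (1, 50), (2, 60), (3, 70)]),
}
clip = [(10, 100), (11, 200), (12, 150)]  # a time-shifted clip of song_a
print(best_match(songs, clip))  # → song_a
```

Because the hashes encode only frequency pairs and time gaps, the clip matches even though it starts mid-song; that shift-invariance is what makes a 14-second lookup plausible at all.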
We didn't want to know anything very personal. We wanted a readout like this: the man facing east was raised in Mississippi, got his education in the Pacific Northwest, and has lived in Manhattan for 5 to 10 years. We wanted Dr. Dolittle in a box. I know it sounds implausible. But if you had described Shazam to me 6 weeks ago, I would have said, "Impossible, not in my lifetime!"
Helicopter is a full-service strategic design consultancy founded in 2002. Helicopter has worked with Condé Nast, Capitol Records, André Balazs Properties, Hachette, Time Inc., Warner Brothers, Universal, Arista, Rizzoli, Bloomsbury, The New York Times, and the Washington Post Company on a host of projects ranging from magazine design, book design, identity, web design, and packaging to concept creation and luxury package production. Helicopter has won awards from I.D., Print, the AIGA, and the ADC, and has been nominated for a Grammy Award in packaging. For more information about their work, go to http://www.hellochopper.com
I think it kinda already exists. Check out the PhotoSynth demo on TED: http://www.ted.com/index.php/talks/blaise_aguera_y_arcas_demos_photosynth.html (if the URL is removed from this post, just Google “TED photosynth”). Midway through the presentation they show how far image recognition has advanced.
Also, check out services like Wikitude or Enkin on YouTube. They’re GPS- and camera-based services that calculate where you are and point out on your phone’s screen what you are looking at right now. Wikitude especially is impressive.
I know it’s not the same thing, but it’s in the same ballpark, somewhat.
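The core trick behind those services is plain geometry, and it can be sketched in a few lines. This is a hypothetical reconstruction, not Wikitude's or Enkin's actual code; the points of interest, field-of-view angle, and function names are my own inventions: given the phone's GPS fix and compass heading, compute the bearing to each known landmark and label whichever ones fall inside the camera's view.

```python
import math

# Hypothetical sketch of a GPS + compass "what am I looking at" lookup.

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def facing(poi_list, lat, lon, heading, fov=40):
    """Return the POIs whose bearing falls within the camera's field of view."""
    hits = []
    for name, plat, plon in poi_list:
        # Signed angular difference in (-180, 180]:
        diff = (bearing(lat, lon, plat, plon) - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            hits.append(name)
    return hits

pois = [("Empire State Building", 40.7484, -73.9857),
        ("Statue of Liberty", 40.6892, -74.0445)]
# Standing in Midtown Manhattan, pointing almost due south:
print(facing(pois, 40.7580, -73.9855, 185, fov=20))  # → ['Empire State Building']
```

A real service would add altitude, compass-error tolerance, and a database query for nearby landmarks, but the overlay itself reduces to this bearing comparison.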
I don’t know Shazam, or how it works, but I do know that more than 30 years ago Denys Parsons published a book that could identify many thousands of tunes from the first few notes. Read about it here: http://en.wikipedia.org/wiki/Parsons_code
OK, Parsons used the beginning of the tune alone, so you would have to catch the start of the song with your iPhone. But everything he did was coded by hand. Imagine simply applying the same algorithm to the entire tune digitally.
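That digital version is almost trivial to write. Here is a minimal sketch of Parsons code itself, following the scheme described at the Wikipedia link above: the first note is written as an asterisk, and every later note is coded U (up), D (down), or R (repeat) relative to the note before it. The melody is given here as MIDI note numbers, which is my own choice of input format.

```python
# Parsons code: melodic contour as *, U (up), D (down), R (repeat).

def parsons(pitches):
    """pitches: a melody as MIDI note numbers; returns its Parsons code."""
    code = "*"  # the first note carries no contour information
    for prev, cur in zip(pitches, pitches[1:]):
        code += "U" if cur > prev else "D" if cur < prev else "R"
    return code

# "Twinkle, Twinkle, Little Star" opens C C G G A A G:
print(parsons([60, 60, 67, 67, 69, 69, 67]))  # → *RURURD
```

Matching a hummed query then reduces to comparing short contour strings, which is exactly why a hand-compiled dictionary of them was feasible in 1975.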
Nothing to do with your anthropological read-out, I know, but I just wanted to add a bit of history to the idea.
Do you think we’ll miss public anonymity in the future?
The day when a stranger can snap a quick photo of me and instantly find out who I am, where I’m from and what I do doesn’t seem that far off. Maybe we’ll adjust once the technology is ubiquitous, but I think there could be a rough, spooky transition.
You think that Homeland Security and/or the CIA are already working on this, maybe?