If you haven’t yet signed up for Interaction08, space is filling up fast, so sign up quickly!
This is the first annual IxDA conference and I’m honored to be one of the speakers. It’s an amazing group of presenters, including keynotes by Alan Cooper, Bill Buxton, Sigi Moeslinger and Malcolm McCullough. Read more about IxDA and the conference in an interview with Dan Saffer on Boxes & Arrows.
They also have a pretty cool social networking site for the conference, set up by CrowdVine — if you’re going, please connect with me on that site so we can meet up at the conference.
Minority Report-style ads are coming to a taxicab near you. OK, they aren’t reading your identity off your retina to display targeted ads, but they are using GPS to display ads based on location. I had heard a few years ago that animated Flash ads were being placed on top of NYC taxis, but I can’t find a reference to that now. I did find this story about taxitechnology.com, which is offering ads on touchscreens in 13,000 New York taxicabs.
It’s hard to tell from the marketing materials, but it looks like this has been in the works for quite a while. I thought the interface was pretty ho-hum for all the high tech — shiny buttons, uninspired design. Any New Yorkers tried one yet? Is it cool or lame? Helpful or intrusive?
Digging around some backwoods newsgroups for bits of codec lore to solve a random challenge a couple of weeks ago, I ran across a post by someone at Metavid, and I found the project intriguing. It seems that the video proceedings of the House and Senate are public domain (as they should be), and Metavid is capturing these cable broadcasts and encoding them into a free video format (free as in liberty, free as in beer).
Search their archive for your favorite member of Congress. This ain’t no YouTube — their aspirations for discourse aim higher than that, and you can see that this is still a work in progress. They are working on a wiki interface for video annotation and browsing. The video is worth downloading and checking out. Here’s a screenshot:
This interface would allow the great multitudes to annotate and transcribe the video, facilitating easy text searches. They have a snazzy auto-complete widget for specifying speaker names that even includes a headshot image. I also like the vertical timeline, color-coded by speaker, which allows navigation through time.
I can’t wait to play with it live.