ReadWriteWeb’s Bernard Lunn recently said it is “the Real-Time Web that will unseat Google” and linked to a presentation I created as an example of that real-time web. I’m flattered that he called it “the future of media,” but have to admit that it was the product of nothing more than old-fashioned editing.
I documented the landing of US Airways Flight 1549 on the Hudson River by pulling content from various social tools and editing it with the Storytlr lifestreaming platform.
One definition of the word edit is “to collect, prepare, and arrange materials for publication.”
- Traditional journalists collect comments, photos and video by conducting interviews and shooting still and moving images. I collected materials by searching Twitter, Flickr and YouTube.
- Traditional journalists prepare notes by transcribing them. They prepare digital pictures by adding meta information and saving them. They prepare video footage by transferring and logging it. I prepared content by importing it via RSS feeds.
- Traditional journalists arrange words, pictures and video with word processors, layout programs and video editing software. I arranged parts of the story by identifying those that helped create a story, and deleting the rest.
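That collect/prepare/arrange loop is simple enough to sketch in code. The snippet below is a minimal illustration, not Storytlr's actual mechanics: the feed XML is an invented stand-in for real Twitter/Flickr/YouTube RSS feeds, and the keyword filter is a crude proxy for the human judgment of deciding which items "helped create a story."

```python
# A rough sketch of the collect/prepare/arrange workflow.
# SAMPLE_RSS is an inline stand-in for live social-media RSS feeds.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

SAMPLE_RSS = """<rss version="2.0"><channel>
<item><title>Plane down in the Hudson!</title>
  <pubDate>Thu, 15 Jan 2009 20:45:00 GMT</pubDate></item>
<item><title>Lunch was great today</title>
  <pubDate>Thu, 15 Jan 2009 20:40:00 GMT</pubDate></item>
<item><title>Ferries pulling passengers from the Hudson</title>
  <pubDate>Thu, 15 Jan 2009 21:05:00 GMT</pubDate></item>
</channel></rss>"""

def collect(xml_text):
    """Collect: parse every <item> out of the feed."""
    root = ET.fromstring(xml_text)
    return [
        {"title": item.findtext("title"),
         "time": parsedate_to_datetime(item.findtext("pubDate"))}
        for item in root.iter("item")
    ]

def prepare_and_arrange(items, keyword):
    """Prepare: keep items that mention the story; arrange: order by time."""
    kept = [i for i in items if keyword.lower() in i["title"].lower()]
    return sorted(kept, key=lambda i: i["time"])

story = prepare_and_arrange(collect(SAMPLE_RSS), "hudson")
for entry in story:
    print(entry["time"].strftime("%H:%M"), entry["title"])
```

The "deleting the rest" step is just the keyword filter here; in practice it was a person reading each item and keeping what advanced the story.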
Storytlr can aggregate information in real-time, but the Hudson piece was created by pulling in not-so-real-time content, at least 8 hours after the Hudson landing took place. It was only in hindsight that I was able to look at the pieces and construct a rough story.
Lunn’s prediction might be right. Storytlr “could become an event-streaming mashup platform for media.” But that can only happen if the person using the tool can quickly identify interesting subjects and add their feeds to the live stream. That human will need to do that work in real-time.
Take a look at this small sample of tweets sent during the inauguration and tell me if you can identify/choose a story worth following …
Linked to from this post: