Author Archives: ayman

Interaction, movement, and dance at DIS 2010

Denmark. Århus. DIS 2010. I was particularly excited to be presenting the first detailed paper on Graffiti Dance (an art performance I co-organized last year with Renata Sheppard and Jürgen Scheible). Unfortunately, Naaman wasn’t there; it’s fun for the two of us to storm into a distant country…hilarity ensues. The conference itself was spectacular. With all-time-low acceptance rates (I believe full papers were at 15% and short papers somewhere north of 21%; 2008 had about a 34% acceptance rate), the talks covered everything from prototypes to rich qualitative studies. Aaron Houssian liveblogged all three days in case you need to catch up: [Day 1, Day 2, Day 3]. I spoke on Day 3, the morning after we built a nail-gun sculpture.

Now, with any good talk, you should present some new insight into your work. In this case, I decided not to present what’s in the published article, which covers some theory, the design process, and the system, concluding with an informal exit interview with the audience and the dancers. You should check out the video describing the performance on Vimeo. Instead, I presented the provenance of the idea: how three artists far apart from each other made this happen.

First, as was pointed out to me, nothing new was really created to make this installation happen. There were system components from other performances that we reused to make something completely unique. The computer scientist in me appreciated this deeply. Sometimes, in particular with art, we fight for novelty. Henri de Toulouse-Lautrec put it best:

In our time there are many artists who do something because it is new…they see their value and their justification in this newness. They are deceiving themselves…novelty is seldom the essential. This has to do with one thing only: making a subject better from its intrinsic nature.

Second, this piece takes a group painting and stencil-image session and maps the on-screen movement (created by the scurry of four mouse cursors and brushes scrambling to create an image) to movement in the audience (facilitated through dancers). Why not map the dancers to the drawn image, rather than to the movement of the cursors? It occurred to me (after a few discussions with Renata) that most approaches proxy movement through audio cues, drawn images, or time of day. Our performances think about connected action between people. Motion tied to motion is a much stronger link than an image tied to motion. Movement is not a proxy. This relates to a responsive dress Renata and I made last year: the lights in the dress respond to the dancers’ movements.

Light Dress

Finally, this performance carries my larger research agenda: how do we build for connected, synchronized action? For the embodiment that is this performance, that’s worth a longer journal paper.

[Note: once the ACM Digital Library hosts the proceedings, I’ll add a link to the published paper here]

iSticks iSteelpan iTaiko and iMan

Hey Naaman, did you get one of those shiny new iPad things? Ever since I saw them…I thought there was something there. Such a nice big screen. So many colors. It’s stunning. Makes me want to hit it with something.

Apple, or rather Mr. Steve, seems to dislike the idea of input devices aside from your hand. No pens. No stylus. Use it naturally. I think there’s something to that mantra, but then again, we do a lot as humans with tools and instruments. The X-Acto knife, a spatula, a paintbrush…all of these things let us manipulate and create things around us. Touching is great for interacting, but we tend to create with instruments.

So, when I thought to myself that I wanted to poke and hit an iPad, I had a problem. I had no iPad. As fortune would have it, I borrowed one for one month from a friend in exchange for a box of fancy chocolates.

The second issue arose when I remembered the touch screen is capacitive. Hit it all day long with a stick; nothing. It needs to carry a charge and feel like a relatively fatty finger. I immediately thought of modern conductive fabric: much less greasy than a Korean sausage, though not as tasty.

Armed with a metal dowel, conductive fabric, textured cotton, and some string, I showed up at Music Hackday in SF one Saturday morning and made some drumsticks. You can see how I built the sticks on Instructables:



iSticks: How to make a drumstick for an iPad.

Now…with sticks in hand, I built my second-ever iPhone app: a taiko drum, just to test the idea out. Not wanting to make another R8 drum kit on my borrowed iPad, I thought of a more esoteric instrument. A steel pan drum! Once I built the steel drum, I realized I didn’t know how to play it. So I made a tutorial that acts like a whack-a-mole game and teaches you how to play Twinkle Twinkle Little Star. The app won two awards at the San Francisco Music Hack Day.

Currently, iSteelPan and iTaiko are free in the App Store, which took some doing (initially Apple said I had some trademark infringements around the tutorial). Distribution of apps…someone should run a workshop on that. Oh right, Henriette Cramer is; the deadline’s in two days…good luck!

Conversation Shadows and Social Media

If you find yourself at ICWSM this week, say hi to us. I know I’ve been introduced to Naaman at least twice so far; I believe he still writes here. So far it’s been a nice mix, from standard social network analysis to S. Craig Watkins’s talk on Investigating What’s Social about Social Media (he’s from UT Austin’s Radio-Television-Film department and gives a great perspective on personal motivations and behaviors). Yahoo!’s Jake Hofman gave a great tutorial on large-scale social media analysis with Hadoop.

Tonight, I’ll be presenting my work on Conversational Shadows. In this work, we look at how people tweeted during the Inauguration and show some analytical methods for discovering what was important in the event, all based on the shadow their Twitter activity casts upon the referent event. Let me give a clear example.

Ever go to a movie? Notice how people chat with their friends through the previews? Once the lights go down and the movie starts, they stop chatting. Sure, they might say “this will be good” or “yay,” but the conversation stops. I began to wonder: shouldn’t this occur on Twitter while people are watching something on TV? Does the conversation slow down at the moment of onset, when the show starts?

During Obama’s Inauguration, we sampled about 600 tweets per minute from a Twitter push stream. The volume by minute varied insignificantly. However, “a conversation” on Twitter is exhibited via the @mention convention. The mention is highlighted; it calls for attention from the recipient. Our dataset averaged about 160 tweets per minute with an @ symbol. Curiously, there were 3 consecutive minutes where the number of @ symbols dropped significantly, to about 35 per minute. We still sampled about 600 tweets per minute; there was just a general loss of @s. People hushed their conversation. Perhaps even gasped. Here’s a graph to give you a better feel:

During those minutes where the @ symbols dropped, Obama’s hand hit the Lincoln Bible and the swearing-in took place. People were still shouting “Hurray!” but they weren’t calling to others via the @ symbol. Following this human-centered insight (as we found by studying video-watching behaviors), we can examine the @ symbols to find the moment of event onset. We call this a conversational shadow: the event has a clear interaction with the social behaviors found on the Twitter stream. We’ve found other shadows too; come by the poster session tonight to see them or, if you can’t attend, check out my paper.
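If you want to play with the idea yourself, here’s a minimal Python sketch. The per-minute counts are invented (loosely shaped like the numbers above: a ~160-per-minute baseline with a brief dip to ~35), not our actual dataset, and the threshold is a deliberately crude stand-in for a principled statistical test:

```python
import statistics

# Invented per-minute counts of tweets containing an @-mention,
# loosely shaped like the Inauguration numbers described above.
at_counts = [158, 163, 161, 157, 36, 34, 35, 159, 162, 160]

def shadow_minutes(counts, baseline, drop_ratio=0.5):
    """Flag minutes where @-mention volume falls well below the baseline."""
    return [i for i, c in enumerate(counts) if c < drop_ratio * baseline]

baseline = statistics.median(at_counts)  # the median is robust to the dip itself
print(shadow_minutes(at_counts, baseline))  # → [4, 5, 6]
```

On real data you’d also want to normalize by overall tweet volume (which, as noted above, barely moved), but even a crude cutoff finds the hush.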

HTTP and the 5th Beatle

Have you seen HTML5/CSS/JavaScript lately? It’s crazy. More than simple markup for a web page layout, we now design for interaction with the web. You can query the GPS for your location or even play Quake using Canvas. With all these amazing advancements, one thing does trouble me: HTTP/1.1 was last modified in 1999! Why is that? HTTP provides seven verbs, of which our browsers use two (GET/POST) or maybe three (if you are one of the few fancy people to use PUT). It’s asynchronous and transactional. You want a push update? Forget it. You want a stream to listen to? Not happening. Many of the things I build need synchronous interaction, which we can’t really get if we keep polling for information via GET requests. Is it OK just to fake it? After many a conversation with my colleagues Elizabeth and Prabhakar, who happened to be the panel co-chairs for ACM WWW 2010, I really wanted to start thinking about what the next generation of the web should be and how we can build it.
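To make the “faking it” concrete, here’s a toy Python model of the long-poll workaround: instead of the server answering every GET with “nothing yet,” it holds the client’s request open until an update arrives or a timeout fires. This simulates the control flow with an in-process queue; it isn’t real HTTP, and the class and names are made up for illustration:

```python
import queue
import threading

class LongPollChannel:
    """Toy model of HTTP long polling; not an actual HTTP server."""

    def __init__(self):
        self._updates = queue.Queue()

    def publish(self, update):
        # Server side: new data shows up and releases any held request.
        self._updates.put(update)

    def long_poll(self, timeout=5.0):
        # Client side: blocks like a held GET until data or timeout.
        try:
            return self._updates.get(timeout=timeout)
        except queue.Empty:
            return None  # timed out; the client simply re-issues the poll

channel = LongPollChannel()
threading.Timer(0.1, channel.publish, args=("breaking news",)).start()
print(channel.long_poll())  # blocks ~100 ms, then prints the update
```

The catch is that every held connection ties up server resources, which is the latency-versus-experience balance in a nutshell.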

So, last week at WWW 2010, I ran a panel entitled “What the Web Can’t Do” to address these issues. On the panel were: Adam Hupp (head engineer of the Facebook News Feed), Joe Gregorio (from Google and a long-time supporter of httplib2, REST, and web technologies), Ramesh Jain (UC Irvine and video guru), Seth Fitzsimmons (now at SimpleGeo, with a past life at Flickr and Fire Eagle), and Kevin Marks (at BT, formerly of Google and Technorati).

Adam quickly pointed out that HTTP long polling works best for most applications. WebSockets might solve the rest of the gaps, but one must consider the balance of latency and experience. The problem becomes how we handle notifications: you can’t wake someone’s browser. To follow up, Joe reminded us (well, me) that the web is more than HTTP; it’s a stack of technologies that together become the whole browser experience. Furthermore, he cited HTTP as being Turing complete (by the way, Turing began that definition with the phrase “Assuming the limitations of time and space are overcome…”, which is the very nature of the problem with a transactional protocol and synchronous interactions).

Ramesh led us from there into problems with interactive video online, and the fact that we have no way to represent events in the stack (short of some Semantic Web efforts, which haven’t really taken full force or, as Naaman said, are dead). Echoing this, Seth pointed out that the current shape of the stack leads to privacy problems. If I delete a tweet from Twitter, it’s still in the search indexes of Google and Bing. Maybe the crawler will come to understand it should be de-indexed and de-cached. Maybe!

Finally, Kevin Marks, who recorded the whole session using Qik [1, 2, 3] with a little help from his friend, pointed out the mess we are creating with no convergence. Pick an HTML5 browser and he’ll point out which videos won’t work due to codec support. What’s more, we traditionally handle these streaming connections through hidden Flash objects on the page; if we leave the plugin architecture behind in defense of open technologies, what will fill the gap?

At this point, I asked Joe Gregorio:

How come there is no FOLLOW or LISTEN verb in HTTP?

To which he responded:

You mean MONITOR. Actually it was in the original spec. It was abandoned because nobody could agree or figure out how it should work exactly. [NOTE: I’m paraphrasing here, cue the tape for the exact quote]

This floored me: an 8th verb! It’s like discovering there was a 5th Beatle or a 32nd flavor at Baskin-Robbins! There was a verb at one point that would have facilitated highly synchronous HTTP connections, but it never made it to production. This all spoke to what Kevin was getting at: the issue is a combination of politics and technology. Seth and others on the panel started to conclude that OPEN technologies follow CLOSED innovation. In the case of the web, we are 2–12 years behind what should happen.

This is why I can now run 1997’s Quake in Chrome. HTML5 and Canvas do what Flash was doing years ago. As much as I would love it, running Dark Castle in a web browser doesn’t help me very much. Furthermore, these technologies have never duplicated each other: HTML5 and Flash are mutually beneficial; they always have been. I’m not interested in building what I could have built years ago. If one is to build highly interactive web apps, then one has to sit closer to the metal, the chips, and the hardware. And while my panel was happy to tell me to wait because it will happen (perhaps MONITOR will make a comeback), where will the deep innovation occur while the web plays catch-up? We need to think about the whole ecosystem of the WWW and innovate it faster. Till then, I’m likely resigned to writing Objective-C iPhone apps against web services and other arbitrary sockets. Research shouldn’t ride shotgun; it should be behind the wheel.

How many characters do you tweet?

I’m fresh back from CHI 2010, unlike our friends from the EU who were left stranded in Atlanta (surviving off the good graces of GVU faculty, students, and staff; several trapped students received some extra travel assistance from SIGCHI…if you’re not a SIGCHI member, you should join). On Sunday, there was a great workshop on microblogging where I had great conversations with many people, one of whom was Michael Bernstein. We began to wonder: yes, there’s a 140-character limit, but how many characters do people actually type? Since I happened to have about 1.5 million tweets on hand and a little bit of R knowledge, I did a quick investigation at the coffee break.

This is really not the distribution either of us expected. Clearly the bulk of tweets are around 40 characters long. But it’s really curious to see the large set of tweets that are verbose. What’s more, the count of exactly-140-character tweets is high. I’d imagine the >135-character spike results from people trimming down verbose tweets to fit into the post size limit.
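My quick look was done in R, but the same exercise is easy to repeat in Python. The three tweets below are stand-ins for illustration, not the 1.5-million-tweet sample:

```python
from collections import Counter

# Stand-in tweets; substitute your own corpus here.
tweets = [
    "short one",
    "about forty characters of text right here",
    "x" * 140,  # a maxed-out tweet
]

# Exact character counts, then a histogram bucketed by tens
# (the way a length distribution is usually plotted).
length_counts = Counter(len(t) for t in tweets)
buckets = Counter((len(t) // 10) * 10 for t in tweets)
print(sorted(buckets.items()))  # → [(0, 1), (40, 1), (140, 1)]
```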

Are you a tweeter who walks the line, or are your tweets short and concise? I wonder if Naaman’s meformers tweet a different distribution than the informers.

Quake, Rumble, Tweet

Months ago, a 4.1 quake shook up San Francisco. Most people barely felt it, but it did make more of a rumble in the south bay, closer to the epicenter. Twitter became a flood of quake tweets. My follower/following friend @tomcoates sent out a tweet asking about the lack of geo-coded quake bots.

Startled by this, I began a little investigation of “how hard could it be.” By the end of the night, I had made the little Python bot @sfusgs. It received quite a few followers and made the TC. Here’s a quick walkthrough of how I put it all together:

From there I made @lausgs and @earthusgs (which is really popular in Chile). Naaman, it’d be pretty easy to filter the @earthusgs feed with a pipe to get a stream for the NYC area. You can see a map of all the world’s quakes on my USGS quake page.
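The regional bots mostly come down to one filtering step: keep the quakes that fall inside a bounding box, then format a tweet. Here’s a hedged Python sketch; the record fields and coordinates are simplified assumptions for illustration, not the actual USGS feed schema:

```python
# Approximate bounding box for the NYC area: (west, south, east, north).
NYC_BBOX = (-75.5, 39.5, -71.5, 42.0)

# Simplified stand-ins for parsed feed entries.
quakes = [
    {"mag": 4.1, "place": "San Francisco Bay Area, CA", "lon": -122.4, "lat": 37.8},
    {"mag": 2.3, "place": "near Ardsley, NY", "lon": -73.8, "lat": 41.0},
]

def in_bbox(quake, bbox):
    """True if the quake's coordinates fall inside the bounding box."""
    west, south, east, north = bbox
    return west <= quake["lon"] <= east and south <= quake["lat"] <= north

def to_tweet(quake):
    """Format one quake record as a short status update."""
    return f"M{quake['mag']:.1f} earthquake, {quake['place']}"

nyc_tweets = [to_tweet(q) for q in quakes if in_bbox(q, NYC_BBOX)]
print(nyc_tweets)  # → ['M2.3 earthquake, near Ardsley, NY']
```

A real bot would poll the live feed on a timer and post each new match, but the filter is the whole trick.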

Statler at CSCW 2010

Statler & Waldorf. Do you know these guys? They are described as “two ornery, disagreeable old men who…despite constantly complaining about the show and how terrible some acts were, would always be back the following week in the best seats in the house.” Looking at their snark in aggregate, one finds them to be particularly noisy when Fozzie Bear performed. Early last summer, I began to wonder if, nowadays, they would be tweeting snark during a show.

Fortunately, people have stepped up to fill the void and tweet while they watch TV. So last year I began investigating people tweeting during live events and performances in order to discover interesting moments, people’s sentiment, what people are talking about, and media segmentation. The Statler prototype embodies most of my findings to date:

Statler Screenshot

The prototype has two modes: Debate 2008 and Inauguration 2009. Based on a sample of tweets from the first debate of 2008, Statler automatically identified 9 topic segments, which align with C-SPAN’s editorial slices with an accuracy of 93%. You can also see the trending tweets in comparison to the top terms from the debate speakers (taken from the closed captioning). For the Inauguration, Statler uses 50,000+ tweets taken from the public timeline to give a more ‘real-time’ feel for how the crowd moves as the tweets, tweet structures, and terms change over the course of the swearing-in and the speech. Of note: Statler identified the moment of the swearing-in as the most interesting point during the 30-minute Inauguration video, and it also identified the flubbing of the oath as something conversationally interesting. The latter would not surface as a salient term using a conventional vector-space approach.
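To illustrate that last point with an invented toy (this is not Statler’s actual method, and every number below is made up): rank per-minute windows by raw term frequency and you get one answer; rank them by how conversational each window is (the share of @-reply tweets) and you get another:

```python
# Invented per-minute windows; counts are illustrative only.
windows = [
    {"minute": 0, "term_hits": 40, "replies": 30, "total": 200},
    {"minute": 1, "term_hits": 45, "replies": 110, "total": 210},  # oath-flub chatter
    {"minute": 2, "term_hits": 90, "replies": 28, "total": 205},   # big applause line
]

# A vector-space-style view: which window uses the top term the most?
by_terms = max(windows, key=lambda w: w["term_hits"])

# A conversational view: which window is the most reply-heavy?
by_conversation = max(windows, key=lambda w: w["replies"] / w["total"])

print(by_terms["minute"], by_conversation["minute"])  # → 2 1
```

The two views disagree about which minute matters, which is exactly why conversational structure is worth measuring alongside terms.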

Feel free to try out the demo, and be sure to say hi if you’re at CSCW. Look for me in the Horizons and Demo programs. If you can’t find me, look for Naaman, who has a good line of sight to spot people in the crowd.

Yes, Virginia….

@santa: why are the chichfilas all closed?

Ever wonder who you are talking to? Or who’s talking back? Recently I came across Mentionmap by Asterisq. It shows you a nice little viz of whom any Twitter user has mentioned. Check out aplusk or naaman. The viz itself is quite nice: it shows you people and hashtags, and the stroke denotes link degree.

Six months ago, I took a look at a mention map of some tweets I captured from the first presidential debate of 2008. Instead of examining in/out degree, I chose to take a look at eigenvector centrality (EV). If you don’t know what that is, think PageRank in a social graph (actually, PageRank is a variant of EV). In such a network, EV shows you the most salient node. For example, the yellow brick road led to the Emerald City. Presumably, other towns had more roads in and out of them (higher in/out degrees), but they all led to the Wizard’s city (which had a degree of 1). EV centrality would rank the Emerald City as the most salient city in all of Oz. Let’s take a look at the debate tweets:

In this graph, a node’s size is relative to its EV centrality: the larger, the more salient. Clearly there is a cluster of importance. Let’s take a closer look:

Obama, NewsHour and McCain take the top three spots.

The debaters and the moderator had the highest centrality, despite not having the highest degree. Barack was significantly more salient than Jim or John. When I examined just the degrees of this network, the main characters became less important and we picked up on chatty micro-bloggers.
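If you’d like to try this on your own mention graph, here’s a small power-iteration sketch of EV centrality. The five-node graph below is invented for illustration (it is not the debate data), and mentions in either direction are treated as undirected ties:

```python
# Invented mention graph: account -> list of accounts it @-mentioned.
graph = {
    "obama": ["newshour"],
    "mccain": ["newshour"],
    "newshour": ["obama"],
    "viewer1": ["obama", "mccain"],
    "viewer2": ["obama"],
}

def eigenvector_centrality(graph, iterations=100):
    # Symmetrize: a mention in either direction counts as a tie.
    neighbors = {n: set(graph[n]) for n in graph}
    for n, outs in graph.items():
        for m in outs:
            neighbors[m].add(n)
    score = {n: 1.0 for n in graph}
    for _ in range(iterations):
        # Include each node's own score (i.e., iterate A + I) so the
        # iteration settles even when the graph happens to be bipartite.
        new = {n: score[n] + sum(score[m] for m in neighbors[n]) for n in graph}
        top = max(new.values())
        score = {n: v / top for n, v in new.items()}
    return score

ranks = eigenvector_centrality(graph)
print(max(ranks, key=ranks.get))  # → obama
```

This toy graph is too small to show degree and EV centrality disagreeing the way the debate data did, but the machinery is the same.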

Think Santa knows about this? Is his list rank ordered? Does he cluster who’s been naughty? Should he be using NodeXL?

Reality Update: People Still Watch Live TV

Not just live TV: people still tune in to catch their regularly scheduled programming. I’m not sure how Naaman does it in NYC, but I like watching Top Chef when it airs. I’ve been looking at social interactive TV for some time now, thinking about and building new interfaces for live TV watching and social sharing. Just about anyone I talk to about this work, at least in San Francisco, says “Oh, I never watch TV.” If they do watch TV, they say “this is useless, everyone uses a DVR now.” I do <3 my TiVo, but really I’d rather tune in on time to watch my stories.

Hulu and DVR addicts should know about this recent Nielsen study. The study found that, on average, 1.15% of all TV watched is time-shifted via a DVR. Tiny tiny tiny. What’s even crazier is that 1.15% is up 21.1% from last year. This means that, in any given month, about 7 hours of your TV is time-shifted.

You can read the full report, but I’d rather you tell me: do you time-shift your TV? If so, how much?