What movie should I watch?
Where should I eat dinner?
Who should I go on a date with tonight?
To help users answer that last question, the mobile app Tinder provides a unique card-based interface1. Upon opening the app a user sees a deck of cards, and each card shows basic information about one potential match: photo, first name, age, and number of mutual friends. The cards are already filtered by location, and the user can tap into a card to find out more about her potential date. The user can also swipe the card left if she isn't interested, or swipe right if she is. If two users share mutual interest, they are both notified and can exchange messages through the app. Tinder is designed to help users make choices as quickly and easily as possible, and then it gets out of the way. (These are the iPhone screenshots that Tinder provided for the App Store.)
Technology is not only making it easier for users to find dates, but it is also providing them with instant access to nearly every movie ever made and telling them about more restaurants than they knew existed. Users face an increasing abundance of choices in a widening variety of contexts, and Tinder-like interfaces could be adapted to help users answer questions like those above2.
These questions — what to watch, where to eat, and who to go on a date with — all share common characteristics: there are many options, preferences are very personal, and they are important to short-term happiness. On most services, answers to these questions consist of text, images, and locations, and they are displayed in lists, grids, and maps, respectively:
When people are trying to choose what they want to do, however, these interfaces make the process more difficult than it needs to be. They provide minimal information about ten or more results at a time, and the user has to decide which warrant a closer look, or whether to clear them all and move to the next page. Users searching from a desktop browser can open promising results in different tabs, but users searching from mobile apps have to remember their favorites for the duration of the search. The experience only worsens if the user wants to tweak the original query, since these interfaces do not keep any 'state': with each revision the user is essentially starting from scratch.
Consider, for example, a user searching for a movie to watch. iTunes offers grids and lists based on what’s popular, regardless of the user’s mood or interests. While Netflix offers some personalization, it’s still structured like a brick-and-mortar store; both the desktop browser and mobile app interfaces organize movies by genre or category on horizontal shelves.
Both iTunes and Netflix require that users do work to get more information, just as brick-and-mortar customers have to physically remove the movie from the shelf, turn it over, and read the back. Furthermore, none of these interfaces provide a good way for users to keep track of ‘maybe’ candidates while they look at more options. In the store, the user can physically carry around a handful of movies, but will either need to put them back later or ask an employee to do it for her.
Netflix does let each user keep a List of movies to watch, but this feature has evolved from the DVD rental queue, and feels more like a “list of movies to watch eventually” and less like “a way to keep track of movies to watch now.” The latter implies movies for a certain day, mood, and audience, and consequently requires a more selective process. Most users resort to the inelegant practice of opening each potential movie for ‘right now’ in a new Netflix tab, but this requires a delicate hover-move-right-click mouse maneuver and makes it easy to accidentally start playing the movie instead. The custom iTunes Store browser is even worse: it doesn’t let users open multiple tabs, and it buries its Wish List feature deep in the UI.
As another example, consider a user who wants to go to a new restaurant for dinner. Services such as Foursquare and Yelp help users make these choices, and while there is some variation between their interfaces, results are generally provided in both list and map formats. In a desktop browser, there’s enough space to show the two formats side-by-side, but on mobile users must toggle between the different views (on Foursquare, scrolling the list minimizes the map, and tapping the map minimizes the list; Yelp offers a List/Map button in the upper right).
These interfaces are designed to show good options of a certain type in a certain area, but they make it difficult for the user to make informed choices, especially on mobile. While the desktop browser makes it easy to see the basic information about each venue along with where it’s located, on mobile the separate list and map views force the user to either tap through to the venue detail screen or cross-reference between the two screens in order to see where anything is.
In addition, neither interface helps the user narrow down their possibilities to a single choice, and instead they treat each new query as a brand new search. For example, imagine a user looking for Japanese food on the way home from work. First she scrolls through all of the results in her initial search area near her office, and takes mental note of a few promising-looking restaurants. Then she drags the map closer to her apartment to see more results, but if the two map areas overlap and the results near her office happen to rank highly, then she’ll have to look at those same results a second time. Even worse, if she’s using the map view in the mobile app rather than the list view, she’ll have to explicitly tap each pin again to see if it’s one that she’s already looked at. Similarly, if the user decides halfway through the search that she wants ramen rather than sushi, she’ll have to look through many of the same results yet again. The entire process requires a lot of tapping, a lot of remembering, and a lot of duplicated effort. If these services maintained state for each new decision and kept track of what the user had seen, then the user would only need to look at each option once.
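To make this concrete, here is a rough Python sketch (the option data and names are invented, not any service's real API) of the per-decision state such a service could keep: a session that remembers which options the user has already judged and which she has marked as maybes, so that a refined query never shows her the same option twice.

```python
class DecisionSession:
    """Tracks one 'what should I watch/eat/do right now?' search across query revisions."""

    def __init__(self):
        self.seen = set()   # ids of options the user has already judged
        self.maybes = []    # options the user swiped right on, in order

    def next_cards(self, results):
        """Filter a fresh batch of results down to options the user hasn't judged yet."""
        return [option for option in results if option["id"] not in self.seen]

    def swipe(self, option, interested):
        """Record a left (no) or right (maybe) swipe."""
        self.seen.add(option["id"])
        if interested:
            self.maybes.append(option)


session = DecisionSession()

# First query: sushi near the office.
sushi = [{"id": 1, "name": "Sushi A"}, {"id": 2, "name": "Sushi B"}]
for option in session.next_cards(sushi):
    session.swipe(option, interested=(option["id"] == 2))

# Revised query: the overlapping result (id 2) is not shown again.
ramen = [{"id": 2, "name": "Sushi B"}, {"id": 3, "name": "Ramen C"}]
print([o["name"] for o in session.next_cards(ramen)])  # ['Ramen C']
```

The important part is that the `seen` and `maybes` sets belong to the decision, not to the individual query, so tweaking the search doesn't throw away the user's earlier work.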
Rather than the list, grid, and map interfaces that are currently common, services could repurpose Tinder’s card-based interface to help users make better choices faster. To understand how this would work, it’s helpful to go through the flow step by step:
Technology has made it easier for users to discover an increasing abundance of things they might enjoy, but it has not yet offered comparably improved tools for wading through this deluge of options. iTunes, Netflix, Foursquare, and Yelp are not the only services that might benefit from Tinder’s card-based interface, and users face similar choices when shopping for clothing, searching for an apartment, and choosing a book to read. By using cards to show enough information about each option for users to make a maybe/no decision, and by keeping track of which options users have seen and which they are considering, services can help users make better choices faster, and ultimately make users happier.
Tinder was certainly not the first app to ask users to rate a succession of items. The earliest example I can think of is the Hot or Not of a decade ago, and Pandora also offers thumbs-up or thumbs-down buttons for giving feedback on songs. These are slightly different, however, and don’t use cards in the interface. Other apps in other contexts have more recently used Tinder-like cards, such as Branch’s link-sharing service Potluck, but that was focused around content consumption rather than making choices.↩
Paul Adams recently wrote about the trend towards cards as general interactive containers for all types of content, and a startup called Wildcard wants to replace the mobile Internet with a library of cards designed for those devices. While these are compelling visions for the future, they are outside the scope of this post. Cards can greatly improve the user experience for certain types of choices regardless of their prevalence in other contexts.↩
In high school, my friends and I would spend nearly as long at Blockbuster deciding what to watch as we would spend watching the selected movie itself. Choices like these are often made collectively by groups of friends, and per-search adaptation could be especially useful in those circumstances. For example, perhaps a user loves sushi, but tonight she is eating with a friend who is vegan. It might be useful for the app to ask the user who she was with, or maybe the app could even let multiple users collaborate on the same search at the same time on their own devices with their own decks of cards… but these features lie too far beyond a minimum viable product.↩
Yet people have communication needs that Twitter does not yet satisfy, and these missing features present it with both strategic risks and opportunities. Specifically, shortcomings in Twitter’s products for interpersonal, private, and extended conversation are forcing users to go elsewhere.
And go elsewhere they have! A horde of upstart messaging apps has emerged, including WhatsApp, Kik, WeChat, Line, and KakaoTalk, each with at least 100 million users. While their features differ slightly, in general users can exchange text, photo, audio, video, and sticker messages privately, both with other individuals and with groups, through an SMS- or chat-like interface. While Twitter's existing @-reply and DM features have recently improved to offer a more comparable experience, Twitter still has major shortcomings that prevent it from competing effectively.
Twitter’s existing @-replies (also known as @-mentions) are great for short, public conversations involving a small handful of participants. It has recently added continuity to conversations by drawing vertical blue lines between @-replies in the home feed and by showing all threaded @-replies on tweet detail pages. While these changes make it easier for users to observe a conversation as it happens, the best conversations still overwhelm the medium, and as a result participants move those conversations to some other service. Obstacles presented by Twitter’s current @-replies include:
Twitter also offers DMs as an alternative to @-replies, which allow pairs of users to exchange private, 140-character messages. For years DMs were hidden in the official website and apps, but Twitter, to its credit, has recently tried to resurrect its private messaging product by moving it to a prominent place in the UI and adding support for photos.
One weird thing about DMs-as-chat is that a lot of the ones I get are now like @MagicRecs. In other words, not chat at all.
— MG Siegler (@parislemon) December 10, 2013
The other weird thing, of course, is that DMs have long been the stepchild locked in the basement that the parent wished would go away…
— MG Siegler (@parislemon) December 11, 2013
Can you revive such a product to become a core feature after years upon years of not only neglect, but contempt?
— MG Siegler (@parislemon) December 11, 2013
Yet, as TechCrunch columnist MG Siegler notes, there are lingering user perceptions that the company has yet to overcome. Twitter has trained users to think of DMs as a last-resort communication method, used only when the matter is urgent (traditionally, DMs would trigger in-app, email, and SMS notifications) or when the sender has no other way of contacting the recipient. Even with the recent changes, Twitter is still thinking of DMs as email, merging the email envelope icon with the messaging chat bubble icon in its new iOS tab bar:
The upstart messaging apps are successful, however, precisely because they are not email. Like email, DMs must be ‘composed’ because it takes extra effort to break up a thought into 140-character snippets, while the messaging apps try to replicate the casual, instantaneous nature of face-to-face interaction. Their typing notifications, for example, create a sense of mutual presence and attention, while stickers convey emotion when body language cannot.
More importantly, DMs are still dependent on Twitter’s existing follow graph: a user must follow another user in order to receive DMs from him. Twitter can either be a place where you follow the people you’re interested in, or a place where you follow the people you want to talk with, but it can’t be both. Furthermore, messaging is about conversation, so there’s no point in a user being able to contact someone who can’t respond. Twitter has experimented with giving users the option to relax this restriction, but it reverted the change several weeks later, suggesting that another solution is needed. Until Twitter resolves this issue, the subtle social friction and persistent fear of embarrassment will drive users to other messaging products.
While these problems with Twitter’s conversation products are solvable, they are not simple, and Twitter has focused on other user needs. This is understandable, especially because messaging has historically been difficult to monetize, because messaging is a commodity that is not central to Twitter’s core broadcasting product, and because richer messaging features could overcomplicate Twitter’s already-confusing product. The upstart messaging apps, however, pose a significant threat of disruptive innovation, as described by Clayton Christensen in his book The Innovator’s Dilemma. If you’re unfamiliar, watch the first ten minutes of this talk, but skip the two minutes of intro music!
As Christensen might predict, Twitter has seemed happy to let the other messaging companies relieve it of the least-profitable part of its product, so that it could better focus on what was important to the business. Meanwhile, Snapchat pioneered ephemeral social media, Line generated substantial revenue with virtual stickers (and others promptly copied them), and WhatsApp has shown that people will actually pay money for social apps ($0.99/year adds up across 400 million users). More problematically for Twitter, these apps are expanding upmarket into other aspects of social networking: Snapchat recently launched a broadcast feature called Stories, Kik is pushing a platform that enables third parties to develop social apps, and WeChat offers a Twitter-like Moments feature.
As the other messaging apps move upmarket, they will compete directly with Twitter’s profitable social and information broadcasting products; they, too, will try to become the global town square. In order to defend its position, Twitter must move its own messaging products back downmarket. While Twitter could continue to incrementally iterate on DMs until it reached superficial feature parity with the competitors, this would not fix the underlying shortcomings described above.
More drastic changes are needed to revive DMs: Twitter should transition DMs into a separate messenger application focused on conversation1. (There are also many improvements that Twitter could make to @-replies, but those are outside the scope of this post.) People go to Twitter for two main reasons: to find out "what's happening," and to talk to other people. The timeline in the current app satisfies the first need, and the conversations in this new app would satisfy the second, much as Facebook has separated its Messenger iPhone app from its flagship app. How should DMs work, and what should this new app look like?
The above wireframes show what Twitter’s DM app might look like if it were modeled after its successful messaging competitors. On the left, the Contacts tab shows the people with whom the logged-in user can exchange messages, and on the right, the conversation view in the Messages tab shows a standard group messaging interface. The only difference of note is that this new DM app shows the user how many messages she has exchanged with each friend; Twitter has been successful in lightly gamifying its products through the prominent placement of tweet, follow, and follower counts on user profiles, and these message counts are similarly intended to encourage interaction between users.
Twitter can, and will need to, do better than this standard interface in order to create a truly competitive and compelling messaging product. Fortunately, there are several aspects of its current products that it can leverage to create a differentiated user experience unique to its new messaging app. Specifically, Twitter should fork its follow graph, seed conversations with tweets, create serendipity through conversation discoverability, and use Twitter as a platform for growth.
Twitter's social follow graph is both unique and valuable, but, for the reasons described above, Twitter should not rely on these relationships for messaging permissions in this new DM app. Rather than overburden its existing interest-based follow graph, Twitter should create a separate message graph. Users would not want to receive DMs from just anyone, so some sort of permission model is necessary. At the same time, there's little point in sending messages to someone who does not have permission to respond. The new message graph should therefore be symmetric: users would add other users as contacts, and if the recipient approved the request, the two could then exchange messages. It's also important that each user's list of approved contacts be private, unlike public follows, so that users don't feel pressured into receiving messages they don't want.
While traditionally a new social graph required substantial manual effort from users, this is no longer the case. First, Twitter's message graph could be bootstrapped from users' address books2, similarly to how other messaging apps create their users' contact lists. Second, the message graph could use the existing follow graph as a starting point: users would automatically be able to exchange messages with everyone they follow who follows them back, and would automatically have a pending 'friend' request to everyone they follow who does not follow them back. After this one-time import, however, the message and follow graphs would diverge.
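As a rough sketch (Python over made-up follower sets, not Twitter's actual API), this one-time import reduces to simple set operations: mutual follows become contacts, and one-way follows become pending requests.

```python
def bootstrap_message_graph(following, followers):
    """Derive the initial symmetric message graph from a user's asymmetric follow graph."""
    contacts = following & followers          # mutual follows can message each other immediately
    pending_requests = following - followers  # one-way follows become outstanding contact requests
    return contacts, pending_requests


following = {"alice", "bob", "carol"}
followers = {"bob", "carol", "dave"}
contacts, pending = bootstrap_message_graph(following, followers)
print(sorted(contacts))  # ['bob', 'carol']
print(sorted(pending))   # ['alice']
```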
Because users would have the same account on both Twitter and this DM app, the follow graph still gives users of the DM app unique social opportunities. Because the current follow graph is asymmetric, users can follow whomever they find interesting, regardless of whether that interest is reciprocated. As a result Twitter’s network has become aspirational: users follow the people they wish they knew. Twitter allows users to stumble across someone new, learn about their interests, and interact casually through @-replies. It’s natural for users to then want to strengthen these relationships further through lightweight, synchronous, private conversations. This new DM app would provide a social space for these aspirational interactions3, which is something that the other messaging apps are not able to offer.
Finally, as in the background of this wireframe, the DM app could leverage Twitter to add additional social richness to the Contacts tab by showing statistics about interactions that happen on Twitter itself. This list could even be auto-sorted by the amount and recency of this user’s DMs, @-replies, and tweet favorites with each contact, since a user is more likely to want to talk with friends with whom she interacts regularly on Twitter.
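One way this auto-sorting might work is sketched below; the interaction weights and the thirty-day half-life are purely illustrative assumptions, not anything Twitter has specified.

```python
import time


def contact_score(interactions, now=None, half_life_days=30.0):
    """Rank a contact by interaction volume, decayed by recency.

    `interactions` is a list of (timestamp, kind) pairs, where kind is one of
    'dm', 'reply', or 'favorite'. The weights and half-life are illustrative only.
    """
    now = now or time.time()
    weights = {"dm": 3.0, "reply": 2.0, "favorite": 1.0}
    score = 0.0
    for timestamp, kind in interactions:
        age_days = (now - timestamp) / 86400.0
        decay = 0.5 ** (age_days / half_life_days)  # halve the weight every 30 days
        score += weights.get(kind, 0.0) * decay
    return score


# Contacts the user talks with often and recently float to the top of the list.
contacts = {"bob": [(time.time() - 86400, "dm"), (time.time() - 3600, "reply")],
            "carol": [(time.time() - 90 * 86400, "favorite")]}
ranked = sorted(contacts, key=lambda c: contact_score(contacts[c]), reverse=True)
print(ranked)  # ['bob', 'carol']
```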
Because the new DM app would integrate directly with Twitter, it could automatically display tweets from a conversation’s participants in the thread itself. This is a simple feature, but one that other messaging apps also could not offer, and it would give users additional context in a variety of conversations. For large groups that outgrew the constraints of @-replies, these embedded tweets would make it easy for users to see what had already been said. For small groups, these embedded tweets would enrich the conversation by giving a sense of what other participants were thinking about. Embedded tweets wouldn’t trigger notifications like normal messages, and they could be hidden either individually or for an entire conversation. They would make it easier to jump between Twitter’s conversation spaces and would facilitate discussion of specific tweets.
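Here is a small sketch of how a client might interleave these embedded tweets with the private messages (plain Python over invented records, not a real Twitter API): items are merged by timestamp, embedded tweets are flagged so they never trigger notifications, and individual tweets can be hidden by id.

```python
def build_thread(messages, tweets, hidden_tweet_ids=frozenset()):
    """Merge private messages and participants' public tweets into one timeline.

    Each item is a dict with at least 'ts' (timestamp) and 'text'. Embedded
    tweets are marked notify=False so they never trigger push notifications.
    """
    items = [dict(m, kind="message", notify=True) for m in messages]
    items += [dict(t, kind="tweet", notify=False)
              for t in tweets if t["id"] not in hidden_tweet_ids]
    return sorted(items, key=lambda item: item["ts"])


messages = [{"ts": 10, "text": "Did you see Dan's tweet?"}]
tweets = [{"id": 7, "ts": 8, "text": "Shipping the new thing today!"}]
for item in build_thread(messages, tweets):
    print(item["kind"], item["text"])
```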
At a glance, it might seem like the DM app would not only compete with the other messaging apps, but would also cannibalize Twitter’s existing @-replies: if @-replies appear in the DM app, why bother with both? However, because of the aforementioned obstacles to conversation presented by @-replies, the DM app would, to use Christensen’s language, “compete with non-consumption.” It would enable conversations that otherwise wouldn’t happen on Twitter at all, and the public nature of @-replies and private nature of DMs sufficiently differentiates the products.
Conversations in the new DM app should remain private so as to provide a social space comparable to the other messaging apps. Users don't always know which of their friends would be interested in a particular conversation, however, and sometimes they want a social space with more serendipity than conversations that are both private and secret can offer.
Discoverable DM conversations similar to those on Dashdash would meet this need, since the contacts of the participants could find and join a conversation if it looked interesting. Discoverability would be enabled on a per-conversation basis, and new participants wouldn’t be able to see messages sent before they had joined.
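The rule is simple enough to state in code; this is only an illustrative sketch with invented names, not a real data model: a conversation is discoverable by a user if at least one participant is her contact, and a joiner sees nothing that was said before she arrived.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    participants: set
    discoverable: bool = False
    messages: list = field(default_factory=list)   # (sender, text) tuples
    joined_at: dict = field(default_factory=dict)  # user -> index of first visible message

    def visible_to(self, user, contacts):
        """Can this user discover the conversation from her contact list?"""
        return user in self.participants or (
            self.discoverable and bool(self.participants & contacts))

    def join(self, user):
        self.participants.add(user)
        self.joined_at[user] = len(self.messages)  # nothing earlier is shown

    def history_for(self, user):
        return self.messages[self.joined_at.get(user, 0):]


convo = Conversation(participants={"alice", "bob"}, discoverable=True)
convo.messages.append(("alice", "before carol joined"))
convo.join("carol")
convo.messages.append(("bob", "after carol joined"))
print(convo.history_for("carol"))  # [('bob', 'after carol joined')]
```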
If a DM conversation had been set to be discoverable, perhaps using an interface like the one to the right, then participants might want to explicitly encourage their friends to join the conversation. From discoverable rooms, participants would be able to tweet a link to the DM conversation, and followers who clicked it would seamlessly see the conversation in a new browser window or an in-timeline Twitter card.
There are many ways in which Twitter could expose existing users to its new DM app, ranging from aggressive emails and UI overlays to less intrusive promoted tweets in the timeline, and these provide obvious strategies for rapid growth. The earliest adopters of the new DM app still need to have a good experience, however, before their friends have transitioned from the legacy DMs to the new DMs.
Foursquare, Instagram, and other social services solved this chicken-and-egg problem and grew quickly by making it easy for users to tweet their content. While the new DM app would partially adopt this strategy with the aforementioned tweetable links to conversations, it could also simply degrade gracefully to legacy DMs. If a new DM user wanted to send a message to a legacy DM user, then the sender’s messages would be received as a legacy DM (assuming the recipient already follows the sender on Twitter). Perhaps these messages could be preceded by an automated header from Twitter that invited the recipient to download the new DM app. This backwards compatibility would both accelerate the growth of the DM app and make it immediately useful for new users.
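The backwards-compatible delivery logic amounts to a single branch. Here is a hypothetical sketch (the data structures are stand-ins, not real Twitter endpoints):

```python
def deliver_dm(sender, recipient, text, new_app_users, follows):
    """Route a message through the new DM app when possible, else fall back to a legacy DM.

    `new_app_users` is the set of users on the new app; `follows` maps a user to
    the set of accounts she follows (legacy DMs require that the recipient follow
    the sender).
    """
    if recipient in new_app_users:
        return ("new_dm", text)
    if sender in follows.get(recipient, set()):
        invite = "[Sent from the new Twitter DM app] "
        return ("legacy_dm", invite + text)
    return ("undeliverable", None)


print(deliver_dm("alice", "bob", "hi!", new_app_users={"alice"},
                 follows={"bob": {"alice"}}))  # falls back to a legacy DM with an invite header
```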
WhatsApp, Kik, WeChat, Line, and KakaoTalk are growing quickly despite their focus on the generally-unappealing messaging market. As they’ve grown, they have begun to move upmarket into the status broadcasting features core to Twitter, and it’s important to Twitter’s long-term success that it defends and develops its products accordingly. By forking its follow graph, seeding conversations with tweets, creating serendipity through conversation discoverability, and using its core product as a platform for growth, Twitter could create the compelling and viable messaging product it needs.
I strongly considered giving this new application a separate name and brand (perhaps Twig?), but because the most significant advantages of the new DM app would come from integration with the rest of Twitter, this would just confuse users further.
I also considered whether this new app could be built by a third party, rather than by Twitter itself. While all of the functionality I describe needs only the publicly-available APIs, Twitter has discouraged applications that allow consumers to engage with the core Twitter experience; this new messaging app falls squarely in that upper-right quadrant. An enterprising third party might be able to get Twitter to grant more than the default 100,000 API tokens, but it would be imperative that the short- and long-term incentives of both companies were aligned. For the purposes of this post, it’s simpler to assume that Twitter would be as tied to this app as it is to, say, Vine.↩
Users allow these services to have access to the names, emails, and phone numbers stored in their address books, and then the service can match those identifiers across users to quickly and automatically build a reasonably-accurate social graph. This results in connections that are similar to Facebook’s, but without the years of painstakingly sent and accepted Friend requests.↩
Dashdash, the messaging service I’ve been working on, provides exactly this sort of social space. Several of my private beta users had only previously known each other through Twitter, but were able to converse freely on Dashdash without needing to exchange additional contact information. While this was not one of the primary features I had in mind when designing the product, it is one of the things that my users have enjoyed the most.↩
Google Glass has sparked a passionate debate about privacy and the role that technology should play in our lives. This public resistance is certainly not futile: the success or failure of Glass depends on whether it can alleviate these concerns and/or provide enough value to offset them. The debate has focused on the camera and, to a lesser extent, the microphone; it’s difficult for people near the Glass wearer to know if they’re being recorded. Frog Design researcher Jan Chipchase describes a possible consequence of this uncertainty:
It will be interesting to see whether Glass is perceived as a threatening object and thus may force others in proximity of a wearer to maintain a hyper-awareness of the wearer and their own actions in places — whereas today, they are currently able to relax. This would be, in effect, like a blanket tax on the collective attention of society1.
Due to this attention tax, people might prefer that those around them not be wearing Glass. Many of the advertised features of Glass are utility apps (such as weather forecasts, email previews, and driving directions with traffic reports), but non-wearers who prefer not to be recorded are indifferent, at best, to the wearer’s convenience. The keys to the success of Glass, then, are applications that are useful not only to the wearer, but also to other people without Glass2. Specifically, non-wearers could derive value from Glass applications for photo sharing, optionally synchronous messaging, and telepresence.
While the camera is certainly the most troublesome aspect of Glass3, it also has the potential to provide the most value to non-wearers. The public reaction to Glass is reminiscent of the extreme distaste for amateur photographers that arose in the late nineteenth century. How did society transition from one in which these photographers were derided for privacy violations, to one in which everyone has a camera in their pocket?
Cameras became ubiquitous because people value the images they produce. It was critical both that people enjoyed using cameras to create the images, and that other people enjoyed viewing the images later. Imagine, for a moment, a birthday party in the early days of point-and-shoot cameras: everyone wants to have a friend with a camera to document the celebration4, and as a result everyone wants to buy a camera and be that friend.
Improvements in both capture technology (Daguerreotypes, film, digital sensors, etc.) and hardware form factor (medium-format cameras, SLRs, mobile phones, etc.) have made photography faster and easier. The Glass camera is available at a moment's notice with a touch or voice command5, which makes it easier for the wearer to document social events, so non-wearers can enjoy more photos than would otherwise have been taken. Wearing is sharing is caring.
These behaviors outline a possible social network focused on image sharing, but more can be done with Glass to both benefit those nearby and differentiate from existing services such as Facebook, Twitter, and Google+. Google+, specifically, is primarily used as a platform for public and community broadcasting, and feels like it was designed to resemble Facebook so that it could compete with Facebook. The service that replaces Facebook, however, will not look like Facebook, and Glass will suffer if integrated too tightly with Google+’s broken social graph.
Google should not force Glass to be yet-another stage for performative broadcasting, but should rather develop it as a novel device for directed, private communication. The Glass hardware is uniquely suited for having conversations in the moment with people far away using text, image, audio, and video messages:
Google knows this, as was apparent in the early promotional video above, yet the potential is much broader than simple video calls and photo broadcasts to social networks. Dustin Curtis points out that applications have recently begun to use images for communication, rather than artistic expression on Instagram or documentation on Facebook. A single photo can carry a large amount of information, and can tell the viewer “where someone is, what time it is, who they are with, and much more.” The popularity of Snapchat’s ephemeral, exploding content illustrates the communicative power of images, and the first-person perspective inherent to Glass6, as well as the low-friction capture process, are perfect for this use case. The friends and family of Glass wearers can enjoy this unique and personal communicative content even if they don’t have Glass themselves7.
The opportunity for new communication tools is larger, however, than incrementally-improved versions of apps we have today. The Glass hardware is well-suited for optionally synchronous conversations with individuals and groups who are far away. Optionally synchronous conversations are somewhere between the synchronicity of a phone call and the asynchronicity of email, and a good example is desktop instant messaging on AIM or Gchat. These services are great for talking with friends while watching television or with coworkers whom you don’t want to interrupt, and recipients can respond at their leisure with only an expectation of timeliness.
Glass will allow its wearers to integrate these conversations into many new contexts, and these conversations will benefit both wearers and non-wearers. The activities featured in the ‘How It Feels’ promotional video look too engrossing for the wearers to be engaged in a fully synchronous conversation with someone somewhere else, and optionally synchronous conversations are ideal for sharing these experiences with people elsewhere while staying present in the moment. Current mobile devices get cluttered with notifications, but the Glass wearer can engage in a conversation without seeing notifications from other apps, just as a landline telephone user can make a call without checking her voicemail.
There is also an interesting use case for synchronous conversations in situations where two or more Glass wearers are together in one place and have mutual friends or coworkers somewhere else. Current video conferencing tools require that users stay in front of a stationary screen with a camera (Google Hangouts, Skype), hold a mobile device in their hands (Apple Facetime), or sit in a fancy telepresence room like this:
As an alternative, imagine two Glass wearers are seated across from each other in a restaurant, and a mutual friend in a faraway city would like to join them from her computer. The Glass wearers can maintain eye contact, and the third friend can see each through the other’s camera. If something else captures their interest, then they’ll naturally turn toward it, and the third friend will automatically follow their gaze. The video from the third friend’s camera would also always be in the field of view of the Glass wearers, rather than off to the side on a screen. Furthermore, audio quality would improve because the Glass microphone is built-in, rather than resting on a conference table. Glass will enable casual, go-anywhere telepresence that helps both wearers and non-wearers feel like they’re with friends who are far away.
Social applications for Glass that focus on photo sharing, optionally synchronous messaging, and telepresence can all benefit non-wearers and balance concerns about privacy and the attention tax. The near-term8 success of Glass hinges on whether other people want their friends to be wearing it, regardless of whether or not they have Glass themselves. Concerns about privacy and attention can be overcome by social apps that bring people closer together, and Glass has an exciting future as a communication device9.
Other emerging technologies will levy this same tax, and there's an upper limit to how much attention can be taxed in total. Security systems, autonomous delivery drones, and even gaming consoles will all have increasingly networked cameras, so perhaps we'll just assume everything is always being recorded. While that future certainly seems bleak, at least Glass wouldn't have an additional adverse effect.↩
This requisite benefit to non-wearers is distinct from network effects between Glass wearers: while there will certainly be applications that cause Glass to be more useful as more people have it (as with fax machines and Facebook), the network will be unable to reach critical mass in the face of negative public opinion.↩
Another concern is the prominence of the display: it is both on your own face, and in the face of everyone else. People are very adept, however, at knowing where other people are looking, and I suspect that a glance up-and-to-the-right at Glass will be just as obvious, and just as rude, as a glance down at an iPhone. Even more conspicuously, Glass wearers will need to touch or speak to the device to use it, which will be obvious during face-to-face conversations.↩
Some people, of course, don't enjoy looking at photos and don't like to have their picture taken. While they would be unmoved by Glass as an evolution of previous cameras, they might still be interested in the potential of Glass as a communication device.↩
While there are certain stigmas around talking to our devices in public, these voice interfaces have the benefit of being transparent: if I say “ok glass, take a picture,” then everyone around me knows that I just took a picture. Glass will make those adjacent to the wearer more comfortable by requiring that users verbally broadcast their usage of the device. In addition, voice interfaces can be used by others. This is often portrayed as a humorous weakness of Glass, since a non-wearer might say, “ok glass, google for photos of [something obscene],” but social norms can easily regulate such behavior.↩
Given the popularity of the selfie, how long will it be until the press conference announcing Google Hand Mirror?↩
I have plans to build an app along these lines with the Mirror API (as opposed to the undocumented and unsupported "sideloading" Android API). I'll post more details when I have something working!↩
There are many questions about the long-term consequences of Glass:
I’ve been invited to the Glass Explorers program, and I’m scheduled to pick up my device tomorrow. My three #ifihadglass applications are here, here, and here. I’ve thought a lot about whether the $1,500 price tag is worth it, but Glass represents a future worth exploring, and I’m excited about the opportunities. As @amybijamy said on Twitter, “It’s like getting into Hogwarts!”↩
People regularly gather into groups that are too large for a single conversation, so instead we break up into smaller clusters. We can look around, see which people are having which conversations, and join whichever conversations we choose. The things we say in these real-life conversations are private to those there at the time; as people come and go, those remaining can change what they talk about and how they act.
The dynamics of these real-world social spaces emerge from the fundamental experience of being human: how our voices carry through the air, how our bodies occupy physical space, how we recognize faces, and so on. These interactions feel rich and natural because they are rich and natural.
Dashdash emulates these real-world social spaces. It’s in private beta and under active development, but right now it’s an instant messaging service built on top of the XMPP/Jabber standard. This means Dashdash works with existing chat clients1 that people already use across a variety of platforms. These apps all support multiple services, so users can add their Dashdash account alongside their Gchat, AIM, and other accounts.
Dashdash creates two new groups in a user's contact list (call her Alice) that make up her shared social space. Similarly to other messaging services, the Dashdash Contacts group shows Alice the people she knows who are currently online and available to chat2; it's like a list of everyone who is at the cocktail party. The Dashdash Conversations group shows the conversations that Alice can see, including the ones that she's participating in; it's like a list of the clusters of people talking at that party.
Alice can join a conversation by selecting it in her contact/buddy list and then sending it a message. She can’t see what was said before she arrived, and she won’t know what will be said after she leaves, just as if she had walked up to a conversation in real life. It’s not scalable for everyone to see every conversation, so Alice can only see the Dashdash conversations that her friends are in. Because friends can seamlessly join each two-person conversation, groups can form serendipitously based on who’s online and what they’re talking about. This makes Dashdash different from other services, which offer social spaces that are either too public or too private:
Some services, such as Facebook Chat3 and WhatsApp, only let friends join a conversation if they are explicitly invited, but it’s too much work for participants to know who to invite, when to invite them, and which conversations will interest them. Others, such as Facebook and Twitter4, blur the edges and make the conversation public to the participants’ entire social graphs, forcing them to perform for a large and unseen audience. Still others, such as IRC and Hipchat, have static, predefined 'rooms' that are easy to join but make it difficult to have side conversations. There are many, many other services5, yet none mirror how conversations actually work in social spaces in the real world.
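To state Dashdash's rule precisely, here is a sketch (not the actual implementation) of how the two contact-list groups could be computed: Alice's Dashdash Contacts are her contacts who are currently online, and her Dashdash Conversations are the conversations that include at least one of her contacts.

```python
def roster_groups(alice_contacts, online_users, conversations):
    """Compute the two groups Dashdash adds to Alice's contact list.

    `conversations` maps a conversation id to its set of participants. A
    conversation is visible to Alice if at least one of her contacts is in it.
    (A sketch of the rule, not the production implementation.)
    """
    dashdash_contacts = sorted(alice_contacts & online_users)
    dashdash_conversations = sorted(
        cid for cid, participants in conversations.items()
        if participants & alice_contacts)
    return dashdash_contacts, dashdash_conversations


contacts = {"bob", "carol"}
online = {"bob", "dave"}
convos = {"lunch": {"bob", "dave"}, "work": {"dave", "erin"}}
print(roster_groups(contacts, online, convos))  # (['bob'], ['lunch'])
```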
While this early version of Dashdash is useful for groups of friends and co-workers, there’s much more to build. When people are in face-to-face conversations, there’s information being exchanged other than just the words being spoken. Each participant is aware of the others’ intonation, facial expressions6, and body language as well as the context for the conversation. Data (and data exhaust) from other services can help the participants understand these shared contexts: Where is each participant? Who else is in the same physical space? What is their attention focused on? What other desktop/mobile applications are they actively using? What was their most recent tweet or code check-in? The more ambient context the participants share for a conversation, the more rich and natural that conversation can feel.
Communication technology will continue to improve, and these improvements will create recurring opportunities for new social applications. The instant messaging service described above is just the beginning of the vision for Dashdash: to make our face-to-face conversations indistinguishable from our conversations with people who are far away.
Supported chat clients include:
In general, Dashdash should work with any app that supports XMPP/Jabber.↩

In mobile messaging apps, the notion of status/presence becomes considerably more nuanced. Rather than the standard available, idle, away, and offline states that are supported by XMPP, mobile users are in some sense always available, so at some point I'll update Dashdash accordingly.↩
It was recently leaked that Facebook was working on browsable chat rooms similar to those in Dashdash. Social norms on Facebook, however, encourage users to oversaturate their networks over time, and as a result Facebook users will be less comfortable making their conversations visible to all of their friends. Alternatively, Facebook could make conversations visible to only subsets of their users’ friends, but if Facebook does this algorithmically then users won’t know who is nearby and won’t feel comfortable, and if Facebook relies on manual categorization then they give their users a large, tedious project. Also, historically, Facebook has not succeeded in adding new features to their bloated platform in order to compete with more focused startups.↩
Twitter users can create a somewhat similar social environment with @-replies between protected accounts, but the privacy model is still bizarrely different from that in the real world: if a user approves a follow request, then the new follower will be able to see that user’s @-replies for the present conversation as well as all past and future conversations. I plan to write an in-depth analysis of conversation on Twitter in an upcoming post.↩
Part of the reason the market for messaging apps is so crowded is precisely because the product design problems are unsolved. Fortunately for Dashdash, contact lists are portable and users are happy to re-add their friends somewhere new.↩
Messaging interfaces already let users communicate their facial expressions and emotions using icons and images. People have been using typographical marks to represent faces and emotions for a long time, but recently apps have been offering more detailed stickers. These stickers, perhaps pioneered by Japanese messaging app Line, portray animated characters expressing a variety of emotions, and users can send them to others to wordlessly communicate how they feel.↩
Facebook hopes to be the last great social network. They want to own the network of connections between people upon which all future applications will be built. Mark Zuckerberg has made this ambition clear:
Our strategy is very horizontal. We’re trying to build a social layer for everything. Basically we’re trying to make it so that every app everywhere can be social whether it’s on the web, or mobile, or other devices. So inherently our whole approach has to be a breadth-first approach rather than a depth-first one.
Facebook rose to dominance as an application running in a web browser on desktop computers, but recently consumer attention has shifted to mobile apps on iOS and Android. This shift is not unique, but rather simply the most recent disruption in a long history of evolving communication technologies. It’s useful to think of this stack of technologies as consisting of four distinct layers1:
As an application, nine-year-old Facebook has struggled to adapt to the shift from web browsers to mobile phones. It acquired photo-sharing app Instagram to strengthen its mobile presence, yet other competitive mobile apps have continued to grow exponentially: users of two-year-old Snapchat now send 200 million photos per day, and users of four-year-old WhatsApp send 10 billion messages per day2. Can Facebook successfully transition its application to mobile, and either defeat or pay off3 these barbarians at its gates? Must they keep up with every new application-layer innovation? Or is there another strategy?
Alternatively, Facebook can try to become the social layer that Zuckerberg described. If Facebook establishes itself as a new layer between the OS and application layers, then former competitors in the application layer would be reliant on Facebook for the social graph, and Facebook would be protected.
This strategy makes Facebook vulnerable in a variety of other ways, however. First, it is uncommon for any single company to have a monopoly spanning an entire layer. Microsoft Windows, for example, succeeded in monopolizing the desktop OS market, but innovation moved to the web browser. A second example is Apple's iOS, which dominated the mobile OS market until Google succeeded in building Android into a viable competitor; this would not have been possible without the help of mobile operators, device manufacturers, developers, and consumers who understood that it was in their interests to cooperate against Apple's dominance.
Facebook, similarly, faces the cooperative efforts of Twitter, Tumblr, Google, and others who are wary of giving Facebook too much control. Facebook wants all applications to rely on them for their social graph, but it is difficult for Facebook to prevent those companies from leveraging that graph to create a competing social layer. As a result, Facebook encourages smaller applications to use their platform, but restricts access for larger companies. By restricting access, however, Facebook also forces those companies to seek alternative social layers or build their own, thus strengthening competitors. Facebook’s social layer monopoly is further destabilized by the ease with which users can rely on multiple social graphs simultaneously, in contrast to how inconvenient it is for them to use multiple data networks, hardware devices, or operating systems.
Even if Facebook is able to weather the challenges against its monopoly from competitors in the application layer, it faces even more disruptive and inevitable changes in the underlying data network, end-user hardware, and operating system layers upon which this social layer is built. Facebook still rallies behind the cry, “this journey is 1% finished,” and this is true partially because the underlying communication technologies are constantly evolving. There is nothing unique about the ongoing shift in consumer attention from desktop to mobile, and this constant evolution ensures that neither Facebook nor any other single company will own this social layer indefinitely and be the last great social network. Examples of previous disruptive technologies include:
In each case the changes in underlying technology enabled new consumer behaviors and disrupted old monetization strategies, and it's difficult for established companies to anticipate these changes and execute accordingly. For example, the current mobile devices are certainly not the apex of communication technology. Regardless of whether Google can foster healthy social conventions around Glass, it's clear that we will have wearable communication devices, and that those devices will enable yet-to-be-imagined user experiences in both the social and application layers. There will be many future technologies after Glass at the network, hardware, and OS layers, and each technology will destabilize these higher layers in which Facebook has built its empire and create new opportunities in undeveloped territory.
This recurring rebirth of the technological landscape is related to Clay Christensen’s Innovator’s Dilemma: established companies are vulnerable to smaller companies with disruptive and low-margin technologies that, over time, become profitable and overtake those of the established companies. Should Facebook spend resources developing a new social layer (perhaps enabled by passive data collection from wearable devices like Glass), and risk being too early or building the wrong thing? Or should they wait until the market is established and the interaction paradigms proven, and risk being too late? The barbarians are independent, so each can try a different approach, but a sprawling empire must choose its strategies more carefully.
Furthermore, each product that Facebook builds makes it more difficult for them to add new products at either the social or application layers, both from the perspective of the organization (more teams with vested interests, more products to support/integrate) and the users (more confusing user experiences, more complicated relationships with the brand). Decommissioning old products is difficult for similar reasons, and Facebook can’t rely on network effects between users to protect its borders — Albert Wenger from Union Square Ventures has said that network effects “work very well on the way up, but they also work very well on the way down.”
These questions are not particular to Facebook and its products and strategies, but rather are fundamental to the market for communication technology. Facebook, mired by their secrets and moored by a billion users clicking on advertisements and Like buttons, will not be the last great social network5. Nor will Facebook be the second-to-last great social network — whatever company comes next will face all of the same challenges, and will eventually be replaced by a new social layer for the same reasons. As Zuckerberg said when asked if mobile phones were the future of Facebook, “They’re the future for now, but nothing is the future forever.”
The only question is when. Each new great social network will look different from the previous great social network, because they will be built on a different stack of technologies. Each new great social network will be great for us, the users, because they will bring us ever closer to people who are far away.
The paintings depict neither Facebook nor Ancient Rome (the last great empire?), but rather an imaginary city. Titled The Consummation of Empire, Destruction, Desolation, The Savage State, and The Arcadian or Pastoral State, respectively, they were painted by Thomas Cole from 1833-36 as part of his five-part series The Course of Empire.
The layer terminology was inspired by Steve Cheney’s differentiation between application and infrastructure layers in Google Voice and FaceTime – Why the Carriers Are Losing Their Voice, but there are also clear similarities to the seven-layer OSI model (which does not refer to a burrito). Note that the layers described above — data networks, end-user hardware, operating systems, and applications — may be merged or missing in some communication technologies (such as hand-written letters), or duplicated in others (rich web applications like Facebook treat the browser as an OS, even though there is also a host OS).↩
There are many messaging apps similar to WhatsApp, including: WeChat, Line, Kakao Talk, Kik, Hike, Path, and MessageMe.↩
Snapchat is currently valued at $860 million, and WhatsApp is perhaps worth $1.1 billion, so Facebook could probably just buy them both and be done with it… but they cannot afford to do that with every successful competitive app forever. Furthermore, just as Facebook declined early acquisition offers, these new companies may simply be unwilling to sell at a reasonable price.↩
Address book matching has been around for a while, but has recently become increasingly common. It was perhaps pioneered by Kik, and Path was fined by the FTC for automatically collecting this data. Josh Miller has written more recently about this new social graph portability here.↩
Facebook may still make a lot of money for its employees and investors, of course, and might never disappear completely.↩
Imagine that going to a specific web page is like purchasing a new car with lots of custom options. A customer’s order gets sent to a factory, and that order gets processed alongside many other orders, each of which must travel down an assembly line:
Assembling web pages isn't quite so complicated, but there are still many moving parts: PHP scripts, MySQL tables, password-protected admin interfaces, caching plugins, and so on. Previously, I was assembling my web pages with WordPress on DreamHost's shared hosting infrastructure1. While this was a reliable process, it was often slow, especially when you share the factory/server with other manufacturers/bloggers who may be receiving a lot of requests for cars/web pages.
Fortunately for bloggers, it’s much easier to make an identical copy of a web page than an identical copy of a car. Alternatives to publishing platforms like WordPress include static site generators that will assemble every possible web page in advance of any requests. Web pages usually consist of small files, so it’s easier and cheaper to store them all than it would be to store every possible car.
Whenever I make a change to my blog, I use a static site generator called Jekyll to assemble various files on my computer into another set of files which I upload to Amazon’s Simple Storage Service (S3). While my DreamHost site was assembled by a single server in a single data center, these files on S3 have automatic redundant storage and are designed to remain accessible even if there are hardware failures.
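The generate-and-upload step can be scripted in a handful of lines. Here is an illustrative Python sketch rather than my exact setup; it assumes Jekyll and boto3 are installed, AWS credentials are configured, and the bucket name is a placeholder.

```python
import mimetypes
import subprocess
from pathlib import Path

import boto3  # AWS SDK; credentials come from the usual environment/config

BUCKET = "example-blog-bucket"  # placeholder name

# 1. Let Jekyll assemble every page in advance into the _site directory.
subprocess.run(["jekyll", "build"], check=True)

# 2. Upload the generated files to S3, where CloudFront can cache and serve them.
s3 = boto3.client("s3")
site = Path("_site")
for path in site.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(site))
        content_type = mimetypes.guess_type(str(path))[0] or "application/octet-stream"
        s3.upload_file(str(path), BUCKET, key,
                       ExtraArgs={"ContentType": content_type})
        print("uploaded", key)
```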
When a visitor’s browser asks for a web page, the request goes to Amazon CloudFront, which keeps cached copies of the S3 files in different data centers all around the world, just as a manufacturer might keep finished cars in geographically distributed warehouses. CloudFront chooses the copy nearest to the visitor, and sends the web page back. There’s no work required because the web pages are pre-assembled, so the whole process usually happens in much less than a second.
While I was doing this migration, I took the opportunity to redesign the site itself. I had been using a slightly-customized version of a common WordPress theme, but wanted a cleaner layout that focused attention on the posts and projects that were most important. I based the new design on the Octopress Minimalist theme by Ryan Deussing, but made many changes to the metadata, layouts, fonts, and colors. I hope the new design makes reading and browsing easy, but feedback, and pull requests, are always appreciated!
I considered moving to a Virtual Private Server, but DreamHost has somewhat-regular data center issues that have arisen at inconvenient times before, so I wanted a better solution.↩
While people who work at startups tend to be eager to tell anyone who will listen about their products, large companies often forbid their employees from talking outside of the company about the specifics of their work1. While those companies can afford to conduct lots of formal market research, build robust prototypes, and use A/B testing frameworks, those things do not provide quick feedback. People at small, unproven startups get to talk about what they are doing with all sorts of people all the time.
Imagine the following conversation between a software engineer at a startup and a new acquaintance at an apartment party, who may or may not be the target audience for the product, who may or may not have domain expertise, and who may or may not be sober:
Compare it to a similar conversation between a software engineer at a large, secretive company and that same party guest:
Even if the startup employee already knows that real-time mobile local social video sharing is actually a terrible idea, the former conversation is substantially more useful. Companies that are open about their ideas get this sort of feedback constantly, for free, as part of their employees’ daily lives. Some of the conversations might be at parties as above, some might be with friends or family or former co-workers, while others might happen during the casual networking at tech meetups.
There are many examples of companies that might have benefited by being more open about their "stealth" consumer products. Would Twitter have suffered if the public had known months ago that they were renaming the four main tabs to be "Home", "Connect", "Discover", and "Me", or might they have chosen different words? Would Google+ have been less successful so far if its employees had been able to discuss Circles at apartment parties, or might they have learned that people don't really want to sort their friends?
Perhaps companies that keep secrets are wary of attention from the press, but this risk is minimized if a company is constantly floating out a variety of possibly-incompatible ideas. The tech press is able to make big stories out of big launches precisely because the companies themselves have hyped those launches up to be so big. Perhaps companies that keep secrets are afraid of imitation by competitors, but in reality those competitors are usually so entrenched in their own products and worldview that they are incapable of copying anything even if they wanted to.
It’s rare that any single comment could change a company’s direction, but in the aggregate these conversations help the company to iteratively refine its ideas at a pace faster than otherwise possible. External feedback is valuable because it’s free from internal politics and propaganda, and because it can be a source of fresh ideas from potential users. Secrets create press interest and secrets scare competitors, but secrets also slow you down and secrets do not engender good products.
Apple is a special case, but it’s worth noting that not everything they make is successful.↩
Dropbox1 is great for making transparent backups of whatever I’m working on, for syncing my Application Support folders, and also for giving me access to unfinished blog posts, shopping lists, email drafts, and other things while on the go. The Dropbox iOS app lets you view but not edit files, so I tried out PlainText by Hogs Bay Software. I’ve been using their Mac OS X app, WriteRoom, for years, and decided it was worth the $4.99 to get rid of the ads and upgrade to the WriteRoom iOS app.
My Dropbox folder looks like this:
I wanted mobile access to everything except the other, sync, and work-src directories. I didn’t want WriteRoom to have to do the extra work of syncing the tens of thousands of Application Support files, and I didn’t want them taking up space on my iPhone or iPad. I reached out to Hogs Bay to see if there was a way to perform a “Sync All Now” on only some directories, and Grey Burkart got back to me with some helpful suggestions.
Since the Dropbox API only gives iOS client applications access to a single directory path, he suggested I make a new directory for WriteRoom to sync with (you specify that directory when you set up the app for the first time). Mine is called ~/Dropbox/sync/WriteRoom/, and GoBoLinux users might call it ~/Dropbox/Depot/, but you can use whatever you like. I then made symbolic links from the directories containing the text files I wanted to be able to edit into this new sync directory. When I’m using the WriteRoom iOS app it can only see the directories I’ve told it about, while at my other computers I see the full tree. Note that some directories, such as notepad docs, are located elsewhere in Dropbox, but I can still make the appropriate symlinks. It’s a bit of a hack to set up, but it works perfectly.
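For anyone who would rather script that setup than do it by hand, here is a minimal Python sketch. The ~/Dropbox/sync/WriteRoom/ path is the one described above, but the source directory names are made-up examples and should be adjusted to match your own folder.

```python
# Minimal sketch: symlink selected directories into the single folder that
# the WriteRoom/PlainText iOS apps sync via the Dropbox API.
# The source directory names below are illustrative examples only.
import os

HOME = os.path.expanduser("~")
SYNC_DIR = os.path.join(HOME, "Dropbox", "sync", "WriteRoom")

SOURCES = [
    os.path.join(HOME, "Dropbox", "blog-drafts"),   # hypothetical
    os.path.join(HOME, "Dropbox", "lists"),         # hypothetical
    os.path.join(HOME, "Dropbox", "notepad docs"),  # lives elsewhere in Dropbox; symlinks still work
]

os.makedirs(SYNC_DIR, exist_ok=True)

for src in SOURCES:
    dest = os.path.join(SYNC_DIR, os.path.basename(src))
    if not os.path.lexists(dest):   # skip links that already exist
        os.symlink(src, dest)
        print("linked", src, "->", dest)
```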
Thanks, Grey! And others should feel free to comment or contact me if you have any questions or tips.
http://db.tt/wwbWOwB is a referral link! Use it when you sign up and we both get 250MB more free space :)↩
Wanderli.st, the ITP thesis project I presented last May, grew out of similar ideas about social contexts. It was an application that would let us socialize within online contexts that are like our offline contexts, and a tool for managing and synchronizing relationships across social websites. I’m no longer working on it for a variety of reasons, but the most important of them is this: Regardless of the interfaces and features of Lists on Facebook or Circles on Google+, I don’t think people actually want to sort their contacts.
Since we are so good at deciding what is appropriate to say to a given group, it seems backwards for our applications to make us define those groups before we even know what we’re going to say. In real life, the thing we want to share and the group with whom we want to share can influence each other, so our software should work the same way. There are several issues with manual sorting of contacts:
Others, such as Foursquare’s Harry Heymann, have expressed similar sentiments:
It doesn’t matter how slick your UI is. No one wants to manually group their friends into groups.
Even Mark Zuckerberg said today that many users don’t manually build their social graphs:
A lot of our users just accept a lot of friend requests and don’t do any of the work of wiring up their network themselves.
Facebook offers a comparable but relatively unused feature, Lists, that lets users organize their friends, but Circles has a superior user interface that makes the categorization work much more enjoyable. Former Facebook employee Yishan Wong, however, makes a slightly different critique of Google+ and Circles:
Besides such features being unwieldy to operate, one’s “friend circles” tend to be fluid around the edges and highly context-dependent, and real humans rely often on the judgment of the listener to realize when something that is said publicly is any of their business, or if they should exercise discretion in knowing whether to get involved or just “butt out.”
Google’s ideas around Circles can be partially attributed to former employee Paul Adams, who gave a compelling presentation about social contexts last year. He left Google for Facebook several months later, but wrote a new blog post that asks “two big questions”:
- Our offline relationships are very complex. Should we try and replicate the attributes and structure of those relationships online, or will online communication need to be different?
- If we do try and replicate the attributes of our relationships, will people take the time and effort to build and curate relationships online, or will they fall back to offline interactions to deal with the nuances?
I now think the answer to the first question is “No” and the answer to the second question is “Neither.” Offline relationships are too complex to be modeled online, but I also don’t think those models are important to online social interaction. It’s worth noting how simple my social interactions feel offline — I can see all of the people within earshot, so I know who can hear me and who might overhear, and this allows me to adjust the things I say accordingly. Furthermore, creating these contexts is straightforward — if I want to talk about something with a specific group of people, I’ll organize a time when we can all talk face-to-face. Offline I only need to keep track of my relationships with individuals, and I can adjust my group behavior based on the individuals present.
With email, my online conversations can work in a similar way. If I have something to say, I’ll think of precisely the people I want to say it to, and compose/address my message accordingly. Each person who receives it can decide if they feel comfortable responding to the initial group, or to some other group. Furthermore, email threads do not span the entirety of a group’s communication, so it’s easy to add or remove someone for a different conversation about a different topic, just as we can do in face-to-face conversations. With email, the group does not persist longer than the conversation. Facebook’s recently revised Messages and Groups features address some of these issues of social context, but those groups are still uncomfortably permanent, and the single-threaded conversation history feels unnatural.
Email, notably, has no explicit representation of a relationship at all. Anyone can email anyone else, yet we’ve reached a functional equilibrium through a combination of social conventions, email address secrecy, and filters. Despite this lack of explicit data, email has rich implicit data about our relationships, and in 2009 Google launched a new feature in Gmail Labs with little fanfare: “Suggest more recipients”. Wired.com wrote about Google+ shortly after launch, and hinted at the future of this data:
It’s conceivable that Google might indeed provide plenty of nonbinding suggestions for who you might want in your Circles. “We’ve got this whole system already in place that hasn’t been used that much where we keep track of every time you e-mail someone or chat to them or things like that,” says Smarr. “Then we compute affinity scores. So we’re able to do suggestions not only about who you should add to a circle, but even what circles you could create out of whole cloth.”
Rather than use this data to make static Circles that will inevitably become irrelevant for future conversation, Google should let the list of individuals in each previous conversation serve as a suggestion for future conversations. If Gmail is able to make guesses about who should be included in a conversation based on who else has already been included, it could also leverage the content that I intend to share to make dynamic suggestions. It can help me remember who I might want to carbon-copy on a message before I send it, and it can do this without overburdening me with the overgeneralized Circles of my past1. Once the spatial boundaries of that conversation have been defined, the discussion can continue until no one else has anything left to say or until a subgroup wants to split off and have a side conversation, much like a social interaction in real life. The fundamental design of email has shown more promise than the categorization-based alternatives2.
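To make the idea concrete, here is a toy sketch of co-occurrence-based recipient suggestion. It is not Gmail’s actual affinity-score computation, just an illustration of how the people already on a draft can suggest who else belongs in the conversation.

```python
# Toy sketch (not Gmail's real algorithm): rank contacts by how often they
# appeared in past conversations alongside the people already addressed
# on the current draft.
from collections import Counter

def suggest_recipients(past_threads, current_recipients, limit=3):
    """past_threads: list of sets of email addresses, one set per old thread."""
    current = set(current_recipients)
    scores = Counter()
    for thread in past_threads:
        overlap = len(current & thread)
        if overlap:
            for person in thread - current:
                scores[person] += overlap  # more shared recipients -> stronger suggestion
    return [person for person, _ in scores.most_common(limit)]

threads = [
    {"alice@example.com", "bob@example.com", "carol@example.com"},
    {"alice@example.com", "bob@example.com", "dave@example.com"},
    {"carol@example.com", "erin@example.com"},
]
print(suggest_recipients(threads, ["alice@example.com", "bob@example.com"]))
# -> ['carol@example.com', 'dave@example.com']: likely forgotten CCs, not a static Circle
```

The point is that the suggestion is computed per message, from past conversations, rather than read out of a static, hand-curated grouping.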
We want some of the things we say on the Internet to be public and accessible to anyone who is interested3. For everything else, explicit persistent groupings of the people I know are tedious to maintain and unnatural to use. Each discussion is different, so we need discussion tools that support robust privacy control on a per-message basis.
There are many ways to improve this recipient suggestion interface, and profile photo thumbnails would be a good place to start. It could also suggest some Circle-like groups, such as my family, and even let me upload my own photo to make those groups easier to identify. It should not, however, present me with a list of all of my groups, because then that is something to manage — I only need to see the groups when I am addressing a message.↩
It is important not to let our thinking get bogged down by the current limitations of our inbox interfaces. What if, when you searched for a person in Gmail, you got a grid of attached photos in addition to a list of conversations? What if Gmail were as “real time” as Gchat or Facebook? What if Gmail didn’t make you feel like you needed to read every message? What if Gmail searches were, dare I say it, fast? Some of these changes would break how we currently use our inboxes, so perhaps a separate tool modeled after email would be better, but that’s a detail. Other changes, such as streamlined Rapportive-style contact information for the people in a conversation, are already beginning to be built in.↩
I have some ideas on this, but that’s a separate blog post. See Pinterest and Subjot in the interim :)↩
If a piece has high production values — whether it’s a video shot with proper lighting or a website designed with an effective grid and color scheme — people will pay attention. They may mock you for a mismatch between the quality of your content and the quality of your production — would anyone care about “Friday” if it had been shot and edited with an iPhone 4? — but they will pay attention. In our information-overloaded, failed-filter future, attention is the most valuable commodity, and hacking the attention economy with a polished piece of content is a useful and lucrative thing to be able to do.
We humans also molt as we mature, but it is our social identity, rather than a specific body part, that we outgrow. As we transition from childhood into our teenage years and through adulthood our preferences around the foods we like, the clothes we wear, the skills we practice, and the company we keep can all change dramatically. These changes are often relatively abrupt, like the physiological changes of many other species. Afterward we still enjoy reminiscing about what we used to do and we appreciate how our present self grew out of our past preferences, both privately and with others who knew us then. I fondly recall the Lego castles and spaceships that filled my childhood, and while I now build with other things and only really talk about those memories with my family, Lego still contributed significantly to my present identity.
The creators of social web services have yet to take human molting behaviors into account when designing their digital spaces. As a result, the changes I am able to make to my online profiles are either destructive or cumulative. On Facebook I can untag myself from old photos but then have no ability to go back and reminisce either alone or with the others who were there then. Alternatively, I can let the photos accumulate over the years, but then my entire social history is accessible to new acquaintances, and it makes me uncomfortable to have so much information available so soon in new relationships.
Facebook and other services do offer some tools for managing this problem. I can organize my friends into “Lists” and make different pieces of content available to different groups of friends, but I do not know of many people actually doing this in practice. It would be mentally exhausting to consider each of hundreds of people I know on Facebook and sort them into categories based on the time and place of our friendship.
People have, in contrast, shown a high degree of tolerance for friend request emails on new web services: I’ve both sent and received hundreds of such requests on many post-Facebook services such as Twitter, Foursquare, and Quora. These new services offer a fresh start, a blank slate, a new skin, which I can use to define my present self for the people I currently care about. The old memories remain intact each time I undergo this digital molting process — I can always log back in to past accounts and find the people and content that were important to me at the time — but this personal collection of outgrown identities need not be central to my present self.
This does not necessarily mean that social web services are doomed to be left behind as repeated generations of users outgrow their old selves, and I can imagine features that would allow users to molt as they progressed through different stages of life. Facebook, for example, could have explicitly mirrored my real-life graduation from college on the site. They could have encapsulated all of my old data in an interface that was separate from my current profile and easily accessible to me and my friends from college, but not to all of the people I would meet in the future. They could have given me a new profile to fill out slowly over time, and perhaps it could focus more on my profession and less on my education. And they could have let me decide which of my old friends I wanted to re-add as friends for the next stage of life, and done it in such a way that there would be no social pressure to re-add all of the friends from my freshman dorm with whom I hadn’t talked in two years, since our college friendships would still be intact. I, as a user, would have gladly spent a few hours at my computer flipping through old memories and sketching the outlines for a new self, and I could see this feature as a coming-of-age ritual that people approach eagerly with pride.
People, and especially those who have not yet reached adulthood, will continue to molt their social identities as they grow and mature. Until services offer features that support digital molting, people will continue to shift their focus away from the web services they used to love.
In the beginning of his recent book, Clay Shirky compares the early-18th-century London Gin Craze to the relatively recent American obsession with television. Both are things into which people sank an incredible amount of their free time, but now the Internet is providing us with new social tools that allow us to harness this time:
One thing that makes the current age remarkable is that we can now treat free time as a general social asset that can be harnessed for large, communally created projects, rather than a set of individual minutes to be whiled away one person at a time.
If we, as a society, spend an amount of time watching television every year equivalent to how long it would take to write 2,000 Wikipedias, what else could we do with that time?
I was chatting with Nina Khosla about Facebook, and she contrasted it to Google, Wikipedia, Flickr, and other “aspirational tools […] that suggest that we can do more, see more.” Facebook isn’t like this — in some ways we can use it to keep up with a larger number of people than we otherwise could, but perhaps 80% of a person’s time on the site is spent engaging with perhaps 20% of their friends.
So what else are we doing with that cognitive surplus? We’re spending it on Facebook, mindlessly reading and liking and commenting on our friends’ posts, just as we used to spend it mindlessly watching television before that, and just as we used to spend it mindlessly drinking gin before that.
Cheers!
This is not necessarily an unhappy conclusion to reach though — I think there’s more value (however one wants to measure it) to time spent on Facebook than there is to time spent watching television, just as I think time spent watching television is of more value than time spent drunk. But I think it’s foolish to expect thousands of Wikipedias to emerge any second from our ‘series of tubes’, and it’s going to take concentrated design effort to create rewarding and enjoyable yet productive places where we can spend our time and attention. Sharing isn’t an end in and of itself. Facebook is compelling, but we can do better.
As technology allows us to measure things that were previously unknowable, we will design new games that improve our ability to live in this increasingly complex world.
Jesse Schell gave an excellent talk at the 2010 DICE Summit:
(I’ll quote/paraphrase the applicable parts of both talks, so feel free to keep reading and watch them later.)
Schell observes that many of the unexpectedly wildly popular games from the past couple of years (such as Farmville and Guitar Hero) “are all busting through to reality.” He predicts that increasingly inexpensive data sensors will become ubiquitous, and will record where we go as well as the things we buy, read, eat, drink and talk about. This data will enable corporations and governments to reward our behaviors with game-like ‘points’, when really those points are just a way to trick us into paying more attention to advertisements, and we will consent to this because we will be able to redeem those points for discounts and tax incentives. Schell concludes:
The sensors that we’re going to have on us and all around us and everywhere are going to be tracking and watching what we’re doing forever, […] and you get to thinking about how, wow, is it possible maybe that, since all this stuff is being watched and measured and judged, that maybe, I should change my behavior a little bit and be a little better than I would have been? And so it could be that these systems are just all crass commercialization and it’s terrible, but it’s possible that they’ll inspire us to be better people if the game systems are designed right.
Jane McGonigal recently gave a compelling talk at TED that approaches similar ideas about how gaming can save the world:
McGonigal observes that games, and especially immersive massively multiplayer online role-playing games such as World of Warcraft, always offer quests that are perfectly tailored so as to be both challenging and possible. She offers four useful descriptive terms for these activities:
These are part of a larger argument that gamers can save the world if they play games that are designed to have positive effects outside of the magic circles of the games. McGonigal cites an example from Herodotus in which the ancient Lydians survived a famine by distracting themselves from hunger with dice games, and she makes an argument that we can similarly use contemporary games to solve contemporary problems. That example breaks down, however, because we don’t need games that distract us from our problems like the Lydians did — we need games that enable/encourage us to face our problems and overcome them.
She goes on to describe an example of a game she worked on at the Institute For The Future called World Without Oil, which was “an online game in which you tried to survive an oil shortage.” The game provides online content to the players that presents a fictional oil crisis as real, and the game is intended to get people thinking about that problem and how they might solve it. But just as the first example is missing the direct applicability of the game to the real world, this one is missing the application of the data from the real world to the game, and both directions of influence are important.
When the ubiquitous sensors described by Schell are combined with McGonigal’s vision of games designed explicitly to save the world, the content surrounding the games (that presents real-world crises as ‘quests’) will no longer be fictional, and can instead be based on real-world data. The games will provide frameworks for understanding and leveraging all of this new data about the world. They will motivate us to act for the greater good through both monetary rewards such as tax incentives and social rewards that play to our instinctive desire for the esteem of our peers. Some games might make the model of the real world immersive, so that we as players can ignore distractions and concentrate; others might be similar to the digital tree that grows inside the dashboard of the Ford Fusion Hybrid, and will provide subtle yet constant feedback for our behavior.
We live in a world in which ‘all models are wrong but some models are useful,’ and as that world becomes increasingly interconnected and complex, the games will provide us with the data collection tools and data processing shortcuts that enable us to act intelligently. In this future we will design game-like incentives that teach and encourage us to make wise long-term decisions, so that we can outrun that tiger and save this planet. Which is important, because we only get one shot at each.
—
Note: the remainder of this post contains spoilers of the novel Ender’s Game by Orson Scott Card.
—
In the book, Card’s characters play war games that they do not know are actually quite real. The protagonist Ender, who is just a child, excels at the games because he thinks they are games. He uses ruthless tactics to win, unaware that he is actually committing those atrocities in the real world. Ender, unburdened by the extreme pressure resulting from real-world consequences, believes that he is merely playing a game and is thus able to save humanity from an alien threat.
Of course the games of our future need not be so ethically questionable, but the point — that games can simplify the world to enhance our focus and remove our hesitation if we are less sure that they are actually real — is still important.
It’s worth reading through the entire thing, but there were a few groups of slides I found particularly clear/insightful/interesting (you can jump to a particular slide from the bottom toolbar) —
There are lots of other useful ideas in there, but there’s one in particular on which I want to expand. Adams discusses the categorization of our relationships into strong ties and weak ties, saying that, “Strong ties are the people you care about most. Your best friends. Your family. People often refer to strong ties as their ‘circle of trust.’ […] Weak ties are people you know, but don’t care much about. Your friends’ friends. Some people you met recently. Typically, we communicate with weak ties infrequently.” Adams then goes on to define a new type of relationship online, the temporary tie, for “people that you have no recognized relationship with, but that you temporarily interact with,” such as strangers in public online social spaces.
He also discusses the cognitive limitations of the human brain that make us unable to stay up-to-date with more than 150 weak ties at a time (see Dunbar’s number). Given that we now have social tools for keeping track of many more people than that — Facebook ‘friendship’ seems to be for “everyone I know and don’t actively dislike”1 — I wanted one additional term to help me think about the portion of my 859 Facebook friends with whom I wasn’t keeping up at all and had some sort of tie that was weaker than a weak tie.
Latent ties seems to work nicely here, for those people with whom I’m not at all in touch but also have not forgotten, and who could potentially become a bigger part of my life and replace one of my weak ties. This is a new type of tie — it used to be possible to have no way to contact someone I once knew but hadn’t heard from in years, and these new tools will prevent this from ever again being the case. I think it’s especially important to design for these latent relationships on Facebook/other websites where there are social stigmas around friending and unfriending that make it difficult for the user to keep her ‘friends list’ as an accurate representation of only her current strong and/or weak ties.
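To make the vocabulary concrete, here is a rough sketch that buckets contacts into strong, weak, and latent ties; the thresholds are arbitrary choices of mine, not anything from Adams or Dunbar.

```python
# Rough illustration of the strong / weak / latent vocabulary.
# The thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    interactions_last_year: int
    days_since_last_contact: int

def classify_tie(c: Contact) -> str:
    if c.interactions_last_year >= 20 and c.days_since_last_contact <= 30:
        return "strong"   # the inner circle of trust
    if c.interactions_last_year >= 1:
        return "weak"     # people I keep loosely up to date with
    return "latent"       # not in touch, not forgotten, could resurface later

friends = [
    Contact("college roommate", 45, 3),
    Contact("friend of a friend", 2, 200),
    Contact("freshman-dorm neighbor", 0, 900),
]
for f in friends:
    print(f.name, "->", classify_tie(f))
```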
Who was it that first said this? Please let me know if you have a source for that quote.↩
If you’re interested in self-quantification in general, The NYT Magazine recently ran a good article titled The Data-Driven Life. About a month ago I received my Fitbit, one of the devices mentioned in the article, and you can see the public data I have collected so far here. I’ve been using it primarily to get a sense of how much I actually walk and how little I actually sleep — two things about which it’s slightly too tedious to make daily notes but which still might be interesting to examine in the aggregate.
He argues that there are several kinds of “code” that can shape human behavior. So, for any given behavior (e.g., speeding in a residential neighborhood) there are several codes at work trying to prevent you from engaging in it. There’s a moral/social/ethical code (e.g., you don’t want the disapproval of your neighbors who have small children), there’s legal code (e.g., if you speed you’ll get a ticket, maybe lose your license, go to jail), and there’s physical/reality code (e.g., a speed bump that physically prevents you from going too fast). The premise of the book is that there’s increasingly a new kind of code, computer code, that is stronger than laws and social norms, almost on par with reality (e.g., if your car has software that prevents it from going faster than a certain speed, perhaps tied to GPS to track what street you’re on and what the speed limit is).
Taking this even further, distributed applications operating on a blockchain platform could perhaps operate outside of the reach of the law. Wired explains how this might work:
Corporations and economic transactions are fundamentally driven by contracts. By providing the foundation to validate these contracts, Ethereum allows for the deployment of so-called distributed autonomous companies (DACs) or organizations (DAOs). These systems operate on the blockchain with an autonomy of their own. They earn money by charging users for the services they provide (in the example applications cited above, those services are DNS resolution and social networking) so that they can pay others for the resources they need (such as the processing power and bandwidth necessary to run the network).
As the name suggests, DAOs are autonomous entities that subsist independently from any legal or moral entity. After they have been created and deployed onto the internet, they no longer need (nor heed) their creators. Yes, they need to interact with their users, but they are not dependent on any one of them. Smart contracts are automatically enforced by the applications running over the blockchain.
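As a very rough illustration of that economic loop, here is a toy model in plain Python; it is not smart-contract code, and the fee and cost figures are invented, but it shows a service that charges its users and pays for its own resources out of the proceeds.

```python
# Toy model of the DAO economic loop described above: charge users for a
# service, pay providers for resources. Not actual smart-contract code.
class ToyDAO:
    SERVICE_FEE = 5     # what a user pays per request (arbitrary units)
    RESOURCE_COST = 3   # what the DAO pays providers per request

    def __init__(self):
        self.balance = 0

    def handle_request(self, user, provider):
        self.balance += self.SERVICE_FEE       # user pays the organization
        if self.balance < self.RESOURCE_COST:
            raise RuntimeError("insolvent: cannot buy resources")
        self.balance -= self.RESOURCE_COST     # organization pays for bandwidth/CPU
        return f"served {user} via {provider}, treasury={self.balance}"

dao = ToyDAO()
print(dao.handle_request("alice", "node-7"))
print(dao.handle_request("bob", "node-3"))
```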
The arguments range widely in their responsible citation of academic research, reliance on anecdotal evidence, and general quality. While I agree more with Shirky and Bilton than with Carr and Richtel, I don’t see that any of them have examined the issue from the perspective that I find most interesting: It should not be a question of whether or not the Internet is making us better or worse at thinking. Instead, it should be a question of whether we are better at thinking now that we have the Internet than we were without it.
We, as humans, are unique in that we modify the things in our environment into tools that extend our natural abilities. Consider, for example, our ability to run. Running is useful both for catching trains when we’re late and for escaping sabre-toothed tigers when we don’t want to be eaten. Thousands of years ago, when we were more likely to be worried about the latter situation, we had thick calluses on our feet to protect them from the uneven and unpredictable surfaces on which we might have to run. Now, we have shoes that we have engineered to serve that same purpose and to provide additional support, making us even faster than before. If we were to be caught by a tiger without our shoes then we would not be able to run as well as our ancestors could have, since the calluses and muscles in our feet have adapted themselves to the shoes and are no longer optimized for running barefoot. But that is practically never the case — we always have our shoes, so they always augment our natural abilities, and we’re always better runners than we could otherwise be.
The Internet does for our thinking what shoes do for our running. My thought processes have developed in a world in which I am always connected to the vast resources of the Internet for both seeking information and communicating with others. The Internet exposes us constantly to additional pieces of information (in-line hyperlinks, emails, tweets, etc.), and while Carr sees these things as distractions that make it difficult to focus on the task at hand, I experience them as sources to be synthesized into the broader thought to which I am devoting my energy. The multiplicity of inputs enhances the output.
Of course not all of these ‘distractions’ are relevant to the task at hand, and we must make intelligent decisions about what it is to which we are connected at any given time. It would be silly to try to read a book while at a noisy bar with friends, and it would be foolish to blame the book if the reader found it difficult to focus in that physical environment. Similarly, it would be silly to try to read a PDF at my computer while receiving constant notifications of new tweets, and it would be foolish to blame the Internet if the reader found it difficult to focus in that digital environment. When Carr cites the study in which students using Internet-connected laptops during a lecture retained less information than those who did not, he should be blaming the students for not paying attention, not the Internet for making distractions available.
There are many other specific statements made in the articles that I’d like to discuss, but I don’t have time to go through them point by point. For now, I’d like to draw attention to the illustration by Charis Tsevis in the last article linked above, in which the connected devices are plugged directly into the Internet-augmented thinker.
As we become increasingly connected to the Internet through an ever widening array of devices, our ways of thinking will adjust further to take advantage of the increased access to the Internet’s vast resources. Those that feel that the Internet is making them dumber should re-examine the ways in which they are using it. Tools must be used properly in order to be effective — running shoes work best when the laces are tied — and I feel that I’ve found ways to use the Internet that make me smarter than it was possible for me to be before.
I’ve been thinking a lot about the best way to continue this work now that I am free from academia, and about how Wanderli.st fits with other proposals such as Diaspora (which has gotten incredible support). I’ll continue to publish updates here, and let me know if you have any ideas you’d like to discuss!
Note: I had originally wanted to synchronize the PDF of the slides with the video, but I couldn’t find a good tool to help me with this — Omnisio has disabled the ability to create new presentations since they were acquired by Google, and the Zentation player was simply too ugly (despite their much prettier main website). Let me know if you can recommend something!
A PDF of the one-page handout I passed around is here. I’ve included the notes that I followed roughly during the presentation below the fold, since Google makes it hard to find them otherwise. The presentation can be conveniently accessed at http://bit.ly/wanderlist-20100327.
Slide Numbers:
Hi!
My name is Steven Lehrburger.
I’m in my last semester at ITP and I code part time at bit.ly.
For my thesis I’m building an alpha version of Wanderli.st,
So that people can wander the Internet, and bring their friends.
Wanderli.st will enable us to socialize within online contexts that are like the offline contexts we are so used to.
It will be an online tool for managing and synchronizing relationships across social web sites.
This xkcd map of online communities suggests a parallel between offline and online social spaces.
Physical space is an essential part of our social interactions — only the people in this room can hear me now, and I’m able (and expected!) to act differently in different social contexts — say, if I’m with my family around the dinner table or at a bar with my friends.
(comic from http://xkcd.com/256/)
I was looking at this map and thinking about how, on the Internet, we have different websites instead of different spaces.
I have accounts on Facebook and Twitter and Foursquare and Flickr and GitHub and each of them could be for sharing a different sort of thing with a different group of people, just like I get to do with physical space in the real world.
But instead, I have only the vaguest of ideas about what my Internet-wide social graph looks like, across web sites.
I have hundreds more “friends” on Facebook than I actually do in real life, so that graph is hopelessly over-saturated.
On the other hand, there are many websites where I’ve only bothered to add a handful of friends.
My real-world social contexts don’t currently map very well to my online social contexts, and it’s too much work to fix.
I don’t know who I’m friends with on which sites, and I don’t even know who has accounts on those sites.
I don’t know who can see which of the things I share, and I certainly don’t have a single place I can go to understand and manage my tangled social graph.
I’m building Wanderli.st to solve this problem.
(photo from http://www.flickr.com/photos/insunlight/3946559430/)
So how is this going to work?
On one side we have various ways of accessing a user’s address book, and Google Contacts is one with a nice API.
On the other side we have the aforementioned social web sites that also have APIs.
Wanderli.st will act as a layer between the two, using the APIs on both sides to read in information about a user’s social data.
Google essentially has all the email addresses I’ve ever used, and these web sites have APIs that let you search for a user based on email address.
(And this is essentially how their address book import tools work, but those don’t scale when you have to make a decision about hundreds of people for each additional website.)
Wanderli.st could then show me visualizations of the people I know, which sites they use, and where I am friends with them. Which will be pretty cool.
But there’s more!
What if I could organize my contacts into groups, based on the real-world contexts in which I actually know them?
These are some of the ones I might use, I’d have others as well, and each person could have different contexts.
And many of these web sites have *write* APIs for their social data in addition to read APIs.
So Wanderli.st could then modify relationships on these other web sites, based on these custom groups, adding and removing other people as friends accordingly.
For example, my classmates at ITP and my coworkers at bit.ly can be my friends on Foursquare and know where I am, but my ex-girlfriends, not so much.
To summarize, this is cool because users get to understand their social lives in a way they couldn’t before.
Users can maintain a single list of their friends and contacts
And if they want to join a new site, they can just select the appropriate groups, and this lowers the activation cost of trying new sites.
This is a better way for users to manage their privacy. Rather than deal with confusing settings within an account, users can simply restrict content to be viewable only by their friends, and manage those connections accordingly.
And everything can sync! If you meet someone new it’s easy to add them to the places you want and not anywhere else.
When people can move from one web site to another and take their friends with them, they can go to the places they want to be, and have a better social experience on the Internet.
So that’s Wanderli.st.
Moving forward, I’m looking primarily for a software developer to build this with me, and for a business person to help me run this as a company.
I’d love to hear your feedback, and thank you for listening!
I propose to design and build Wanderli.st, a new tool that will enable people to manage their contacts across social web services. Wanderli.st will be a web-based contact management application that synchronizes a user’s friend lists on both new and familiar web sites. It will serve as a layer between currently-unconnected applications on the social web, linking existing online contact management tools (such as Google Contacts) with the myriad sites on which people share content (such as Twitter, Vimeo, Foursquare, and GitHub).
Wanderli.st will provide users with an improved interface for organizing their existing contacts (which can number in the hundreds or thousands) into a set of manageable, custom groups. Google Contacts currently provides only a rudimentary user interface for those wishing to organize their contacts in this way, so I will create improved tools to make this initial set-up step as fast and easy as possible.
Once a user has organized her contacts into groups, the user will then authenticate her Wanderli.st account with those third-party web services on which she wants to manage her contacts. The user will make selections from her custom groups and assign them to the different services using basic set operations; Foursquare, for example, might be assigned everyone who is in either the ‘school’ or ‘family’ groups except those people who are also in either the ‘coworkers’ or ‘ex-boyfriend’ groups. Wanderli.st will then search for user accounts on each of those web services using the names and/or email addresses of only that set of contacts that they’ve been assigned, and it will then automatically send friend requests to (or, on sites with asymmetric social networks, simply follow) those users.
Later, if the user makes a change to one of her groups, either by adding a new person she recently met or moving someone from one group to another, Wanderli.st will automatically synchronize that change on each service to which that group has been assigned. In this way Wanderli.st will both lower the barrier to entry that a user faces when trying out a new web service, and will decrease the thought and effort required to keep one’s social graph up-to-date across services.
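The set-operations idea is easy to sketch. The group names and the Foursquare example below come from the proposal above, but the code itself, including the sync helper, is hypothetical and not a real Wanderli.st API.

```python
# Hypothetical sketch of Wanderli.st's group assignment via set operations.
# The email addresses are placeholders.
groups = {
    "school":       {"ann@example.com", "ben@example.com", "cam@example.com"},
    "family":       {"dad@example.com", "sis@example.com"},
    "coworkers":    {"ben@example.com"},
    "ex-boyfriend": {"cam@example.com"},
}

# "everyone in either 'school' or 'family', except anyone who is also in
# either 'coworkers' or 'ex-boyfriend'"
assignments = {
    "foursquare": (groups["school"] | groups["family"])
                  - (groups["coworkers"] | groups["ex-boyfriend"]),
}

def sync(desired, current):
    """Diff the desired contact set against the current one on a service."""
    return {"add": desired - current, "remove": current - desired}

current_foursquare_friends = {"ann@example.com", "ben@example.com"}
print(sync(assignments["foursquare"], current_foursquare_friends))
# adds dad and sis, removes ben
```

A real implementation would then translate the "add" and "remove" sets into friend requests, follows, or unfollows through each third-party service’s own API.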
By making it easy for the user to understand and manage her contacts, Wanderli.st will let her share content that is very private, or simply not “for” everyone, with comfort and certainty about which of the people she knows can see each piece. It will also benefit the third-party web services themselves, because their users will share content more actively with a more complete and easily managed list of friends.
I plan to write Wanderli.st in Scala using the Lift web framework, and I’ll host it on the Stax Networks elastic application platform.
Note: I acknowledge that Facebook offers some of this functionality with Facebook Connect, and that in the future it might allow users to leverage Facebook’s Friend Lists to selectively export their social network to other sites; Wanderli.st, however, will differ in several critical ways. First, third-party services will only be required to provide a read/write API for users to add and remove contacts, and they will not need to write custom code as they would for Facebook Connect. Second, Wanderli.st will actually duplicate (and then synchronize) the user’s social graph on the third-party service, and this is to the advantage of those services because they will then own their social data, rather than rely on Facebook for continued access to the social graph. Finally, because Wanderli.st will only be a collection of spokes (social connections) without a hub (personal profile data, photos, and other content), users can feel confident about connecting Wanderli.st to third party sites, as there will be no private data that users will risk exposing.
What’s in a name? that which we call a rose
By any other name would smell as sweet;
Wanderli.st is the name on which I finally settled for my social web contact management tool. It took me over five months to come up with something I liked, and it might have been longer had I not used it as my project for the 1-in-1 event during ITP’s 30th Anniversary Celebration, thus imposing a deadline on myself.
I want people to be able to wander freely between the different social web services on the Internet, I want users to leverage custom dynamic lists of their friends when creating relationships on new services, and I want to make existing social relationships more portable and manageable than they are now.
I needed a name that was short, memorable, descriptive, somewhat clever, and not already commonly used. I initially wanted a .com, but relaxed that requirement in favor of choosing a name that was reasonably Good, since pretty much every .com imaginable is being squatted. I think I should be pretty well-off in terms of trademarks and SEO, and perhaps I’ll have the funding (haha) to buy the proper domain around the same time as I’m ready to expand my userbase beyond those familiar with non-standard TLD’s.
The temporary name I used back in May was Constellate, and other ones that I’ve especially liked from the process were Netropolis, Tag, Telegraph, SocialLava, Cloudship, and Relationshift. I had columns of words that had names like ‘social words’, ‘movement words’, ‘group words’, ‘gathering places’, ‘infrastructure words’, ‘vehicle words’, and ‘elements’. It was not an easy task.
THANK YOU SO MUCH TO EVERYONE WHO HELPED! It looks from the Google Spreadsheet revision history like Wanderli.st first appeared on September 6th — if anyone remembers suggesting it to me around that time, let me know and I’ll give you credit here :)
(The title of this post comes from the global find-replace syntax in the text editor Vi.)
Nothing can go faster than the speed of light, and this restriction applies to influences (such as the force of gravity) between massive elements (planets, particles, people) in the universe. It is these forces that determine the movement of objects, the swinging of a pendulum in a clock, the workings of our consciousnesses, and the travel of light.
When two objects are moving with respect to some observer, anything traveling between them has to cover slightly more distance, because the destination object will have moved farther from the point where the signal left the source by the time it arrives. It follows that the signal must either travel faster or take longer to get there. If it is already traveling at the speed of light (as the influence of gravity does), it cannot go any faster, so it must take longer, and the interactions between the moving objects slow down from the point of view of the outside observer.
The moving object, however, still behaves just as it did before, albeit slightly more slowly. Because the workings of its own ability to perceive time — either with a clock, or with a human consciousness, or with something else — have all slowed down in precisely the same fashion, the moving object experiences time just as it did before.
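The standard light-clock argument makes this quantitative: a clock "ticks" as light bounces between two mirrors a distance L apart, and in the frame where the clock moves at speed v the light must traverse the hypotenuse of a right triangle, which yields the familiar time-dilation factor.

```latex
% Light clock at rest: one tick takes
\Delta t = \frac{2L}{c}
% Seen from a frame where the clock moves at speed v, the light travels the hypotenuse:
c\,\frac{\Delta t'}{2} = \sqrt{L^{2} + \left(v\,\frac{\Delta t'}{2}\right)^{2}}
% Solving for \Delta t' gives
\Delta t' = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}}
```

Since the denominator is less than one for any nonzero v, every process carried by such signals, including the clock itself, takes longer as measured by the outside observer, exactly as described above.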
Particularly useful sources:
I have 787 friends on Facebook. On Twitter I am following 216 people and am being followed by 285 people. I have 1,190 cards in Address Book, all of which are synced with my Google Contacts and my iPhone. I have 49 friends on Foursquare, 33 connections on LinkedIn, 18 friends on Goodreads, 8 contacts on Flickr, 1 contact on Vimeo, 0 friends on Yelp, and 146 buddies on AOL Instant Messenger.
If I want to sign up for some new website, it’s not at all easy to re-use these existing relationships: I can go through and add people individually; I can ignore the security risk, enter my Gmail login information, and selectively choose which (or all) of my 1,190 Google Contacts need an email invitation to the website; I might be able to connect with my Twitter account, but the nature of the information shared on Twitter results in the people I’m following being a strange subset of my social graph; I might be able to connect with my Facebook account, but I rarely want to publish a summary of my activity on the new site in the news feed of every single person I know on Facebook.
My social life on the Internet is somewhat of a mess, and it’s becoming increasingly unmanaged and unmanageable. Social networking websites are not going away, and I want better tools to consolidate and manage these myriad representations of my real-world relationships as I wander the Internet…
(photo of me at the xkcd book party by insunlight on Flickr | CC BY-NC 2.0)
Wanderli.st will be my attempt to solve this problem. I want to take my existing friendships and relationships with me wherever I go on the Internet. I want more powerful tools for managing my contacts, I want this private information to sync with a giant social graph in the cloud, and I want websites to access subsections of this social graph based on the permissions I grant them.
More specifically, I want Wanderli.st to help me organize everyone I’ve ever met using a simple system of custom tags (‘ITP classmates’, ‘bit.ly coworkers’, ‘Scala programmers’, ‘SXSW 2010’, etc.) and lists that are combinations of these tags (‘all of my family and photography friends, but none of my ex-girlfriends’), and then let me use those lists to automatically specify my relationships on social websites. I want an intuitive yet powerful address book application with standard fields for phone numbers and mailing addresses but also with dynamic fields for usernames on social websites. I don’t want Wanderli.st to bother with actual content — let other websites specialize in the sharing of photographs, videos, status updates, long blog posts, short blog posts, and restaurant reviews — Wanderli.st can simply be a social graph provider.
I want my social data to be device- and website-independent, and I want to be able to export all of it to a standardized XML file. But I also don’t want to worry constantly about importing and exporting, and instead I want to be able to make one change in one place whenever I make a new friend, and I want that change to be pushed automatically to all of the applicable social networks.
I’d like to be able to sign up for a new social photography website, assign that site a list (i.e. some combination of tags), and then have the option of inviting friends to that website based on some other combination of tags (perhaps I have a tag named ‘people it is okay to invite to random websites’). If I make a new friend who is interested in photography, then I want it to be sufficient for us to only have a) exchanged email addresses and b) associated our usernames on various photography websites with our Wanderli.st accounts — it will then seamlessly create our connection on those websites and automatically add those usernames to each other’s personal address books, with no “Steven has added you as a friend on Flickr” emails required.
I want to have the option of managing my privacy simply and intuitively at the level of the website, and not at the level of the individual piece of content: you can see the pictures of me drinking in college if we are friends on the site on which they are posted, but if I don’t want you to see them then I simply won’t be your friend on that site, and I can use a second site (or second account on that same site!) to share my other pictures with you.
Wanderli.st will also make it easier for me to move among social websites. Both established and fledgling websites will benefit from this because it will be easier for them acquire new users and provide existing users with the best possible social experience. Furthermore, there have been mass diasporas of users in the past as people have moved on from Friendster and MySpace, and I predict Facebook faces a similar future (more on this in a future blog post). I’m willing to re-create my social network only one more time after I’m ready to move on from where I am now (and Facebook still won’t let me export my data), but after that I want my data to be open and portable and mine so that I never have to re-friend a thousand people again.
I also intend to make Wanderli.st my ITP thesis. I have been thinking about the project for several months, and wrote up and presented an early draft of the idea in Kio Stark’s When Strangers Meet class last Spring. I think that Wanderli.st should be compatible and complementary with existing standards and upcoming proposals (OpenSocial, Portable Contacts, WebFinger, etc.), but I think it is important that the project be a new site in and of itself that hosts the data and popularizes the platform through actual successful use cases.
I’ve read that the best software is made by people who are building the tools for themselves, and I’m excited to create Wanderli.st and improve how I socialize on the Internet. If what I’ve described here sounds like something you’d like to use as well, comment below — I’ll let you know when it’s ready for beta testing.
I had been vaguely considering getting two identical computers, keeping one in my locker at school and one at my apartment, and syncing everything (files, applications, operating system, all of it) between them, but the expected technical headaches/failures made it impossible to justify the cost of two shiny new Macs. The combined stimuli of a) hearing from lots of people who love Dropbox (referral link!), b) a growing number of friends at ITP with Hackintoshed netbooks and c) an offer of an iMac to use at bit.ly when I started contracting work (as opposed to the previous internship work) made me re-examine the problem.
I decided to keep my MBP at home nearly all of the time (to prolong its lifespan), use the iMac at work, and get a netbook off of craigslist as an experiment to keep in my locker at school (I ended up getting a Dell Mini 9 for $215; installing OS X was mostly painless). I’d install OS X (one Leopard, two Snow Leopard) on all three computers, install my favorite applications (I was unwilling to use another operating system primarily because I like my Mac-only apps so much), synchronize crazily and seamlessly, and walk without being encumbered. (The fourth computer is my iPhone 3GS.)
It’s now been four weeks, and my scheme has been working well. I’m doing different things for different applications, as described below:
Note that I am very careful to quit all of these applications and let Dropbox do its thing before I shut down any of these computers to go to another place, but it keeps ‘conflict copies’ of the files in case I forget. I’m also not doing anything at all with my music beyond keeping a good chunk of it on my iPhone, and that hasn’t really bothered me yet as I don’t often need my whole music library at work or at school.
It’s hard for me to accurately describe the psychological freedom that comes with having all of my most important data easily accessible at whatever computer I might find myself in front of (in theory, I could install everything on a new machine without having any access to the others and be up and running completely comfortably relatively quickly). I’m enjoying it and have been quite satisfied.
And one more thing — I love my netbook. Train rides aside, it was incredibly practical for traveling in Europe for two weeks with my family, it’s super-easy to carry casually in one hand around the floor at ITP, the three hour battery life seems absurdly luxurious (in comparison to the ~30 minutes I get on my MBP), and it was sooo cheap.
Let me know if you have any questions or want help setting this up for yourself!
Nearly a year ago I posted about this blog’s “web idea” category, and wrote:
During conversations with friends, I regularly have ideas for websites, services, or other technologies. These conversations happen in person, over email, on instant messenger, via text message.
[…]
Ideas, I have found, are relatively commonplace; the real work is in their execution. Sometimes people guard their ideas, keeping them secret out of fear that they will be taken and implemented by someone else first. None of that here. Please, take these ideas, and please, bring them to life. I have more of them than I’ll ever have time for, and even if I were to eventually have time for all of them, they would have long-since lost their relevance. If you find yourself with sufficient knowledge, time, and interest to start work on one of these, I would love to talk about it and hear what you are thinking about what I was thinking.
I’ve continued to have these ideas, and I scrawled some of them in my notebook, left some of them in instant messenger conversation logs for later searching, and saved many of them in Things. I’ve been dissatisfied with how private all of these storage media are, and have been inspired by Alex Payne’s posts on http://ideas.al3x.net/. I’ve considered doing something similar, but I’ve worried that the formality of the text boxes on a proper blog of any sort (whether it’s here or on Posterous or Tumblr) would discourage me from regular and casual posting.
UserVoice to the rescue! It is built as a robust user-feedback tool for websites, but I realized that I could re-purpose it as a tool for organizing these ideas in a public and collaborative way. Anyone can post ideas, anyone can comment on them, and anyone can allocate one of their limited number of votes to indicate that they like an idea. I hope that this will provide a mechanism for bringing the best/most-desired ideas to the top of the list and act as a useful metric for prioritizing projects.
(Note that there are several other sites that do this sort of thing, but the others are either not free to use or do not have the same collaborative vibe. Full disclosure: betaworks has invested in UserVoice as well as in bit.ly, the startup for which I work.)
There isn’t much on http://webideas.uservoice.com yet, but I’ll be adding more as I migrate over my old ideas and come up with new ones. Feel free to contribute, and please take one and build something!
Eventually, sites for the social sharing of content appeared, and each of these maintained separate representations of the social graph. Over time people collected contacts on Flickr, friends on Facebook, and followers on Twitter, and such sites became oases of social functionality. At first these patches of green built walls to keep out the marauding hordes of anonymity, but as they grew larger they also grew more open, and they started to trade content amongst themselves.
Each real-world individual, however, was forced to maintain a separate existence in each of them simultaneously. It was difficult for people to travel with various aspects of their digital identities between walled oases, and it was nearly impossible for them to take their friends with them when they did. As a result, people were forced to duplicate their selves and their relationships. Some walled gardens tried to build roads to connect with (and undermine!) the others, but nothing really improved. Everyone had to maintain a copy of themselves in each oasis in which they wanted to produce and share content, and life was a mess for everyone. It was time to build something new, a sort of subway under the blossoming desert, so that every aspect of every person could be wherever it was appropriate for it to be, all the time, all at the same time…
The Internet has outgrown its social infrastructure. It’s becoming increasingly inconvenient and infeasible to create and maintain multiple copies of our networks, with all of their social complexities (friends, family, coworkers, acquaintances, ex-girlfriends, etc), and with all of their nuances of interaction (friend requests, @replies, emails, wall posts, blog comments, etc). Tools such as Facebook Connect are hopelessly hindered by over-saturated social graphs, pre-existing notions of privacy, and misguided attempts to pull content back into single cluttered interfaces. Identity and content aggregators such as Chi.mp, Plaxo and Friendfeed don’t provide the tools for web-wide social graph management. Put simply, we need new tools for the modern social Internet.
What will they be like?
Who will build them?
Towards the beginning of his aforementioned blog post, John Borthwick writes:
what emerges out of this is a new metaphor — think streams vs. pages
I think that this progression of metaphors is moving in the right direction, but it needs to be taken further. Streams consist of web content that is delivered directly to the user (to a Twitter client, to an RSS reader, etc), and this is in direct contrast to content that lives on specific pages to which a user must navigate in a web browser. Streams are dynamic, up-to-date and are delivered in (near) real time, while pages are static and not necessarily current.
Streams, however, are just collections of individual pieces of content, or packets. Tweets, status updates, blog posts, photos, mp3 files, and video clips are all discrete packets of content. These packets are the units which a user actually consumes as information, and streams are just a way to group those packets over time, usually based on source (such as a specific blog) or topic (such as a search term on Twitter). But there are potentially more potent ways to organize these packets than by their original source/topic, and this is important because these streams tend to be overwhelming in the aggregate. Borthwick continues about the future of content delivery via streams:
Overload isn’t a problem anymore since we have no choice but to acknowledge that we cant wade through all this information. This isn’t an inbox we have to empty, or a page we have to get to the bottom of — it’s a flow of data that we can dip into at will but we cant attempt to gain an all encompassing view of it.
I suspect that there is a more optimistic solution, however, and that there are better-than-random ways to organize the flow of content from our collections of streams. There will be some packets in these streams that are more important to individual users than others, so I want services that surface the best ones and hide the others. I predict a future in which streams are cut, rearranged, reordered and remixed into a single source of content that always has that moment’s most important/relevant/enjoyable packet at the front of the queue. The future of content on the web will be based on tools that focus on perfecting the delivery of these individual packets of information to users for consumption.
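To make that "single queue with the best packet at the front" idea concrete, here is a minimal sketch in Python. Everything in it (the Packet class, the scoring function, the sample streams) is invented for illustration; a real service would use a learned preference model rather than keyword overlap.

```python
import heapq
import itertools
from dataclasses import dataclass

@dataclass
class Packet:
    source: str   # e.g. a particular blog, or a Twitter search
    text: str

def relevance(packet, interests):
    # Placeholder scoring: overlap between the packet's words and terms the
    # user has cared about before. This is the part a real system would learn.
    return len(set(packet.text.lower().split()) & interests)

def remix(streams, interests):
    """Merge many streams into one queue, most relevant packet first."""
    order = itertools.count()  # tie-breaker so the heap never compares Packets
    heap = []
    for stream in streams:
        for packet in stream:
            # negate the score because heapq is a min-heap
            heapq.heappush(heap, (-relevance(packet, interests), next(order), packet))
    while heap:
        _, _, packet = heapq.heappop(heap)
        yield packet

if __name__ == "__main__":
    interests = {"scala", "iphone", "sms"}
    streams = [
        [Packet("blog", "New Scala tutorial posted"),
         Packet("blog", "Celebrity gossip roundup")],
        [Packet("twitter", "RapidSMS now routes malformed SMS to humans")],
    ]
    for packet in remix(streams, interests):
        print(packet.source, "->", packet.text)
```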
I agree with Borthwick, and think that the re-conceptualization of the destination web of pages into a real time stream of pages is the next (or current?) big thing. But I think the re-conceptualization of those streams as collections of individual operable packets is the big thing after that.
Anyone whose goal is "something higher" must expect someday to suffer vertigo. What is vertigo? Fear of falling? Then why do we feel it even when the observation tower comes equipped with a sturdy handrail? No, vertigo is something other than the fear of falling. It is the voice of the emptiness below us which tempts and lures us, it is the desire to fall, against which, terrified, we defend ourselves.
– Milan Kundera, Czechoslovakian novelist (1929 — ), in The Unbearable Lightness of Being
I have vertigo. My somewhat lofty goal is to read and digest all of the information that interests me, as it is created in real time, regardless of medium. My desire to fall is my desire to abandon all of my information sources, not bother keeping up with anything, and fall endlessly into ignorance. And this terrifies me; at the very least, the Internet could come equipped with better handrails.
I am interested in information from a variety of sources — blogs, people on Twitter, email lists, search terms in the NY Times, etc — and I subscribe to these things because I think they are worth reading. Although I wish I could read all of it, I know I can’t. But I want a better way to read only some of it, without having to face the infinities that I don’t have time to read, without having to make painfully arbitrary decisions about what to read and what to ignore, and thus without having the subsequent vertiginous desire to give up, declare email bankruptcy, and read none of it at all. So the question is, then, one of designing a sturdier handrail that I can grasp while observing information on the Internet as it streams by. And that handrail must be a tool for filtering content, not a source that recommends even more.
Recommendation sites/services such as Digg, Reddit and StumbleUpon have (as far as I know) user-preference modeling algorithms that select what content to show to users, based on what those users, and other users with similar preferences, have liked in the past. Netflix does something similar to make movie recommendations. They're good systems, and have some cool machine-learning stuff going on, but I find their application to be conceptually inverted.
I want to read the blogs of danah boyd, Jan Chipchase, and John Gruber. I want to follow Alex Payne, Jorge Ortiz, and 180-odd others on Twitter. Yet it's too much. As Clay Shirky pointed out at the Web 2.0 Expo, the problem is not 'information overload' of ever-increasing magnitude but 'filter failure': content was once filtered primarily by editors and publishers, yet those systems are crumbling, and I no longer have effective filters for this smorgasbord of carefully selected and professionally prepared feeds.
And I certainly don’t need Digg/Reddit/StumbleUpon to make additional recommendations. But given that I’m already not going to see all of the things I know that I care about, why can’t those same algorithms be used to filter incoming content instead? These information filtration systems would ideally have a few particular characteristics:
I don't see this problem of perceived information overload (and consequent vertigo) getting any better on its own. Are there other solutions I'm not seeing? Anyone looking for a new giant software project?
Click through for the Flickr page, and never sleep.
Hi,
I've dreamed about having a tiling window manager for years, and I was fortunate enough to see Alex Payne's tweet making a similar lament; I found SizeUp as a result. I've been using it for a week or two now, and I love it, both on my 15″ MacBook Pro and when the MBP is plugged into my 30″ external display. Nicely done.
I have a suggestion though — I wish I had slightly more control over how windows were sized. I understand how it could be a user interface and/or Mac OS X nightmare to let users specify custom sizes or behavior for how windows snap to other windows. But at the same time, I’d love to be able to assign windows to a grid location occupying less than a quarter of my screen. For example, Adium chats don’t need that much screen real estate, even if iCal and Firefox do.
Rather than use the arrow keys to indicate location, what if you used the physical QWERTY keyboard locations of the letter keys? In its simplest form, W S A D could correspond to Up Down Left Right. But you can do more interesting things too, and split a screen into a six- or nine-rectangle grid using Q W E A S D or Q W E A S D Z X C. Any one of those keys would place a window in the respective corner of the grid, so [shortcut combination]+Q on a six-rectangle grid would make the window use one sixth of the total size, with a width equal to a third of the screen's width and a height equal to half of the screen's height. If you wanted a larger window, you could press multiple adjacent letter keys simultaneously — [shortcut]+Q+A would make the window take the leftmost third of the screen, and [shortcut]+Q+W+E would have the same effect as the current [shortcut]+Up. It might make sense to use shortcut keys on the right side of the keyboard with these letters (shift+option seem to be sufficiently unused in this context), or you could use a different set of letters and keep the shortcut keys where they were. In the SizeUp Preferences, users could even specify the number of rows and columns in the grid they wanted to use and the keys they wanted to assign to each screen location.
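To make the geometry concrete, here is a rough sketch in Python of the math I have in mind for the nine-rectangle Q W E / A S D / Z X C layout (the grid, key assignments, and screen size are my own assumptions for illustration, not anything SizeUp actually does): take the bounding box of whichever cells are pressed and scale it to the screen.

```python
# Hypothetical sketch: turn pressed letter keys on a 3x3 grid into a window frame.
GRID_ROWS, GRID_COLS = 3, 3
KEY_TO_CELL = {                     # (row, col), top-left origin
    "q": (0, 0), "w": (0, 1), "e": (0, 2),
    "a": (1, 0), "s": (1, 1), "d": (1, 2),
    "z": (2, 0), "x": (2, 1), "c": (2, 2),
}

def frame_for_keys(keys, screen_width, screen_height):
    """Return (x, y, width, height) covering the bounding box of the pressed cells."""
    cells = [KEY_TO_CELL[key] for key in keys]
    rows = [row for row, _ in cells]
    cols = [col for _, col in cells]
    cell_width = screen_width / GRID_COLS
    cell_height = screen_height / GRID_ROWS
    x = min(cols) * cell_width
    y = min(rows) * cell_height
    width = (max(cols) - min(cols) + 1) * cell_width
    height = (max(rows) - min(rows) + 1) * cell_height
    return x, y, width, height

print(frame_for_keys("q", 2560, 1600))    # top-left ninth of a 30-inch display
print(frame_for_keys("qwe", 2560, 1600))  # full width, top third of the height
print(frame_for_keys("qa", 2560, 1600))   # leftmost third of the width, top two thirds
```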
(This idea is somewhat reminiscent of a still-incomplete project I was doing a few months ago: a Javascript portfolio website that used the keyboard for navigation, had a representation of the keyboard on the screen, had all content exist on that visual representation of a keyboard but at different zoom levels, and used the physical keyboard keys and spacebar to zoom in and out on the on-screen keyboard to view content at different levels in the hierarchy. There are a few blog posts about it here.)
Sorry that was so long-winded, and please email me if you want to discuss further or need me to clarify.
Keep up the good work!
Best,
Steven Lehrburger
Recognizing that Textonic was a larger project than I was going to be able to finish in my free time over the summer (or that my group was going to be able to finish in our collective free time), I decided that the project would benefit from a more formal web presence than a handful of blog posts and a GitHub page. I registered a domain, set up WordPress, presented what we had accomplished, laid out what there is to be done, and tried to create a place where people could express their interest in getting involved.
The Conversation page on the new site is of particular interest. I’m using a Twitter search for the term ‘textonic’ as a sort of guestbook or message board. People who find the site and are interested can look to see who else has been there as well as when they expressed interest. Twitter itself can serve as a way for them to get in touch. The intransient1 and public (since all of a user’s followers will also see the tweets) nature of these expressions of interest will help to catalyze the formation of a community around the project.
Credit for this idea goes to @n8han (who writes the blog Coderspiel) — I first saw it on his site for Databinder Dispatch. In addition, recent ITP graduate @joshbg2k created something similar for his Überbaster project.
Note that Twitter's search API only exposes the last ~3 months of tweets, so at some point I'll need to archive the messages so that the entire conversation history is displayed.↩
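The archiving itself could be a small cron job that polls the search API and appends anything new to a local file. A sketch of the idea in Python; the search.twitter.com endpoint and the "results"/"id" field names are how I understand the Search API, so treat them as assumptions to verify against the current documentation.

```python
import json
import urllib.request

ARCHIVE = "textonic_tweets.json"
# Assumed Twitter Search API endpoint and parameters; check the current docs.
SEARCH_URL = "http://search.twitter.com/search.json?q=textonic&rpp=100"

def load_archive():
    try:
        with open(ARCHIVE) as f:
            return json.load(f)
    except IOError:
        return {}

def save_archive(tweets):
    with open(ARCHIVE, "w") as f:
        json.dump(tweets, f, indent=2)

def archive_search_results():
    archived = load_archive()   # {tweet id: tweet dict}
    with urllib.request.urlopen(SEARCH_URL) as response:
        results = json.loads(response.read()).get("results", [])
    for tweet in results:
        archived.setdefault(str(tweet["id"]), tweet)
    save_archive(archived)
    return len(archived)

if __name__ == "__main__":
    print(archive_search_results(), "tweets archived so far")
```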
I was awarded a student scholarship to attend the annual Apple Worldwide Developer Conference, so I’ll be flying to San Francisco in June. According to the website, the event “provides developers and IT professionals with in-depth technical information and hands-on learning about the powerful technologies in iPhone OS and Mac OS X from the Apple engineers who created them.” Many of the planned sessions sound interesting, and I’m also looking forward to the Apple Design Awards. I’ll probably spend the next month working on Meetapp (rather than Delvicious) so that I’m better prepared for the iPhone development labs.
I’m also planning to attend the Scala Lift Off on June 6th, a conference for developers using Scala and the associated web framework Lift. I hope to have more time to talk with some of the people who attended the Scala BASE meeting at which I presented my TwiTerra project in January. Martin Odersky, the creator of Scala, will also be speaking, and my friend Jorge Ortiz plans to attend as well.
I’m excited to go to both events and to visit other friends in California, so I’ll be sure to post about them again.
A very alpha version of Delvicious is live at the above link! You can currently sign in with your Delicious account (or use one of the ones I posted towards the bottom of the GitHub page), tell it to fetch all of your bookmarks (making a copy of the URLs in Google's datastore), and then perform searches of those pages using a Google Custom Search Engine that is created on the fly1 from XML files that are automatically generated and served by App Engine. I've had a little trouble getting the CSEs to work the first time a user tries to use the search box, and I suspect that the search engines are not immediately ready for use after their (implicit) creation (which I think happens when the user first tries to do a search). One of the first of many next steps is to find a way to initialize that search engine automatically and inform the user (via email) when it is ready. If it's not working for you, wait a little while and try again later.
There’s a lot of other functionality I want to add, and other next steps include:
I will continue to work on the project over the summer, and I hope to launch an initial version within the next month or two.
I did this using Linked Custom Search Engines, and it's worth noting that this was not the original plan. I had intended to ask users to authenticate with both Google and Delicious, and use the Google authentication for the creation of and interaction with CSEs using HTTP requests. CSEs only support Client Login authentication, however, which is intended for desktop/mobile applications and requires users to manually type in their usernames and passwords. I knew that I would not be able to convince users to give me their unencrypted Google login information (nor did I really want to possess it), so, without the ability to authenticate Google users on a google.com Google sign-in page, I was going to need an alternate solution.↩
David Nolen’s Little Computers class is now over, and I’ve made substantial progress this semester on Meetapp, the iPhone application I have been building for users of Meetup.com. Currently, it will fetch a list of a user’s groups from the API at that user’s request, make secondary API requests to fetch the events associated with each of those groups, and then display all of those events (i.e. all of those events that a user might be attending) in a table. It will also filter that table to show only those events that the user is organizing (rather than simply attending), and the user can drill down for more detailed information about both types of events. So far, I have –
The app has some use in its current form, but there's a lot I still want to do before it's ready to be posted for the public in Apple's App Store.
The most recent code is on GitHub, and please feel free to contact me with questions about my progress so far or future plans. I hope to continue the project later this summer, but Delvicious will be my prioritized side-project, at least for the time being. I’m looking forward to watching the lectures from Stanford’s iPhone class to brush up when I jump back in to Objective-C in a couple of months.
I've had to focus on other projects for the past couple of weeks, but I finally got to turn my attention back to Delvicious. It will now:
There’s enough functionality to make it worth visiting the appspot page here. Sign in with Google, enter your Delicious credentials, and fetch your bookmarks – you should see them displayed in a list. You could then make a custom search engine, go to the advanced tab on the left side bar, and add a URL of the form delv-icio-us.appspot.com/delvicious/annotations/[your_delicious_username].xml
as an annotations feed. Your bookmarks should then start appearing in your search results. Test out the new version (that searches memento85’s small number of Delicious bookmarks) below:
I need to clean up the interface navigation, but the real next step is to dive into the CSE API documentation and automate the creation of the search engine so that one is automatically paired with a Google user's Delicious account.
Note the inaugural use of the 'thesis' category! It's not due for over a year, but this is the task, or at least an early formulation of the task, that I want to tackle with my final ITP project.
Acquire some text. Visualize it. Source and methodology are up to you, but be prepared to justify your choices.
I decided to use my papers from college as my source text. I copy-pasted the contents of the papers into plain text files, and had hoped to see how my vocabulary evolved through time (hence the project name… not the most clever one I've come up with, but it will do for a weekly assignment). (Note I didn't include the writing I did for certain technical Linguistics, Computer Science, and Physics classes. I also didn't include papers that were group-authored.)
In week four of the class we had looked at how to represent word counts as a hashmap of words to the number of times that they occurred in a text (see the WordCount class in the notes). WordCount extends TextFilter, however, and TextFilter is built to only be able to read data from a single file. I thought about combining the files into one or trying to use multiple TextFilters, but it seemed easier and more elegant to start from scratch.
Scala seemed like it would be well-suited to this sort of problem, and I was eager to find a use for it since it had been a couple months since I last worked on TwiTerra. My aforementioned friend Jorge pointed me towards a class written to parse Ruby log files; that code, which uses a Scala community library called Scalax, served as a useful starting point.
You can see the full source code for the assignment here, but I’ve pasted a particularly interesting function below. There’s some “deep Scalax magic” going on here (as Jorge says), which I’ll explain —
The function takes a list of the names of the files mentioned above, and returns a map that has each word mapped to another map, and each of those maps has the names of the files in which that word occurred mapped to the number of times that word occurred in that file. There are three parts of the function:
1. First, it builds `wordsInFiles`, a long list of word and file name pairs, with an entry for every word on every line of every file.
2. Next, it creates an empty map (`emptyMap`) with default values for the words and file names, and 0 for the word counts. This eliminates the need for a lot of hassle later on checking to see if words/file names are in the map — we can just assume they are there and trust it to use the default values if they aren't.
3. Finally, it operates on each pair in the `wordsInFiles` list and updates the map accordingly.

`foldLeft` is explained thoroughly in the Ruby log file example linked above, but I'll go through this specific case. It starts off with the `emptyMap`, goes through each pair in the `wordsInFiles` list, and performs a function on the pair of that map paired with that pair from the list (`(map, (word, filename))`) to fold that list pair into the map. The result of that fold is a map that is then used in the fold of the next item in the list, and this process continues for each `(word, filename)` pair in the `wordsInFiles` list.
The function performed on each item in the list is not as complicated as it looks, and note that each of the commented lines is equivalent to the uncommented one — I left them to show the progressive application of syntactic sugar (which I won’t go into here). The purpose of this next line is to increment the count of the number of occurrences of the current word in the current file.
The outermost `map.update` finds the word in the map and replaces the map with which it is associated with a new one. This new map needs to be an updated version of the previous map, which we retrieve with `map.apply(word)`. We want to update only one of the values corresponding to the file names in that word's map of file names to occurrence counts, so we need to get the previous count (using two `map.apply`s to get to the value of a key in a map within a map) and increment it before the resulting map is sent to the update.
… “deep Scalax magic.”
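For readers who don't speak Scala, here is a rough Python equivalent of the data structure that fold builds (my own sketch of the same idea, not a translation of the actual Scalax code): a map from each word to a map from file name to occurrence count, where `defaultdict` plays the role of `emptyMap`'s default values.

```python
from collections import defaultdict

def word_counts_by_file(filenames):
    """Return {word: {filename: count}} for every word in every file."""
    # defaultdict means we never have to check whether a word or file
    # name is already present, just like the default values in emptyMap.
    counts = defaultdict(lambda: defaultdict(int))
    for filename in filenames:
        with open(filename) as f:
            for line in f:
                for word in line.split():
                    counts[word][filename] += 1
    return counts

# counts = word_counts_by_file(["paper1.txt", "paper2.txt"])
# counts["language"]["paper1.txt"] -> times 'language' appears in paper1.txt
```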
I've saved the results of the visualization in the below images. The program didn't quite create the effect that I had intended — that of showing how my vocabulary evolved over time — but it did give some sense of the topics about which I was thinking and the words I used to describe them. I thought about eliminating common words, but it seemed like it would be hard to make those decisions in a non-arbitrary manner. Click each image for a larger version in which the differences are more visibly apparent, and I recommend opening the images in tabs and cycling through them with shortcut keys so that it's easier to make quick comparisons.
Since I had already worked with XML and web services for my midterm project, I decided to take this week's assignment for Programming A to Z as an opportunity to continue work on that project. I thought that a good next step would be to use App Engine's Datastore to store the necessary information about all of a user's bookmarks, and I would again access that data initially as an XML document obtained from the Delicious API.
I had already begun to learn the Python web framework Django for the Textonic project for Design for UNICEF, and it seemed like it would provide a rich set of useful tools for this project as well. The Datastore does not work, however, like the relational databases with which I am more familiar. Django relies by default on a relational database and is thus incompatible with Google’s Datastore out-of-the-box, but there are two software projects that aim to reconcile these differences. Google’s App Engine Helper (tutorial, code) seemed less well- and less actively-developed than app-engine-patch (tutorial, code), so I decided to go with the latter.
Django is quite powerful and gives you a lot of functionality for free, but when trying to branch out from various tutorials I encountered the somewhat strange challenge of figuring out exactly what it did for you and what it didn't. It took me quite a while to get the hang of working with Django on App Engine, so I didn't have time to actually get the XML stored in a reasonable set of Models in the database. I have, however, gotten successful storage of Delicious logins to work, which is a good first step. The updated code is available on GitHub, and I should be able to make additional improvements soon.
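For what it's worth, the Datastore side of this shouldn't require much code once the app-engine-patch plumbing is sorted out. A minimal sketch of the kind of models I have in mind, using App Engine's `db` API directly (the property names here are my own guesses, not the ones in the repository):

```python
from google.appengine.ext import db

class DeliciousAccount(db.Model):
    """A Delicious login tied to the Google user who saved it."""
    owner = db.UserProperty(required=True)
    delicious_username = db.StringProperty(required=True)
    delicious_password = db.StringProperty(required=True)  # really ought to be encrypted

class Bookmark(db.Model):
    """One Delicious bookmark, stored so its URL can appear in the annotations XML."""
    account = db.ReferenceProperty(DeliciousAccount, collection_name="bookmarks")
    url = db.LinkProperty(required=True)
    title = db.StringProperty()
    tags = db.StringListProperty()
    saved_at = db.DateTimeProperty(auto_now_add=True)
```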
I've studied context-free grammars in the past (ahh undergrad memories of Ling120 (syllabus here, but not the one I had)), so I have a good sense of how they work. I made a few quick modifications to the grammar file used by Adam's ContextFilter class to handle certain conjunctions, adverbs/adverbial phrases, and prepositional phrases. I also made some modifications to support quotations, but they aren't particularly refined — I couldn't come up with a simple solution to the nesting problem in the below example that didn't involve duplication of many of the other rules:
this smug dichotomy thought “ that smug walrus foregrounds this time that yelled ” the seagull that spins this important sea said “ that restaurant that sneezes has come ” “ ”
Thinking about ways to solve this problem highlighted how CFGs can get rather complex rather quickly. When, for example, do you want a period/comma at the end of the quotation? When is one period/comma sufficient for multiple quotations? How do you resolve those situations programmatically?
My test.grammar file is online here, and you can run it with Adam’s Java classes that are zipped here. I recommend you only test a few of my rules at a time and comment out the others — otherwise you might get sentences like this:
the important trombone yelled “ wow, the blue amoeba said ” oh, this blue thing interprets this seagull that said “ wow, the trombone sneezes ” “ but this amoeba said ” this suburb that quickly whines in that sea that daydreams by that corsage that quickly or habitually computes this dichotomy slowly yet habitually vocalizes and damn, that luxurious restaurant habitually prefers this seagull “ but that suburb interprets the time yet damn, that sea said ” that restaurant tediously slobbers “ yet wow, that seagull quickly yet slowly foregrounds the restaurant or the boiling hot time spins this bald restaurant of this trombone but that amoeba computes this smug restaurant but the seagull that quickly or slowly prefers this sea yelled ” oh, the thing that spins this restaurant tediously yet quietly whines “ or wow, this amoeba coughs through the important sea and oh, that time tediously yet slowly coughs yet oh, that trombone habitually computes that luxurious suburb or wow, that thing that said ” my, that amoeba that said “ my, that time tediously or slowly sneezes ” quietly yet tediously foregrounds that trombone that has come “ said ” my, the restaurant that habitually but quietly prefers the dichotomy that said “ damn, this boiling hot trombone quietly slobbers by the trombone that quietly coughs ” said “ oh, the amoeba habitually coughs ” “ yet damn, this seagull that spins the seagull that sneezes for the important trombone quietly coughs but my, this sea that slowly yet habitually has come foregrounds this dichotomy and that blue thing tediously slobbers ”
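For anyone who wants to poke at these questions outside of Adam's Java classes, here is a tiny random CFG expander in Python. The toy grammar is my own, not the rules in test.grammar, but it shows both how quotation rules recurse and one programmatic way (a depth cutoff) to keep nested quotes from running away.

```python
import random

# A toy grammar: each nonterminal maps to a list of space-separated productions.
GRAMMAR = {
    "S": ["NP VP .", "NP VP , and NP VP ."],
    "NP": ["the walrus", "that seagull", "this dichotomy"],
    "VP": ["sneezes", "spins NP", 'said " S "'],   # the quoted S allows nesting
}

def expand(symbol, depth=0, max_depth=6):
    """Recursively expand a symbol, cutting off recursion so nested quotes terminate."""
    if symbol not in GRAMMAR:
        return symbol                                   # terminal word or punctuation
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        # Once deep enough, prefer productions that don't recurse back into S.
        non_recursive = [p for p in productions if "S" not in p.split()]
        productions = non_recursive or productions
    production = random.choice(productions)
    return " ".join(expand(s, depth + 1, max_depth) for s in production.split())

if __name__ == "__main__":
    for _ in range(3):
        print(expand("S"))
```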
For the assignment on Bayesian classification I combined Adam's BayesClassifier.java with the previous Markov chain examples to use n-grams instead of words as the tokens for analysis. BayesNGramClassifier.java can be found here as a .txt file, and you can download all of the required files here. Note you might have to increase the amount of memory available to Java to run the analysis with higher values for n. Try something like `java -Xmx1024m BayesNGramClassifier 10 shakespeare.txt twain.txt austen.txt sonnets.txt` if you're having trouble.
I compared sonnets.txt to shakespeare.txt, twain.txt and austen.txt as in the example using various values of n for the analysis. The data is below, with the word-level analysis first. Note that higher numbers (i.e. those closer to zero) indicate a greater degree of similarity.
| n | shakespeare.txt | twain.txt | austen.txt |
|---|---|---|---|
| word | -59886.12916378722 | -64716.741899235174 | -66448.68994538345 |
| 1 | -311.94997977326625 | -348.2797252624347 | -351.8612295074756 |
| 2 | -6688.356420467105 | -6824.843204592283 | -7076.488251510615 |
| 3 | -46332.8806624305 | -47629.58502376338 | -49557.906858505885 |
| 4 | -155190.04376334642 | -161815.95665896614 | -167839.50470553883 |
| 5 | -350322.9494161118 | -369897.08857782563 | -379600.90797560615 |
| 6 | -581892.4161591302 | -620871.7848829604 | -629557.118086935 |
| 7 | -798094.5896325088 | -851043.4785550251 | -856926.3903304675 |
| 8 | -977428.4318098201 | -1033391.2297240103 | -1037851.0025613104 |
| 9 | -1106125.9701775634 | -1153251.0919529926 | -1159479.8816597122 |
| 10 | -1184654.361656962 | -1218599.6808817217 | -1227484.9929278728 |
| 11 | -1221770.2880299168 | -1242286.351341775 | -1255024.535274092 |
| 12 | -1228641.7908902294 | -1238848.5031651617 | -1254404.827728626 |
| 13 | -1214247.043351669 | -1217403.6233457213 | -1235480.9184978919 |
| 14 | -1187489.0276571538 | -1186476.2556523846 | -1205959.6178494398 |
| 15 | -1153511.2780243065 | -1150529.1594209142 | -1170746.8132369826 |
When the n-grams become 14 characters long (which is very long, considering the average length of English words) the analysis finally starts to break down, and it no longer correctly classifies sonnets.txt as being most similar to shakespeare.txt. Some values of n certainly perform better than others, but I’d need to delve further into the mathematics of how these numbers are calculated in order to do more detailed analysis.
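For readers curious where numbers like these come from, here is a stripped-down sketch of character-n-gram scoring in Python (my own simplification, not Adam's BayesClassifier): each training text contributes n-gram counts, and a candidate text's score is the sum of the log probabilities of its n-grams under that model, so scores are always negative and closer to zero means more similar.

```python
import math
from collections import Counter

def ngrams(text, n):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train(text, n):
    counts = Counter(ngrams(text, n))
    return counts, sum(counts.values())

def log_likelihood(candidate, model, n):
    counts, total = model
    vocabulary = len(counts) + 1                 # room for unseen n-grams
    score = 0.0
    for gram in ngrams(candidate, n):
        probability = (counts[gram] + 1) / (total + vocabulary)   # add-one smoothing
        score += math.log(probability)
    return score      # always negative; closer to zero = more similar

if __name__ == "__main__":
    n = 3
    sonnets = open("sonnets.txt").read()
    for name in ["shakespeare.txt", "twain.txt", "austen.txt"]:
        model = train(open(name).read(), n)
        print(name, log_likelihood(sonnets, model, n))
```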
I've made some initial progress on organizing the interface for my Meetup.com iPhone Application. I have a tab bar at the bottom, table views with navigation bars for each tab, and an array of strings populating one of the table views with sample text. I started off with David's UITableView tutorial, but ran into a series of problems when I tried to integrate it into Apple's Tab Bar Application project template. I eventually gave up on using Interface Builder and decided to do the entire thing programmatically using this excellent tutorial that I found online. That worked without any trouble, and I was able to modify the example to serve as the basics of my application.
I have uploaded what I have so far to a repository on GitHub – more frequent updates will be committed there, but I’ll also post here at major milestones.
A side note – Joe Hewitt, the developer of the Facebook iPhone app, recently open-sourced several of the iPhone libraries that he used as the ‘Three20 Project’. They look like they might be useful, and the post certainly deserves a link and a thank you.
*We might not actually use this name, but I like it and am going to enjoy it, at least for now.
Our RapidSMS/Mechanical Turk project is moving forward. Last week we met with the software development team at UNICEF that built RapidSMS, re-focused our efforts on creating a tool to process incoming SMS messages with Mechanical Turk, and divided up the tasks to complete before our next meeting. I thought about what specific features we would need to provide so that administrators of the system in the field could set it up and configure it to work with RapidSMS. I made a few slides to present the ideas to our group, and the deck is below.
(I made both this presentation and the previous Meetapp presentation with 280 Slides, a web-based presentation editor made by a startup called 280 North. Give it a try — I find it great for sharing presentations, and I prefer it to working with Google Docs.)
I started off by looking at various options for getting a user's bookmarks on the Delicious Tools page. I decided to use RSS feeds for the very first version, but those are limited to 100 bookmarks and I knew that I'd have to switch to something else later. I spent a long time familiarizing myself with Google's Custom Search Engine tools — there are a lot of options for customizing the sites available to the custom searches. I ultimately decided that I needed the power and flexibility of a self-hosted XML file of annotations that would contain the URLs of the sites to be searched. In addition, this seemed like a good project to start learning a programming language called Python.
I dusted off an old Delicious account, memento85, and added a few random bookmarks. I made a Python script that retrieved the most recent 100 bookmarks for a user as a JSON object and wrote the URLs from those bookmarks to a properly formatted annotations XML file. It took some trial and error, but Python was generally painless and you can see the script that I used here as a txt file. I then set it to run a few times an hour as a cron job on my server, and this made sure that my annotations XML file would be updated when my bookmarks changed. (Note that changes to the XML are not immediately reflected in the CSE, but this is ok — people can remember the sites that they've been to in the last few hours on their own).
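The script itself is linked above only as a .txt file, so here is a sketch of its general shape in Python. The Delicious JSON feed URL, the "u" field name, and the CSE label are from memory and should be double-checked against the current documentation; this is an illustration of the approach, not a copy of bookmarksearch.py.

```python
import json
import urllib.request
from xml.sax.saxutils import escape

USERNAME = "memento85"
# Assumed Delicious public JSON feed of a user's recent bookmarks (up to 100).
FEED_URL = "http://feeds.delicious.com/v2/json/%s?count=100" % USERNAME
OUTPUT = "annotations.xml"
LABEL = "_cse_exampleid"   # placeholder; use the label of your own CSE

def fetch_bookmarks():
    with urllib.request.urlopen(FEED_URL) as response:
        return json.loads(response.read())

def write_annotations(bookmarks):
    with open(OUTPUT, "w") as f:
        f.write("<Annotations>\n")
        for bookmark in bookmarks:
            url = escape(bookmark["u"], {'"': "&quot;"})   # 'u' holds the bookmarked URL
            f.write('  <Annotation about="%s">\n' % url)
            f.write('    <Label name="%s"/>\n' % LABEL)
            f.write("  </Annotation>\n")
        f.write("</Annotations>\n")

if __name__ == "__main__":
    write_annotations(fetch_bookmarks())
```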
Once that was working I set up a second CSE to use another Delicious account, lehrblogger, that had many more of my bookmarks imported. The annotations file made by bookmarksearch.py for this account looked like this. Adding this XML as the annotations feed in the CSE results in the following functional custom search engine — try it out below or go to the search's homepage.
But why, exactly, is such a thing especially useful? Let’s say that I am looking for a specific site that I am sure that I had bookmarked a while ago and want to find again. I know it has something to do with SMS, but can’t be sure of any other keywords. If I do a search for ‘sms’ on my Delicious account, I get only one result. It is returned by the search because I tagged this result with ‘sms’, but perhaps it is not the site I was looking for and I am still certain that the other site is in my bookmarks. I could use the Custom Search Engine to search the full text of these same bookmarks, and this returns these four results, the one found by Delicious and three others. If I had been looking for, say, the first result, it would have been very difficult/tedious to find with only the tools offered by Delicious.
After that initial part of the project was both working and useful I started to think about ways in which it could be expanded. The Google CSE supports refinements, or categories of search results, which allow a user to quickly filter for results of a given topic. I thought there was a nice parallel between refinements and Delicious’ tags, and it seemed like a good next step to use the tags as refinements by pairing them in the annotations XML file with their respective URLs.
This feature also requires, however, that the main file that defines the CSE list all of the refinements. Google does provide an API for easily modifying this file, but a user must be authenticated with Google as the owner of the CSE. I needed the updates to happen regularly as part of a cron job, and it would not work for each user to need to authenticate (i.e. type in her Google username and password) each time the CSE was updated. Even if I found a way to use authentication data as part of the cron job, I was concerned about storing that sort of sensitive information on my own server.
Thus it made sense to make a much larger jump forward in the project than I intended so early on: I decided to rebuild the application to run on Google’s App Engine. App Engine is a scalable hosting/infrastructure system on which to build rich web applications, and it offers substantial free bandwidth and CPU time as well as a competitively priced payment plan for larger/more popular applications.
App Engine uses Python and is well documented, so I dove in with the Hello World example. A good first step seemed to be to get the annotations XML file populated with the bookmarks returned by a Delicious API call made from App Engine, and next I needed a way to serve that file at a persistent URL for the CSE to use. These things were more challenging than I expected — I had difficulty authenticating with Delicious, parsing the XML (as opposed to JSON) data that came back, and finding a way to serve those URLs as a properly formatted XML file. I initially looked for a way to write the URLs to a static file, but eventually found a detailed tutorial on writing blogging software for App Engine, and I was able to adapt the RSS publishing portion of that example for my purposes.
The annotations XML files were now being published to URLs such as http://delv-icio-us.appspot.com/annotations/memento85, though currently this only works for that one user, and you can actually put whatever you want after "annotations/". Once I had App Engine making a call to the Delicious API and serving the resulting bookmark URLs in an annotations XML file, it was easy to set up a new custom search engine, this time for the handful of bookmarks of memento85.
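Serving those files is mostly a matter of a single request handler. A sketch of the idea using App Engine's webapp framework (the handler is illustrative and hard-codes its URLs; the real version would look the user's bookmarks up in the Datastore):

```python
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class AnnotationsHandler(webapp.RequestHandler):
    def get(self, delicious_username):
        # In the real app the URLs would be queried from the Datastore for this user.
        urls = ["http://example.com/", "http://www.itp.nyu.edu/"]
        self.response.headers["Content-Type"] = "text/xml"
        self.response.out.write("<!-- annotations for %s -->\n" % delicious_username)
        self.response.out.write("<Annotations>\n")
        for url in urls:
            self.response.out.write('  <Annotation about="%s">\n' % url)
            self.response.out.write('    <Label name="_cse_exampleid"/>\n')
            self.response.out.write("  </Annotation>\n")
        self.response.out.write("</Annotations>\n")

application = webapp.WSGIApplication(
    [(r"/annotations/(.*)", AnnotationsHandler)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```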
Because the process of getting the above working involved so much trial-and-error, and because I intend to continue developing the project into a more complex application, I set up a GitHub project for the App Engine portion of Delvicious. There are many, many things left to do before the project is complete. I will need to:
I’m excited about implementing these features and hope to continue this project for the remainder of the course. Delvicious will be a good opportunity to learn Python, familiarize myself with building web applications using the App Engine, and create a mashup that people might find truly useful.
I am going to build an iPhone application for users of Meetup.com for my primary project in Little Computers. Meetup is a pretty cool company — if you're unfamiliar, it's worth reading the following and checking out their site.
Meetup is the world’s largest network of local groups. Meetup makes it easy for anyone to organize a local group or find one of the thousands already meeting up face-to-face. More than 2,000 groups get together in local communities each day, each one with the goal of improving themselves or their communities.
Meetup’s mission is to revitalize local community and help people around the world self-organize. Meetup believes that people can change their personal world, or the whole world, by organizing themselves into groups that are powerful enough to make a difference.
I considered many alternative apps to build — xkcd, Diplomacy, a super simple app for exchanging contact info, a very complex app for music sharing, and others — but settled on this one. It seems like something that could be truly useful to a lot of people, doable within the course of the semester, and challenging and interesting to make. The slide deck and sketches I made to present the idea to the class are below.
After the keynote, the NYT developers presented the following APIs —
The one that seemed to have the most potential and significant uses was the Newswire API — it is "an up-to-the-minute stream of links and metadata for items published on NYTimes.com. […] Better than RSS, the Times Newswire API offers chronologically ordered cross-site results, including rich metadata." I also thought ShifD had potential as a basis for building interesting things — it's a tool for coherently shifting content and information between devices and contexts, and Ted Roden, one of the developers, gave a cool demonstration of a quick little app that can track personal expense data in a spreadsheet using SMS messages. The Times People API also seemed to be promising, and it is built so that its content-sharing capability can be incorporated into that of other social networks. Derek Gottfrid, Senior Software Architect at the NYTimes, put it well: "Our goal is not to own the social graph — we actually have a pretty good news and information site." Hopefully this becomes a trend in social networking websites — they should focus on applications or content (like Twitter does on communication, Flickr does with photos, or Times People does with articles) and allow themselves to be integrated into coherent user experiences. Facebook, in contrast, offers only limited integration between its own functionality and that of external sites, and keeps the user in a 'walled garden' cut off from the rest of the internet. I'm formulating a separate post on this topic for later.
The final presentation was from Jacob Harris, who works on the Interactive Newsroom Technologies Team. He described interactive news as “kind of like pornography — you know it when you see it, but it’s kind of hard to define” (haha), and listed three essential components: data, story, and interaction. He presented a few recent interactive pieces, including the presidential election map used in the 2008 election (which was the best and most informative available online), a confidential government document that had been leaked to the NYT and posted online with associated metadata and enhanced browsing ability, and an easily accessible database of the prisoners at Guantanamo Bay.
I had an idea about considering NYTimes.com as one larger interactive experience, with articles and other media as the individual pieces of data, so I asked about it. Jacob said that his department 'tried to stay small and do one-off things, rather than deal with rearranging the homepage and interacting with 80 committees,' which makes complete sense. But it reminded me that I had heard once of Google making tiny, pixel-level changes to the home page and using the behavior of huge numbers of people to determine what was the best design. I also remembered a presentation I heard at a NY Tech meetup several months ago from someone who worked at the Huffington Post — they have a real-time traffic monitoring system to determine which articles on their main page are the most popular, allowing them to rearrange their content layout dynamically. I wonder if the NYT could use similar strategies to optimize their website?
As a side note, Adam Harvey (above left), another first-year student from ITP, was at the event. Nathan (above right), whose blog I was reading last semester when I was starting to work with Processing and Scala, was there too.
And, as a former aspiring architect, I must comment that Renzo Piano’s New York Times Building was both well designed and well executed. The lobby was spacious and inviting, the (climbable) rods along the sides filtered the light nicely and cast interesting shadows, the finishes were attractive, the spaces were pleasant and flowed nicely, and the elevators (with the floor buttons in the waiting hallways) were very cool.
Photos are from everyplace and Times Open on Flickr, thanks!
Mobile Tech 4 Social Change Barcamps are local events for people passionate about using mobile technology for social impact and to make the world a better place. Each event includes interactive discussions, hands-on-demos, and collaborations about ways to use, deploy, develop and promote mobile technology in health, advocacy, economic development, environment, human rights, citizen media, to name a few areas. Participants for Mobile Tech 4 Social Change barcamps include nonprofits, mobile app developers, researchers, donors, intermediary organizations, and mobile operators.
The event began with a talk (via Skype!) from Ethan Zuckerman, who I had seen speak previously in my Applications class last semester. He’s been involved in various service projects in Africa and is a co-founder of Global Voices Online, a community brought together by ‘bridgebloggers’ that translate posts between languages and cultures. His most salient and useful point was that mobile technology was most powerful when it was paired with another medium, such as FM radio.
I went to three breakout sessions. The first was given by a few people from the Innovations Team at UNICEF and a couple of students from Columbia working on the aforementioned Malawi project. I had seen some of what they presented before, but got to play with the RapidAndroid version of the RapidSMS software (which runs on a G1 mobile phone), and I saw some sample database inputs and SMS form instructions. In addition, I learned that while initial SMS error rates have been high in the pilot studies, the system will respond asking the user to resend the message, and this feedback loop is effective at teaching users to send correctly formatted SMS messages. My group in the Design for UNICEF class will continue with our Mechanical Turk project — there will still be unparseable messages or messages that don’t get resent — but it will be good to keep this in mind as we develop.
In between sessions I saw a demo from an MIT PhD student named Nadav Ahrony at the Viral Communications group at the Media Lab. He was working on a not-yet-released general platform for development of wifi/bluetooth peer-to-peer mobile applications. He had built a demo application that would let people associate their phones with a particular group of phones, and then automatically sync content on these phones over an ad-hoc network. The most interesting use case he suggested: if a protester takes a photo with the device, and there is risk that the device might be confiscated, it will automatically be downloaded by the others in the group immediately after being taken, so even if the original device is lost the data is not.
The second breakout session was lead by Josh Nesbit, a current undergrad at Stanford graduating this year. He presented an SMS-based project he did in Malawi for hospitals and the surrounding villages that used FrontlineSMS, an alternative SMS platform (that isn’t necessarily comparable in aim to RapidSMS). More information on that project is available here.
The last breakout session was for mobile developers, and we had an interesting conversation about developing for Android. Overall I enjoyed the day and found it useful, and I’m looking forward to going to the next m4change barcamp.
Photos are from Meredith Whitefield on Flickr, thanks!
Modify, augment, or replace the in-class Markov chain example.
As presented by Adam, the MarkovFilter example looks at each series of, say, three characters to build a model of which character is most likely to come next. It then uses this model to generate new lines of text: it starts with an initial series of three characters taken from the actual text, chooses the next character based on those three, chooses the character after that based on the new most recent three characters (the latter two from the first set of three, plus the newly chosen character), and so on.
It occurred to me that perhaps it was unnecessary that the algorithm examine the text as we read it, from left to right. Instead, I wanted to rewrite MarkovFilter.java to work backwards — starting with the last three characters of a line and working backwards instead of forwards, looking at the set of three characters at the (temporary) beginning of the line, choosing a new first character for the line from a modified statistical model about which character is most likely to precede them, and repeating until the line is of the desired length. VokramFilter.java represents this reversed Markov filter, and the zip file of all the classes is here.
The text this generated seemed approximately as similar to English as that generated using the forward-looking method, and I considered for a while how to be sure this is the case. I suspected that the operation performed by the backward-looking Vokram analysis was equivalent to reversing the input text, running it through a forward-looking Markov algorithm, and then reversing the result, and it seemed like that operation would do the same thing as the Markov algorithm on its own. Yet I couldn't quite work out a more thorough proof of that intuition, and will see if Adam has any insights. I considered doing a comparative analysis of several large texts using both MarkovFilter and VokramFilter (by comparing the Markov analysis of a text generated by applying VokramFilter to an original text with the Markov analysis of that original text), but didn't have a chance.
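The backwards idea is easier to see in code than in prose, so here is a small sketch of the 'Vokram' direction in Python (my own re-implementation of the concept, not the Java VokramFilter): build a model of which characters precede each n-character window, then grow each line backwards from a real ending.

```python
import random
from collections import defaultdict

def build_reverse_model(lines, n=3):
    """Map each n-character window to the characters that precede it in the source text."""
    model = defaultdict(list)
    for line in lines:
        for i in range(1, len(line) - n + 1):
            window = line[i:i + n]
            model[window].append(line[i - 1])
    return model

def generate_backwards(lines, n=3, length=60):
    model = build_reverse_model(lines, n)
    line = random.choice([l for l in lines if len(l) >= n])[-n:]   # start from a real ending
    while len(line) < length:
        window = line[:n]                    # the (temporary) beginning of the line
        preceding = model.get(window)
        if not preceding:
            break                            # nothing ever precedes this window
        line = random.choice(preceding) + line
    return line

if __name__ == "__main__":
    with open("sonnets.txt") as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]
    for _ in range(5):
        print(generate_backwards(lines))
```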
Investigate Java's Collections class. See if you can figure out how to use `Collections.sort()` to sort the output of ConcordanceFilter.java — first in alphabetical order, then ordered by word count. (See the official Sun tutorial.)
The Java documentation was relatively clear about what I needed to do with the Collections and Comparator classes to get it working, and Google answered any remaining questions I had about syntax. There are a few files that I edited to run it (including the AlphabeticComparator and WordCountComparator classes), and you can download a zip file of the assignment here. When run, ConcordanceFilter.java will search for a word within a text and output each line on which that word occurred, then output those lines again in alphabetical order, and then output those lines a third time with the line with the fewest words first. For example:
$ java ConcordanceFilter place <lovecraft_dreams.txt
All contexts
remote place beyond the horizon, showing the ruin and antiquity of the city,
over a bridge to a place where the houses grew thinner and thinner. And it was
Contexts sorted alphabetically
over a bridge to a place where the houses grew thinner and thinner. And it was
remote place beyond the horizon, showing the ruin and antiquity of the city,
Contexts sorted by word count
remote place beyond the horizon, showing the ruin and antiquity of the city,
over a bridge to a place where the houses grew thinner and thinner. And it was
I’m excited to be working in a group with Thomas Robertson, Lina Maria Giraldo, Amanda Syarfuan, and Yaminie Patodia. Our project in a sentence: We plan to extend UNICEF’s existing RapidSMS platform and RapidSMS-based projects to use Amazon’s Mechanical Turk online task marketplace to provide automated correction of malformatted SMS database inputs.
RapidSMS (link 1, link 2) is a project developed by Evan Wheeler, Adam Mckaig and others in UNICEF’s Division of Communication. It is designed to be an extensible platform for sending and receiving SMS text messages using a computer server. Mobile phone penetration in Africa is relatively high and growing quickly, and SMS is a powerful tool that can be applied to a wide variety of UNICEF-type projects. It is particularly useful for quickly aggregating large amounts of data from the field; where previous methods required the tedious process of faxing in and compiling paper forms, mobile phones can be used to submit that data quickly via text message, and this ultimately allows coordinators to make better decisions about the allocation of limited resources. RapidSMS allows for automatic insertion of SMS messages into a centralized database, as well as the export of this data in human- and machine-readable formats (such as graphs and Excel files). It has already been deployed for a food supply distribution project in Ethiopia (link) and a child malnutrition monitoring project in Malawi (link).
One challenge for such SMS-based database input systems is the problem of malformed text inputs — users won't always know the proper message format or might be in a hurry and mis-type a key. It's practically impossible to design a system that can handle all database inputs, so as a result valuable information gets thrown out, even though it is present in the messages. An actual person might be able to successfully parse many of these malformed messages and determine which pieces of information from the SMS go in which database fields; UNICEF workers, however, generally have more pressing tasks while in the field.
We plan to extend the open-source RapidSMS system to have the functionality of automatically sending these malformed SMS database inputs to Amazon’s Mechanical Turk for conversion into proper database inputs. (Mechanical Turk is an online marketplace that automatically pairs tasks that are simple (yet too hard for a computer to do) with people who want to do them for money (often just a few cents).) This RapidSMS extension could then be integrated into existing projects mentioned above, making them more scalable and more effective.
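As a sketch of where the extension hooks in, the routing logic itself is simple: try the normal parser, and hand anything it rejects to a person. The message format, `parse_report`, and `send_to_mechanical_turk` below are all placeholders of my own, not RapidSMS or Mechanical Turk API calls.

```python
import re

# Hypothetical expected format: "CHILD <id> WEIGHT <kg> HEIGHT <cm>"
REPORT_PATTERN = re.compile(
    r"CHILD\s+(?P<child_id>\d+)\s+WEIGHT\s+(?P<weight>[\d.]+)\s+HEIGHT\s+(?P<height>[\d.]+)",
    re.IGNORECASE)

def parse_report(sms_text):
    """Return a dict of database fields, or None if the message is malformed."""
    match = REPORT_PATTERN.search(sms_text)
    return match.groupdict() if match else None

def send_to_mechanical_turk(sms_text):
    # Placeholder: the real extension would create a HIT asking a worker to copy
    # the values in sms_text into the proper database fields.
    print("queued for human correction:", sms_text)

def handle_incoming_sms(sms_text, save_to_database):
    fields = parse_report(sms_text)
    if fields is not None:
        save_to_database(fields)            # well-formed: straight into the database
    else:
        send_to_mechanical_turk(sms_text)   # malformed: a person fills in the fields

if __name__ == "__main__":
    handle_incoming_sms("CHILD 1047 WEIGHT 11.5 HEIGHT 82", print)   # parses cleanly
    handle_incoming_sms("chld 1047 wt 11.5 ht 82", print)            # goes to a person
```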
I’ll post more as the project progresses throughout the semester, and please leave a comment with any feedback!
Make a program that creatively transforms or performs analysis on a text using regular expressions. The program should take its input from the keyboard and send its output to the screen (or redirect to/from a file). Your program might (a) filter lines from the input, based on whether they match a pattern; (b) match and display certain portions of each line; (c) replace certain portions of each line with new text; or (d) any combination of the above.
Sample ideas: Replace all words in a text of a certain length with a random word; find telephone numbers or e-mail addresses in a text; locate words within a word list that will have a certain score in Scrabble; etc.
Bonus challenge 1: Use one or more features of regular expression syntax that we didn’t discuss in class. Reference here.
Bonus challenge 2: Use one or more features of the Pattern or Matcher class that we didn't discuss in class. Of particular interest: regex flags (`CASE_INSENSITIVE`, `MULTILINE`), "back references" in `replaceAll`. Matcher class reference here.
I'm planning on doing a much larger project involving analysis of links on Twitter, and I decided to do a very tiny piece of that project for this assignment. I used the XML results from a Twitter search as my input and used a regular expression to look for URLs in the individual tweets. I stored the URLs and the number of times each of them occurred in a hashmap, and then printed that information at the end of the analysis.
Usage of Java’s HashMap, Set, and Iterator classes came back to me quickly, and the only tricky part was the regular expression. I ended up using
<title>.*(http://)(\\S+)(.*)(</title>)+
The content of each message posted to Twitter is enclosed in a `<title></title>` tag, and including that in the regular expression ensures that we don't capture data that are not part of any message but still contain URLs. I require that tag at the beginning of the line and then look for any number of characters before the beginning of the URL, as represented by `.*`. Then I get all of the characters up until the first white space with `(\\S+)`, any characters that happen to be after the end of the URL with `(.*)`, and then finally the closing `</title>` tag, with a `+` to require at least one occurrence because I know it must be present. The .java file is here, and the compiled .class file is here. You'll need to add Adam's a2z.jar file to your classpath, so be sure to get that too if you want to recompile it.
The New York Times did a visualization of tweets during the Superbowl last week, and it was widely circulated on Twitter. A search for "http superbowl nyt" returns a list of tweets in which most people are sharing links to that visualization, and the results of that search make suitable example input. One specific tinyurl link to the visualization is shared several times, and it demonstrates that the code is functional. The input file is here, and the output file is here.
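The same counting fits in a few lines of Python, for comparison (a quick sketch along the lines of the Java class, not a port of it):

```python
import re
from collections import Counter

# Match a URL inside each <title> element, as the Java version does.
TITLE_URL = re.compile(r"<title>.*?(http://\S+).*?</title>")

def count_urls(xml_text):
    return Counter(TITLE_URL.findall(xml_text))

if __name__ == "__main__":
    with open("superbowl_search.xml") as f:    # any Twitter search result XML
        counts = count_urls(f.read())
    for url, count in counts.most_common():
        print(count, url)
```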
Create a program (using, e.g., the tools presented in class) that behaves like a UNIX text processing program (such as cat, grep, tr, etc.). Your program should take text as input (any text, or a particular text of your choosing) and output a version of the text that has been filtered and/or munged. Your program should use at least one method of Java's String class that we didn't discuss in class.
Be creative, insightful, or intentionally banal. Optional: Use the program that you created in tandem with another UNIX command line utility.
Expanding on/explicitly exacerbating the problem of punctuation I had last week with rearranging the couplets (when the couplets were reordered, you'd often get two lines ending with commas and then two lines ending with periods, and it distracted from the semantic munging I had intended), I wrote a quick little Java program to randomly replace marks of punctuation in the input file. It extends Adam's TextFilter library, so it works like the command line tools we used last week. I kept certain characters (such as parentheses and quotation marks) the same because I wanted to keep the text readable while making more subtle changes to the intonation and flow.
The .java file is here, and the compiled .class file is here. The original text of Robert Frost’s ‘Stopping By Woods On A Snowy Evening’ can be found here, and the repunctuated text can be found here. You’ll need to add Adam’s a2z.jar file to your classpath, so be sure to get that too if you want to recompile it.
The Repunctuate.java program also works nicely with the various command line utilities from last week. For example, `grep , <frost.txt | java Repunctuate >output.txt` will first filter for only those lines in frost.txt with a comma, and will then repunctuate them and save the output.
Use a combination of the UNIX commands discussed in class (along with any other commands that you discover) to compose a text. Your “source code” for this assignment will simply consist of what you executed on the command line. Indicate what kind of source text the “program” expects, and give an example of what text it generates. Use man to discover command line options that you might not have known about (grep -i is a good one).
I decided to work off of the sonnets.txt file that Adam Parrish (the instructor) had provided as a resource — it's just a long list of (Shakespearean?) sonnets, with the ending couplets notably indented by two spaces. I decided to extract only these couplets and then reorder the lines slightly so that the AABBCCDD (etc.) rhyme scheme was changed to ABABCDCD (etc.). It wasn't going to be a fascinating work of 'procedural poetics,' but it seemed like an interesting challenge that would teach me a little more about the command line.
I had to Google around a lot and looked at the man pages to figure out how to make it work — it was somewhat frustrating dealing with UNIX syntax. The code with detailed comments is here, and the code without the comments is here (if you want to run it yourself, use the latter — it wasn’t working with the comments, and I need to ask Adam about it on Tuesday).
The output of the file when sonnets.txt (above) is used as the input is here. It should accept any input file with the couplets indented by two spaces and the other lines not indented, regardless of whether or not they are true sonnets. If there are an odd number of couplets, it will ignore the last one.
(I realize that the punctuation at the end of the lines gets somewhat messed up. This is fixable — I could put commas at the end of one line and periods at the end of the next — but probably isn’t worth the effort.)
Unfortunately, this feature works only for emails that are in your inbox. If you have the emails going to that list filtered to skip your inbox and go straight to a particular label (which is common practice for dealing with multiple lists), then muting doesn't help — the emails are already skipping your inbox, and the threads will still show up as unread in that list anyway.
But there is a solution! Albeit a somewhat complicated one, so read on if you have the same problem (note part of it is Mac OS X only):
First, I made a new Gmail account — the name doesn’t matter, as I was the only one who was going to use it. I changed the filter that had previously been applying the ‘ITP — student’ label and skipping my primary inbox so that it instead forwards the emails to that secondary address and still skips the primary inbox.
Then, in the secondary Gmail account, I set up that filter again to apply the same label (although this isn’t strictly necessary) but kept everything in the inbox. If I want to mute a conversation, I can just select it and type ’m’ (with keyboard shortcuts on), and it won’t show up in that inbox any more, unless someone replies to me specifically. So far so good.
What about sending? I want to be able to reply to the list too, and do it just as I was replying before. So I went into the Accounts tab of the secondary address’ settings and added my primary email address as the default address from which to send mail. Since this account never sends or receives email to or from individuals and is not syncing with my Address Book, whenever I accidentally try to send a regular email from that account the recipient won’t show up in the auto-complete, so I’ll catch myself and switch applications. Using a different theme for each account can also help you differentiate.
But there is a problem — emails sent from that secondary account will be in that account’s sent items, and not in the sent items of my primary account. Ah, but when I send an email to the list it gets sent back to me anyway, so my primary account will receive that new copy of the email I sent from my secondary account (pretending to be my primary account). It should then be searchable as normal. I even tweaked the filter in my primary account so that emails that are both to the student list and from my primary address are not forwarded to the secondary account, since I already responded to the thread and don’t need to see my response again.
And finally — and this is the key piece of the entire puzzle — I used Fluid to make a site specific browser application for Gmail, and I use that app to stay logged in to my secondary account. With Firefox logged in to my primary account, I can stay logged into both at all times (if you use Safari as your primary browser it might only let you be logged into one… not sure what the solution to this is). And because Fluid does not behave as a full browser, if I click a link in a list email in my List-specific app, it will open in Firefox (and thus be, forever, in my AwesomeBar history). As an added bonus, I get a little red flag in the dock saying how many list emails I have unread, and I can simply close the Fluid app and not concern myself with the list if I am too busy for the distraction. (Update: I am now using Mailplane, which makes it easy to switch between multiple Gmail accounts.)
Feel free to comment or email if you have any ideas or questions or need help setting it up!
(Note that I tried another solution first — using Gmail’s stars to mark the student list threads as muted, and then trying to match those stars in the incoming filter. The idea was that new emails arriving on a thread I had already starred and archived would not be re-labeled with the student list label. This doesn’t work, however, because the “is:starred” filter never matches incoming messages.)
Coming on the day after I got two or three times as many site views as I had on any previous day, this is pretty inconvenient, and I’m probably going to switch hosts. Al3x recommended Gandi, and I’ll look into it more in the next couple of days.
(I really wish Google or Amazon offered the sort of hosting service I need; those are the only companies I would really trust.)
I apologize to everyone for the inconvenience. Hopefully it doesn’t happen again.
It was during this realization that I went to post my finished blog entry for the M[]leskines, and noticed that DreamHost was down. I spent a while trying to diagnose the problem (before they finally posted a proper status update) and researching other web hosts.
Somewhat simultaneously, I received another flurry of random Twitter followers and, since I hadn’t looked at my feed in days, decided that organizing them was more urgent, and made that my project. There’s a blog post coming soon about some UI design thoughts that were crystallized by this project, but I don’t have time to finish it now.
I went through the 107 people I was following on Twitter, and copied their user names and real names (if specified) into one of five txt files based on my relation to them (click for full size) —
Those groups are Friends, ITP, Networking, Unknown, and Bots. I then used those lists to create groups using the functionality in TweetDeck, and now my Tweets are sorted (see rotated image below, or click for a horizontal version). This should make it easier to keep up with the groups which are most important and look through the less important feeds when I have time. I’ll keep both sets of lists up to date as I follow more people. It was a small project, and not as creative as I would have liked, but it needed to happen anyway and my day didn’t quite go as expected.
If you look carefully, you’ll notice that a rectangular piece has been cut out of each page so that a pen can be stored inside when the book is closed. I had started to use a Moleskine notebook during the program at Harvard, and when I finished my first one I cut my second one like Zaha’s using a ruler and an X-Acto knife. That particular brand of notebooks is relatively popular and often recognized, but I think the reputation is deserved, as they are quite durable. I carry them everywhere, and with some black electric tape on the binding they will last more than six months.
I was shopping for architecture supplies at Accent Arts in Palo Alto, and noticed a short black lead-holder style pencil that looked like it would fit horizontally across the top of a notebook rather than vertically next to the spine. I compared it to a notebook they had in stock, and it fit perfectly. My notebooks since then have all had a hole cut out at the top, and I’m currently on my eighth.
In addition to the pencil, Gabriela gave me a fountain pen that fits nicely, and I’ve been using Pilot G2 Minis more recently for their simplicity and reliability.
It takes an hour or two to cut each notebook, and I decided to try getting them laser cut. I took one to Canal Plastics, and talked to Raymond, with whom I had worked to get pieces of acrylic laser cut for architecture models at Kevin Kennon Architect. Both covers of the notebooks bend back, allowing for a cut straight through all of the pages, and he agreed to give it a try.
The experiment was a success, so I ordered ten more Moleskines from Amazon. I had planned for my third 4-in-4 project to be setting up a store on Etsy where I could sell the laser-cut Moleskines.
He agreed to cut those too, but had some trouble with burning on one of them, and didn’t want to cut any more. I cleaned off the ash …
… and will probably give a few to the friends that have requested them (Dan, Kabir, Jorge, Cassidy) and save the rest for later. It’s a somewhat scaled back third project, but my second one was pretty ambitious, so I’m satisfied.
The presentation was to a group of two-dozen-plus programmers at the Twitter offices in SOMA. I gave a demo of the application and then walked through much of the code, focusing on Java-Scala integration, Actors, Lift’s Object Relational Mapping, and the World Wind SDK. It was nice to give a much longer (80 minute?) presentation of the project, in contrast to the <5 minute presentations I gave at the show and at the NY Tech meetup. The presentation went well, and the Twitter employees seemed to particularly enjoy the visualization – Steve Jenson asked if I could put the .app file on the Mac Mini connected to the TV so that he could show the rest of the office in the morning.
My flight left from JFK yesterday at 9:00am, arrived in SFO at 12:45pm, and I had time for lunch with a friend before meeting with Jorge to prepare for the 7:00pm presentation at the Twitter offices. I was in a cab back to SFO by 9:15pm and at my gate with plenty of time before my 10:55pm flight, which landed back at JFK at 7:15am. I used Twitter to document the day as it went:
Although all ITP students live within a relatively small distance of 721 Broadway, and despite the significant advantages of knowing which other students live nearby (sharing cabs after TNO, going out for a last-minute brunch, etc.), there is currently no good way to learn who lives where. Several people on the student email list have proposed creating an online (Google) map on which everyone could mark their location, but nothing has come of it that I’m aware of. There are issues of security, privacy (even if the map is protected), maintenance, and the difficulty of involving people who don’t read the list.
The solution I had in mind was to use an actual physical map hanging from a wall on the floor, with labeled push pins that people can use to mark their locations. It would not be as searchable as an online map, it couldn’t be easily archived (in a photograph taken facing the map, the labels would be perpendicular to the plane of the image and thus unreadable), and it could only be accessed by people on the floor who could see it. On the other hand, the pins wouldn’t need to be placed particularly precisely, it would be easy to update and hard to forget about, and everyone would be aware of it.
I envisioned a giant map of all five boroughs, so I went to the Hagstrom map store in midtown. They did have a large map, but the scale was still relatively small, it did not include Jersey City, and it was expensive ($150). They also had individual folded maps of each borough, and those had somewhat larger (although differing) scales and were much cheaper ($5). I bought the maps for Manhattan, Brooklyn, Queens, and Jersey City, since I couldn’t think of anyone who lived anywhere else —
Opened up and spread out, they look roughly like this —
I mounted them to foam core with double-stick tape (which brought back memories of architecture models) (thanks Meredith for the photos!) —
And here they are mounted to the foam core and leaned against a wall —
And viewed from above —
Since everyone knows where their apartments are, I decided to cut off the visually distracting street-finder tools on the sides of the map. The resulting shapes are irregular, but it should look cleaner against a white wall on the floor. It would be nice if they could all be part of one map and at the same scale, but for these purposes it didn’t really matter since this is for locations within the boroughs and not travel between them.
Finally, I bought pins at Kmart and labels at Staples. There are 75 of them right now, but I hope to get a second color (so the first and second-years can be differentiated) before the semester starts —
An additional use of the maps is specifically for people looking for apartments — I’ll have an extra blank piece of foam core that people can move their pins to if they’re looking for a new place, and then they can see who else has pins there and find roommates. Furthermore, those people could see who lives where, and easily decide where to look for a new apartment based on where their friends are.
Once the construction is done on the floor I can hang the maps, and I’ll post more pictures then. Hopefully people use it!
This NYTM was the first one organized by Nate Westheimer, who was sworn in at the beginning of the meeting by previous organizer (and Meetup.com co-founder) Scott Heiferman. Here’s a video of the ‘ceremony’, and if you watch carefully you’ll see that it’s my MacBook that they used :)
The event got a fair amount of press, and here’s a collection of the articles that mentioned my project:
In addition, lots of people were posting to Twitter during the event itself:
Whitney Hess (whose blog is linked above) live-tweeted the whole event, and these were the ones that were specifically relevant:
Also, if you happen to find any photos or video of me presenting on Tuesday, *please* send them to me! I was too busy worrying about other things; I half forgot to ask someone to take a couple of photos, and half assumed there were enough cameras in the room that it would happen on its own. I haven’t been able to find any yet, so let me know if you do.
For my second project for the 4-in-4, I decided to fly to California so that I could be there to present TwiTerra to the monthly meeting of the Bay Area Scala Enthusiasts (BASE) at the Twitter offices in San Francisco. Jorge Ortiz (the friend who has been teaching me the language) was planning to present it anyway, but I decided that it was worth it for me to be there in person. Since I only have one day for each project, and since I need to be here on the other days to do other projects and help coordinate, I’m going to be gone less than 24 hours. I’m looking forward to a longer-than-five-minute discussion of the project and its implementation, and I’ll post again afterward.
As of the time of this writing, I have 161,984 tweets in the database. 72,215 of them are root tweets, or original tweets that were then retweeted but are not retweets themselves. This means that most of my chains of retweets consist of only one original tweet and one retweet (161,984 - 72,215 = 89,769 retweets spread across 72,215 roots is an average of only about 1.2 retweets per original). Furthermore, of these original root tweets,
So, of all of these root tweets, fewer than three twentieths of one percent have ten or more retweets. There are, however, several very interesting tweets with over one hundred retweets. They are as follows:
Some of these are not surprising — @chrisbrogan has over 32,000 followers, so when he tweets about something as important to the Twitter community as Twitter phishing scams, especially when that tweet is short, it will get retweeted a lot. @armano has only (only) 8,387 followers, but his tweet is compellingly humanitarian, and I can see the appeal of retweeting it.
The first four, however, are fascinating viral marketing campaigns. @shefinds has only 1,208 followers, @eMom has considerably more at 7,294 followers, @karllong has 2,774 followers, and @camiseteria has 4,767 followers.
An unusually large fraction of the followers of each of these accounts retweeted these viral messages: for @shefinds, over one hundred retweets from roughly 1,200 followers means that nearly a tenth of the audience rebroadcast the message. By intertwining the method of spreading the idea (“retweet this”) with an incentive (“to enter this contest”), the accounts were able to market the brand of whatever they were selling (as well as the brand of that specific account) to large audiences at low cost and low annoyance (clearly no one who retweeted it was annoyed with the message, and if a recipient of a retweet was annoyed, that annoyance is likely directed at the retweeter and not the brand). Remember that the retweet counts are not the number of people who received a message, but are instead the number of people who broadcast it — actual numbers of recipients would likely be one or more orders of magnitude larger.
Finally, note that these are only the retweets with associated geographic data — because I planned to display them on the globe, those are the only ones I kept in the database. Based on my informal observations of the other retweets in the database while watching the application run, not very many either start in or are retweeted from South America. Thus the huge number of Camiseteria retweets in Brazil that did have geographic data is probably only the tip of an iceberg of retweets that did not. I imagine that the t-shirt company got a huge amount of exposure for very little effort and very little cost.
I’ve made a slight modification of TwiTerra to highlight the Camiseteria retweets. Most retweets are in Brazil (where the original tweet was), but they also stretch to a variety of other places. (I’ll add that I came across that particular t-shirt store over a year ago; I can’t remember the circumstances, but they have nice shirts.) Download a Mac, Windows, or Linux version, be patient as it launches, and be sure to spin around the globe to Japan — two of the retweets are there, and the travelling of the information is visualized as going in opposite directions around the world from Brazil to Japan :)
Also, I’ve made some significant improvements to the code, and it should now require much less RAM. I’ve packaged TwiTerra so that it will run on both Mac and Windows computers. Both versions should open full-screen and without menu bars, so press Command-Q or Alt-F4 to exit. It requires an internet connection to run, so please be patient as it initializes the globe and database connection on startup.
Download the Mac version here — it should run as a normal application without any further steps.
Download the Windows version here — be sure to follow the instructions in the readme to get it working.
Download a Linux version here — I have no idea if it works and have no computer to test it on, but I thought I’d offer it anyway.
I had been going regularly to the monthly meetings, but had to stop last semester because they conflicted with my Applications class. When I went to look at this month’s meeting, I noticed they were looking for presenters, so I posted about TwiTerra. The organizers decided to give the meetup the theme “Built on Twitter”, and my project fit right in.
It should be an interesting night, and you can RSVP at that first link. Kate Hartman (former ITP student and now adjunct faculty) is also presenting a project called Botanicalls.
There’s a fair amount of preparation for me to do (beautifying the code, printing business cards, writing blog posts like this one), but I’m very excited!
(I apologize for not posting again sooner on this project – things got pretty hectic as the show neared, and I kept the project page (linked above) up to date instead.)
On December 17th and 18th I presented TwiTerra at ITP’s Winter Show – an estimated 2400 people attended to see the 100+ student projects on display. People seemed to enjoy my visualization, and it was a lot of fun (and exhausting) to explain it so many times to so many different people. My standard line was, “Hi, are you familiar with Twitter?”
I’d like to thank everyone who took pictures at the show, especially second-year ITP student David Steele Overholt, whose photos are below:
TwiTerra got mentioned in several write-ups of the show (let me know if you know of more!):
A list of general online press from the show can be found here.
Notably, several people at the show mentioned Facebook’s Project Palantir, a “project that visualizes all the data Facebook receives.” I hadn’t known of it until after the show (perhaps I was too busy with TwiTerra and missed when the link was spreading), but it is another globe-based visualization of online communication. It doesn’t show the actual content of the messages, though (and can’t, since Facebook users expect privacy), and it’s something only Facebook employees/engineers could build (since the rest of us don’t have access to that data). Twitter, in contrast, makes its data publicly available and easily accessible via a powerful API.
Finally, I want to thank everyone who helped at various stages of this project, especially my friend Jorge Ortiz (who finally has a blog!), my instructor Dan Shiffman (who can look at a function in a programming language he doesn’t know and instantly find a way to make it two thirds shorter and much simpler), and Patrick Murris of the World Wind Development Team for his prompt responses to a few technical issues I had.
We’ve set up a Google Group and there’s a page in ITPedia with the signup list. Also, be sure to check out the pages for past iterations of the idea — June 2008’s 7 in Seven and July 2008’s 5-in-5.
Our 4-in-4.com and/or itp.nyu.edu/4-in-4 will go up in the next week or so — be sure to check back!
I have been fascinated by Twitter since I signed up for it several months ago. I am particularly interested in the widespread social customs (including the use of retweets and hashtags) that have become popular without being fundamental to the system, as well as in the potential for exploring the data that these conventions make accessible.
I am also interested in the negative social consequences of homophily, and in the ways in which new technologies such as Twitter can break down those barriers of similarity and create more geographically and culturally diverse communities.
My primary research has been several months of personal Twitter use. I have also spent time browsing the public feed of tweets from all users, exploring various search queries, and reading articles by a variety of bloggers. In addition, I explored political Twitter memes in somewhat more depth in a paper for Clay Shirky’s class on the election.
Everyone: Twitter non-users who are skeptical about its usefulness and worth, as well as current Twitter users, will appreciate the visualization of how it brings the world together.
A visitor or group of visitors would see the globe visualization on a large screen from several yards away, and would be able to watch several iterations of retweet trees during their approach. Upon arrival, visitors would be able to read the text of a handful of tweets in the visualization before I presented the one-line pitch. For the majority who are unlikely to be familiar with Twitter, I could explain the basic concept (“140-character status updates for interested friends, family, coworkers and strangers”) and the idea of a retweet (“repeating an idea or passing on a message from a person that you are following to all of the people who are following you, with attribution given to the original author”). I would provide further explanation to those who required it, and I would offer implementation details to those with sufficient aptitude/interest.
Prior to this project, I did not have substantive experience with Scala, PHP, or MySQL, and I learned a lot about working with those tools. In addition, I now feel relatively comfortable with NASA’s World Wind Java libraries, despite their relatively sparse documentation. I also became much better at searching around on the Internet for solutions to programming problems.
Twitter, memes, social media, geolocation, NASA, networks, homophily
The success of Barack Obama’s 2008 presidential campaign can be attributed to the enthusiastic efforts of a large number of supporters, many of whom created and distributed pro-Obama video media. These media are poised to play an increasingly important role in the future, and campaigns should adopt strategies for the 2012 election to more fully support the efforts of these supporters1.
Hip-hop artist will.i.am’s “Yes We Can” video has been viewed well over fifteen million times2, the “Dear Mr. Obama” video by an Iraq War veteran has over thirteen million views3, and Obama Girl’s first video has been viewed nearly twelve million times4. These videos represent only the tip of the tip of an iceberg of user-generated content relating to the 2008 election. A small number of videos have reached this uppermost level of popularity, a larger number have been somewhat less popular, and a huge number have only hundreds or even dozens of views (a YouTube search for “obama” returns 784,000 results5).
This ‘long tail’ of hundreds of thousands of non-viral videos might have had substantial and under-appreciated political import 6. In an article titled “It’s the Conversations Stupid! — The Link between Social Interaction and Political Choice,” Valdis Krebs observes that, “after controlling for personal attitudes and demographic membership, researchers found social networks, that voters are embedded in, exert powerful influences on their voting behavior” 7. Imagine the video creator who spends hours on a short video with a political agenda. That person certainly wants as many people as possible to see the video, so she will email it to all of her friends and ask that they email it to their friends. Critically, for the first few times it is forwarded, the video has an increased effect on the viewer because that viewer has a social connection to the person who created it and whose opinions it represents. The enhanced effects of these relatively unpopular videos can be aggregated over the huge number of them that constitute the long tail, and this aggregate results in electoral influence that a campaign can use to its advantage.
The explosion of video in this election occurred for two primary technological reasons: first, there are free online forums such as YouTube for the hosting, searching, and sharing of video media; and second, the computers used to make these videos have become easier to operate and less expensive to purchase. The digital landscape will change again before the 2012 election, and it would serve a campaign well to anticipate (and potentially direct) these changes to better support the user creation of media. Although many people did have the knowledge and tools necessary to produce political content for the recent election, many did not. A campaign can provide these tools, information about how to use them, primary source content that can enrich them, and a community to encourage their production.
Web-based applications that run in a browser window are becoming increasingly popular for common tasks such as email, calendar management, and document editing. Although often less powerful than their desktop counterparts, they have the significant advantage that users do not need to download or install any software. Jumpcut.com is a web application that offers free video hosting services (similar to YouTube’s) and free video editing tools (similar to that found in a basic desktop application such as Apple’s iMovie) 8. The startup was founded in 2005, launched a public beta in April 2006, and was bought by Yahoo that October 9. If a campaign licensed the use of this functionality from Yahoo or hired developers to recreate it, then it could empower all of its supporters with Internet access (either at home or at a public location) to create and distribute political videos. Although this would be unlikely to have an effect on the quantity or quality of the highly viral videos, it would stretch out the long tail so that even more supporters can create content and send it to trusting contacts.
In addition to providing these tools, the campaign could provide official instructional videos, help documents, and other information to teach supporters how to use them. This would further stretch the long tail to include nearly all supporters interested in creating content, regardless of hardware/software ownership or pre-existing technical skill.
The campaign can further facilitate the media creation process by providing easily accessible source content. Currently, supporters find clips on YouTube and then download, edit, and re-upload them as parts of their own videos. The campaign could provide original, high-quality versions of all candidate speeches, interviews and other appearances, thus saving supporters time that was previously spent searching through YouTube videos for quality source files. To further facilitate finding this content, the campaign could offer searching of not just videos but also the transcribed text of those videos. It is currently very difficult for a supporter to find an instance of a candidate discussing a particular issue if that person does not remember where or when the candidate spoke on the topic, and searchable transcripts would make supporters no longer limited to what they had previously seen.
Finally, the campaign can further strengthen its existing online social network by focusing activity around this process of video creation. Online forums and chat rooms would enable supporters to discuss their videos, share tips, answer questions, and provide feedback that would refocus content to be maximally effective. In addition, it might encourage users to reframe videos that had been intended only to be humorous or to market their creators (such as Taryn Southern’s “Hott 4 Hill[ary]” Obama-Girl copycat videos 10) so that they made a stronger political statement. The campaign could also attempt to replicate some of Flickr’s success with groups focused around particular types of image creation by supporting groups focused on a particular video technique. Just as Flickr has groups for those interested in high dynamic range photography, groups could be created on the campaign’s website for those interested in making videos that used a green screen to combine clips (as in the Obama Girl videos).
Furthermore, the situating of supporters’ video editing activities within the context of the campaign’s website allows the campaign some degree of message direction. Decisions about page design, the wording of instructions, and the choice of example videos can all set the tone that the supporters will be working within when making their own videos. In addition, an active community might have a moderating influence on the content of the videos so that damaging outliers (such as the pro-Obama “Sing for Change” video that was repurposed by Republicans 11) might be toned down before going public.
Note that the campaign would still be able to avoid direct involvement with the video creation process and therefore abdicate responsibility for problematic content. The campaign should also be careful not to give supporters the impression that creation of media is a sufficient substitute for other forms of involvement such as canvassing or phone banking. Instead, the campaign should highlight videos that demonstrate that their creators are volunteering in additional ways. Supporters who contribute to the campaign often consider themselves to have made an investment in it, and thus it is to their advantage to further help that campaign to succeed because they want a return on that investment. The campaign should design the opportunities for supporter involvement to be mutually complementary, encouraging supporters to become actively involved in multiple ways.
In conclusion, a 2012 presidential campaign should take advantage of existing frameworks and upcoming technologies to support its supporters in producing political video media. The campaign should embrace the aggregate electoral importance of the long tail of supporter-created videos. Specifically, campaigns should offer free online video editing and hosting services so that as many supporters as possible can make a contribution. The campaign should also provide supporters with information about how to use these video creation tools, the source content necessary to make their arguments, and a social community in which to discuss their creations. Supporters would feel more invested in getting their candidate elected to office, thus strengthening the campaign on multiple levels in a variety of social networks.
The “Support the supporters” phrase is from unpublished articles and lectures by Clay Shirky↩
“Yes We Can” — http://www.youtube.com/watch?v=jjXyqcx-mYY↩
“Dear Mr. Obama” — http://www.youtube.com/watch?v=TG4fe9GlWS8↩
“I Got a Crush…On Obama” — http://www.youtube.com/watch?v=wKsoXHYICqU↩
YouTube search for ‘obama’ — http://www.youtube.com/results?search_query=obama — YouTube limits search functionality to provide a limited number of results per query, and this complicates gathering more detailed information for less popular videos.↩
The Long Tail on Wikipedia — http://en.wikipedia.org/wiki/The_Long_Tail↩
Valdis Krebs — http://www.extremedemocracy.com/chapters/Chapter%20Nine-Krebs.pdf↩
Jumpcut.com — http://jumpcut.com/↩
Jumpcut.com on Wikipedia — http://en.wikipedia.org/wiki/Jumpcut.com↩
“Hott 4 Hill — She’s Hott For Hillary!!” — http://www.youtube.com/watch?v=-Sudw4ghVe8↩
“Sing for Change Obama” — http://www.youtube.com/watch?v=Pb8ntODQha4↩
The first three were the top three in the preferences that I listed, and the final course was fifth, so I am quite thrilled with what I got. I still plan to shop several other courses though, including 1’, 2’, 10’, When Strangers Meet (for which I am first on the wait list) and Digital Imaging: Reset (which have descriptions here).
I resolved a couple of bugs in the line drawing animation, and now my database calls are made for each tree before it attempts to start drawing lines (allowing for a smoother animation). I also got the built-in View state iterator to work and make the globe rotate. Although much needs to be done before the project is ready to present, I think I’ve found solutions to the major challenges. A video is below:
TwiTerra Milestone from me on Vimeo.
I am hoping to do a number of things before presenting it in class next week, but for now I need to spend a few days working on my AJAX keyboard portfolio project. This list includes:
I rewrote my Scala code from the Midterm to access a MySQL database for its tweet data instead of the Twitter search API’s XML feeds. I used Lift, the Scala web framework, to access the database — this tutorial and this one were helpful, but I would not have been able to get it working without Jorge Ortiz’s help. (Thanks!)
The current version is relatively rudimentary, and simply iterates through each of the root tweets in the database (those without a parent) and displays them. It uses the parent_id fields of the children to recursively build trees for each root, and then draws the lines corresponding to the locations of the tweets in that tree on the globe. I have functionality for filtering these trees by the minimum distance between tweets, but later I want to improve it to show only the most interesting/compelling tweets. I also need to figure out how to animate the globe to show the tweets in a smooth and appealing way.
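(I won’t reproduce the Lift code here; as a rough sketch of the tree-building step, with made-up field names rather than the actual schema, the idea is just to group rows by parent and grow each parentless row into a tree.)

// Rough sketch, not the actual Lift code; the field names are placeholders.
case class TweetRow(id: Long, parentId: Option[Long], author: String,
                    text: String, lat: Double, lon: Double)

case class TweetNode(row: TweetRow, children: List[TweetNode])

object RetweetTrees {
  def build(rows: List[TweetRow]): List[TweetNode] = {
    val byParent = rows.groupBy(_.parentId)
    def grow(row: TweetRow): TweetNode =
      TweetNode(row, byParent.getOrElse(Some(row.id), Nil).map(grow))
    byParent.getOrElse(None, Nil).map(grow)   // rows with no parent are the roots
  }
}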
The submission deadline for the ITP Winter Show is Monday December 1st, and I have re-posted the text of my submission over on TwiTerra’s project page — I’ll be likely be updating it as the show approaches.
I’ve made substantial progress on my Twitter/globe project for ICM. I rewrote and improved the Twitter search code that fetches new retweets, this time in PHP and backed by a MySQL database on my Dreamhost web server. For each retweet the PHP script finds, it uses the @username syntax and the text of the retweet to find the tweet it was retweeted from. If that tweet is itself a retweet (i.e. there is a chain of retweets), it recurses back until it finds the original. If all of the tweets in the chain have valid location data, it stores each tweet with a unique ID (assigned by Twitter), the author’s username, the text of the tweet, the latitude/longitude, and the time.
It also stores the number of retweets that each tweet has — in a three-tweet chain, the original tweet will have 2 retweets, the first of the two retweets will have 1 retweet, and the final retweet will have 0 retweets. It also stores the ID of each tweet’s parent tweet — in that same example, the original tweet will have no parent, the first retweet will have the ID of the original as its parent, and the second retweet will have the ID of the first retweet as its parent. This data will make it relatively simple to choose and display chains and trees of retweets on the globe by filtering for interesting structures and long distances between tweets.
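(The real implementation is the PHP script linked below, so the following is only an illustration of that bookkeeping in Scala, with hypothetical names: given a chain ordered from the original tweet to the last retweet, each record gets the parent ID and retweet count described above.)

// Illustration only; the actual implementation is the PHP script linked below.
case class ChainTweet(id: Long)

def chainRecords(chain: List[ChainTweet]): List[(Long, Option[Long], Int)] =
  chain.zipWithIndex.map { case (t, i) =>
    val parentId     = if (i == 0) None else Some(chain(i - 1).id)
    val retweetCount = chain.length - 1 - i   // the original gets n-1, the last gets 0
    (t.id, parentId, retweetCount)
  }

// chainRecords(List(ChainTweet(1), ChainTweet(2), ChainTweet(3)))
//   == List((1, None, 2), (2, Some(1), 1), (3, Some(2), 0))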
You can see that PHP script here (sorry for the lack of comments, I’ll add them later), and it’s been running a few times an hour for the past several days. I have over 10,000 tweets in my database; about 5,500 of them are leaf nodes and have no retweets, 4,400 of them have one retweet, 500 have two retweets, 100 have three retweets, 30 have four retweets, and 25 have five or more retweets. I’m getting good data and will be able to draw interesting chains and trees on the globe, especially after it has been running for a few weeks.
As a side note, it’s been interesting to watch the sort of things that people are retweeting. For example, there was a flood of heavily retweeted content after the terrorist attacks in Mumbai. The three most heavily-retweeted tweets are (with 11, 14 and 11 retweets, respectively):
In addition, I’ve finalized the name (formerly Retweet Tree) as TwiTerra. I registered the twiterra.com domain, and for now it redirects to a WordPress page for the project. A project description and one-line pitch are coming soon, since the submission deadline to be in the show is December 1st.
I’ll be taking four classes in total, and ideally I will have a spot in each of the first three and then choose from one of the next four. I’m also interested in the subsequent ranked (and unranked) courses, as well as several that aren’t on that list. The final ordering I settled on isn’t necessarily a direct representation of my preferences, as factors such as expected popularity, expected future course offerings, time conflicts, and subject matter similarities all needed to be taken into account. Additionally, it’s difficult to judge a course from a paragraph of description, and I intend to attend as many classes as I can during the first week (regardless of whether or not I am on the waiting list), so my schedule will be in flux until then.
ITP Winter Show 2008
Wednesday, December 17 and Thursday, December 18
from 5 to 9pm at ITP

A two-day festival of interactive sight, sound and technology from the student artists and innovators at ITP.

Founded in 1979 as the first graduate education program in alternative media, ITP has grown into a living community of technologists, theorists, engineers, designers, and artists. This two-year graduate program gives 220 students the opportunity to explore the imaginative uses of communications technologies — how they augment, improve, and bring delight and creativity into people’s lives.

Housed in the studios of NYU’s Tisch School of the Arts, ITP takes a hands-on approach. Students learn to realize their ideas through a hands-on approach of building, prototyping, and testing with people.

Interactive Telecommunications Program
Kanbar Institute of Film and Television
Tisch School of the Arts
New York University
721 Broadway, 4th Floor, South Elevators
New York NY 10003

Take the left elevators to the 4th Floor
This event is free and open to the public
No need to RSVP

For questions: 212-998-1880
email: itp.inquiries@nyu.edu

We look forward to seeing you!
Feel free to pass this message along far and wide.
Please write a 1500-2500 word doc on non-professional political media, outlining what strategic advice you would give a Presidential contender running in 2012 (which is to say, forming an exploratory committee in 2010.)
Topics can include, but do not need to be limited to: how to reach out to media producers, how to solicit research on your opponent, how to frame or benefit from media in favor of your candidate, how to frame or benefit from media against your opponent, how to neutralize media praising your opponent or attacking you, the difference between media produced in the primary vs. general campaign, and so on.
(PDF)
For my final project I plan to integrate my previous work with NASA’s World Wind, Twitter, and Scala into more recent work with PHP and MySQL. I will continue to focus on Twitter ‘retweets’ and will use a PHP script to search for recent retweets (using a variety of search terms — ‘RT’, ‘Retweet’, and ‘RTWT’, as well as ‘Retweeting’) and store them all in a MySQL database. I’ll use the database information and additional searches to link retweets to previous retweets that had the same content and to the original source tweet.
I will then use the twittervision API or the Twitter API to associate geographic data with each of those tweets, and I will filter the database to keep only linked chains of tweets for which every tweet has an associated location. These chains of tweets can then be further filtered by the distance between tweets, since larger distances will create more interesting visualizations. They can also be filtered by the complexity of the chain — I have some expectation of seeing tree-like branching, in which multiple people retweet a single tweet, and then other people retweet those retweets.
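(As a rough illustration of the distance filter, and not code from the project, the great-circle distance between two tweet locations can be computed with the haversine formula and compared against a threshold.)

// Rough sketch of the distance filter; not code from the project.
object DistanceFilter {
  private val EarthRadiusKm = 6371.0

  // Great-circle distance between two lat/lon points, via the haversine formula.
  def haversineKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double = {
    val dLat = math.toRadians(lat2 - lat1)
    val dLon = math.toRadians(lon2 - lon1)
    val a = math.pow(math.sin(dLat / 2), 2) +
            math.cos(math.toRadians(lat1)) * math.cos(math.toRadians(lat2)) *
            math.pow(math.sin(dLon / 2), 2)
    2 * EarthRadiusKm * math.asin(math.sqrt(a))
  }

  // Keep a chain only if every hop covers at least minKm kilometers.
  def interesting(chain: List[(Double, Double)], minKm: Double): Boolean =
    chain.zip(chain.drop(1)).forall { case ((la1, lo1), (la2, lo2)) =>
      haversineKm(la1, lo1, la2, lo2) >= minKm
    }
}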
I’ll then visualize these retweet trees on a globe. The globe will rotate itself such that the location of the first tweet is visible, that tweet can be marked with a dot and displayed with an annotation, and then a line can be drawn to the next level of tweets and the globe can rotate itself again to illustrate the progress of the idea. After the leaf nodes of the tree are drawn, the entire tree can fade slightly (but still remain visible), and the process can repeat for another tree. I will use the database information to always have enough retweet trees to create an appealingly dense web around the world.
I also hope to create a web visualization, using a similar but simpler Flash globe visualization with the Poly9 FreeEarth.
The third and final component will be more interactive, and will be based on Twitter’s current feature of ‘nudging’ users who have not tweeted in the past 24 hours (and have expressed that they wish to receive the reminders). I will create a new Twitter account that interested users can follow, and it will watch their tweeting patterns to determine whether a user has not posted a new tweet in a certain amount of time. The Twitter account can then automatically suggest a much-retweeted tweet for that user to retweet as well. Thus that user can have an awareness of current popular ideas on Twitter, can participate in the viral spread of these ideas, and can interact with the trees of retweets as they grow.
I’ve figured several things out with World Wind that I was having trouble with in the previous version, but there aren’t enough updates for a new video. Current versions of the files, however, are below:
Much of the work involved was learning how to use PHP and MySQL, and the functionality I implemented is effectively just a tiny subset of what you might find in phpMyAdmin. It will be necessary later, however, for dynamically re-generating the HTML page for the actual keyboard site whenever I update the content; this will allow search engines to index the content in their crawls. In addition, I can also generate the images of the keys (based on their content) that will be used to create a smooth key resizing effect.
I’m not going to create a web-hosted database and upload the files and get everything working for this update, but the new files can be viewed on github: http://github.com/lehrblogger/keyboard-portfolio/
I’ve been having some issues with browsers other than Firefox, and I think most of them were stemming from my use of console.log for debugging in Firebug. David Nolen (the course instructor) posted a solution to this issue, and all past projects should now be working. I also fixed a few other Safari-specific issues in the keyboard project.
I decided to use Git, a popular version control system, for the continuation of the project. This will allow me to keep track of changes and progress in a sane and flexible way. I’m also going to use GitHub, a collaborative code hosting service, to make it easier for others to comment on and contribute to the project as it moves forward. You can visit the GitHub repository of the project here: http://github.com/lehrblogger/keyboard-portfolio.
I began to explore the powerful Flickr API, and plan to use it as the source of the photos for the portfolio. Provided that it proves to be fast enough, it should simplify image uploading, facilitate various file sizes, and create a unified place for comments. Image names and captions on Flickr will also be displayed on the website. Image categories and projects will be tracked using tags. (Each category and project will probably have a ‘slug’ in the database (more on that later), and the Flickr tag should use that slug.) I’ve used ‘category2’, ‘project1’, and ‘image1’ in this example, which just pulls a single image from Flickr using the API — http://lehrblogger.com/projects/fall08/ajax/hw6/part2
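(As a sketch of the kind of request involved, and not the site’s actual code: the public flickr.photos.search method can return photos for a given tag, which would then map back to a project or category slug. The API key and tag values here are placeholders.)

import scala.io.Source

// Sketch only; the api_key and tag are placeholders, and a real client
// would parse the JSON response instead of printing it.
object FlickrByTag {
  def main(args: Array[String]): Unit = {
    val apiKey = "YOUR_API_KEY"
    val tag    = "project1"   // projects and categories are tracked as tags
    val url    = "https://api.flickr.com/services/rest/?method=flickr.photos.search" +
                 "&api_key=" + apiKey + "&tags=" + tag +
                 "&per_page=1&format=json&nojsoncallback=1"
    println(Source.fromURL(url).mkString)
  }
}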
I started to think in detail with David about how I was going to implement various parts of the site, and I set up Apache and a MySQL database on my local machine to facilitate testing. All of the page content will be managed through interaction with the database (possibly through a content management page). Saving this page will automatically generate new Javascript and HTML for search engines to crawl and index.
Each standard keyboard key will always have an entry in the database, and this won’t be changeable from the CMS (content management system). Each key will have one of four possible modes — category, project, blank, and hidden — and the other available table entries will depend on which type of key it is. Most punctuation keys will be hidden, but I will provide support for them in case I need the space for additional content later. Blank keys will have no other data associated with them, and will likely be greyed out on the site. Project keys will have the most associated data — the project will need a title, descriptive text (with limited HTML formatting), and some number of associated Flickr photos. Category keys will have a category name but no other content.
Note that there will be no difference in how each of the latter two types of keys behave. At the second level of zoom, the focused key and the surrounding keys will all be visible. At the third level of zoom, only the primary key will be visible. At the fourth level of zoom, for viewing the images, the interaction is handled through the number keys (1,2,3,4,5,6,7,8,9,0 — for up to ten images), and these won’t do anything on keys without images. A user can press a project key from the top level, but that user would need to press it again to be able to read it.
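(The data itself will live in the MySQL tables and be edited through the CMS, but just to illustrate the four key modes described above, with names of my own choosing rather than the actual schema, the model looks roughly like this.)

// Illustrative model of the key types; the real data lives in MySQL
// and is managed through the CMS, so these names are placeholders.
sealed trait Key { def keycap: Char }

case class CategoryKey(keycap: Char, name: String) extends Key

case class ProjectKey(keycap: Char,
                      title: String,
                      description: String,          // limited HTML formatting
                      flickrPhotoIds: List[String]  // up to ten images, mapped to the number keys
                     ) extends Key

case class BlankKey(keycap: Char) extends Key       // no other data; greyed out on the site
case class HiddenKey(keycap: Char) extends Key      // mostly punctuation keys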
The automatic generation of the HTML and Javascript will be a challenge to get working, but they will make it much easier to manage the site. It is likely that the best way to get the text resizing to work will be to replace that text with an identical-looking image, resize the image, and then replace it with text at the new zoom level (or not, for smaller zoom levels). Those images would need to be automatically generated from the text saved to the database via the CMS, and (with David’s guidance) I’ve started to look into using Cairo for doing this on a server.
That might be the trickiest part of this whole thing, but if everything works well together it will be really cool. I’m not quite sure that I’ve been able to fully convey with clarity my idea of melding content and interface, and I’m excited to have a working example.
The speaker on whom my group and I presented was Craig Newmark, of Craigslist. After much discussion and planning, all ten of us worked together to organize a face-to-face market: we encouraged students to bring in items to trade, organized them by category, worked in real time to coordinate exchanges between students, and moderated a discussion on the experience. The activity was very successful, and afterwards we presented Redslist, the beginnings of an idea for an ITP version of Craigslist. A few students, both from within the original group (Thomas Robertson and myself) and not (John Dimatos, Cameron Cundiff), met again to discuss how to proceed with the project (either inside or outside of class projects), and we wrote the following statement:
Redslist began as a response to Craig Newmark’s presentation in Red’s Applications class. We envision it as a set of tools for presenting and accessing ITP student content such as event details, items, services, and expertise — i.e. the sort of content we currently interact with on the listserve and ITPedia. Redslist would be unique in the way we could post and retrieve this information; an open platform for development, an API, and some core applications would make it possible for us all to develop our own way of interacting with the information on Redslist.
We’ve come up with a few ideas for applications so far in addition to the existing listserve and wiki: an interface for chat and text messaging to and from Redslist, using XMPP; a way to tag and forward listserve messages to Redslist for easier access; and using the displays at ITP to showcase recent activity on Redslist.
So far we have a group of several first and second years working on this, and we invite the rest of you to check it out. It could take off or not, but we know that ITP offers a unique chance to try something ambitious and different. So email us or come to the next meeting, and we’ll give it a shot.
We are planning another meeting for later this week, and I will make subsequent posts as the project progresses.
My idea consists of two main parts: first, a map that shows which ISPs are providing access to which areas of the country/world; second, discussion boards for the users of a specific ISP in a specific region. These would facilitate the self-organization of users who are victims of a particular location-specific policy of a particular ISP, allowing them to assert themselves to that ISP’s customer service not as single individuals, but instead as important customer groups with the power to organize a boycott or take some other action.
(In writing this, I felt I was vilifying ISPs more than I should be. I don’t blame them for acting as they do as profit-maximizing entities, but I see it as a problem that the current arrangement discourages competition and creates a sub-optimal equilibrium. As I recall, capitalism works most efficiently when everyone has perfect information.)
(For the first few of these, at least, I think I’m going to link back to the introductory web idea post.)
From now on, I’ll try to post the ideas here with a ‘web idea’ category that will replace the old ‘idea’ category. Also, I’ll try to go through my old ideas and post the ones that are worth typing up. You can subscribe to a category-specific RSS feed here — http://lehrblogger.com/category/web-ideas/atom.xml
Ideas, I have found, are relatively commonplace; the real work is in their execution. Sometimes people guard their ideas, keeping them secret out of fear that they will be taken and implemented by someone else first. None of that here. Please, take these ideas, and please, bring them to life. I have more of them than I’ll ever have time for, and even if I were to eventually have time for all of them, they would have long-since lost their relevance. If you find yourself with sufficient knowledge, time, and interest to start work on one of these, I would love to talk about it and hear what you are thinking about what I was thinking.
NASA has made their World Wind globe available as an open source project, and it functions much like the more common Google Earth. After a fair amount of struggle (and assistance from Jorge), I got it to work with Scala in Eclipse. I then integrated my Jumping Tweets code from the previous week, and was ultimately successful in displaying the tweets as Annotations at (random) locations on the map, with Polylines between a specific location (arbitrarily set to NYC) and each tweet’s location. For next week, I hope to get the actual location data, improve the line quality, and add other options.
Since I did not use Processing at all for this project (instead relying on World Wind for the graphical interface), I was not able to export an embeddable Java applet. Instead, a video I screen-captured is embedded below. I have also posted the Scala files, and feel free to contact me if you have any questions about how I got this working in Eclipse.
ICM Midterm – Jumping Tweets from me on Vimeo (you can watch a high definition version there)
(Sorry for the graphic strangeness at the beginning of the video — I think that happened during the import to FCP, and I’m not sure how to fix it (or if it’s worth the trouble).)
Another post is coming soon about my plans for continuing this for my final project.
In all versions, clicking on or typing a letter will zoom in on that letter, and the spacebar zooms out, keeping that letter in the upper left. I’m using Firefox 3.0.3 to view these, and I’m not sure if other browsers will work.
(It was much trickier than I expected to get the keys to change and move correctly, so I didn’t have time to modify the Flickr/GoogleMaps/YouTube/Twitter examples — I will later this week.)
HW6 Part 1 (no tweens): http://lehrblogger.com/projects/fall08/ajax/hw6/part1_notweens
HW6 Part 1 (partial keyboard): http://lehrblogger.com/projects/fall08/ajax/hw6/part1_partial
HW6 Part 1 (all): http://lehrblogger.com/projects/fall08/ajax/hw6/part1_all
HW6 Part 2: http://lehrblogger.com/projects/fall08/ajax/hw6/part2
I initially had (in HW5 Part 2a) some question about whether I should use one canvas tag for the entire keyboard or a separate canvas tag for each key. I was thinking that it makes much more sense to use one for the whole keyboard, since everything will be scaling and translating at once (perhaps with photos off the screen not switching to a larger version? hmm). I started to run into problems, however, when I began to look into displaying text on a canvas tag. There are a few alternatives, and maybe something that doesn’t work in Firefox yet, but all seem to involve drawing the actual typeface vectors manually. This might work for now, but it would be nice to have selectable text on the pages, and I’m worried about S(earch)E(ngine)O(ptimization) later on, so I decided to look for other ways to make it work.
Instead, I continued to move forward with representing the keys as div tags (in HW5 Part 2b). I made a fair amount of progress displaying the keyboard, adding key listeners, and moving keys around appropriately. Multiple sizes are almost, but not quite, working, and once that is done I can work on tweens. I still need the text to resize smoothly though, and might be able to get something based off of this idea to work well, but I’m skeptical. Does anyone know of any other solutions? Maybe generating an image for each text field, and replacing the text with that when it’s being resized, and shrinking the image, and then replacing the image with text again when it’s done resizing?
More on that project later this week — the last assignment here was to modify the ‘animals’ example that he presented in class. It was the first example we had seen of a full web project — using HTML, CSS, and Javascript, as well as PHP and MySQL to pull information from a database. I had an absurd amount of trouble getting it to work on either the Dreamhost or the ITP servers, but I finally resolved the various issues. Getting the naming to work was much more manageable, but I learned the basics of using a database in the process. (The information for the animals is stored in a single database, so all visitors to the site are viewing and modifying the same animals — refresh the page to see any changes made by others.)
Note that I’ve noticed these are acting sporadically in browsers other than Firefox 3+, and I’ll try to figure out the browser compatibility issues this week.
HW4 Part 1: http://lehrblogger.com/projects/fall08/ajax/hw4/part1/
HW5 Parts 1 and 3: http://lehrblogger.com/projects/fall08/ajax/hw5/parts1and3/
HW5 Part 2a: http://lehrblogger.com/projects/fall08/ajax/hw5/part2a/
HW5 Part 2b: http://lehrblogger.com/projects/fall08/ajax/hw5/part2b/
HW5 Part 4: http://lehrblogger.com/projects/fall08/ajax/hw5/part4/
I started off focusing on the ‘retweeting’ keyword that people use on Twitter to indicate that they are reposting something written by someone else (of the form “Retweeting @username text”). I wrote a Processing sketch (again in Scala) that gets the 15 most recent retweets from Twitter, prints them one by one, and continuously checks for new retweets in real time. I’ve posted a Processing page, but note that all output is just going to the computer’s console, and that the grey applet will not change.
A next step is to extract the author of the retweet and the author of the original tweet, as well as any intermediate authors if the post has been retweeted repeatedly. Twittervision.com has a geographic display of tweets that is somewhat similar to my idea (without the conversational threading), and they also have an API that anyone can use to get location data for a specific user. Given the tweet, the series of users, and their (general) locations, the final task is (effectively and attractively) displaying it all on a map/globe.
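(A minimal sketch of that extraction step follows; the exact pattern the project ends up using may differ, and real tweets are messier than this.)

// Minimal sketch of author extraction; the project's actual pattern may differ.
object RetweetParser {
  // Matches "Retweeting @username rest-of-text" (or the shorter "RT @username ...").
  private val Pattern = """(?i)(?:Retweeting|RT)\s+@(\w+)[:\s]+(.*)""".r

  // Returns the chain of attributed authors plus the innermost text, e.g.
  //   authorChain("RT @alice RT @bob hello") == (List("alice", "bob"), "hello")
  def authorChain(text: String): (List[String], String) = text match {
    case Pattern(user, rest) =>
      val (users, inner) = authorChain(rest)
      (user :: users, inner)
    case _ => (Nil, text)
  }
}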
Jorge Ortiz was again extremely helpful (see the post on HW4). He helped me early on by providing and explaining a simple PHP script to obtain the raw search results via proxy (because otherwise it would be necessary to sign the applet). He also provided the code for the parallel, threaded downloading of the tweets from Twitter, which was much faster than the previous implementation, and he answered many random Scala questions. It’s a tricky language, but it is also much more expressive, powerful, and intuitive than Java, and I’m getting better.
Since I wrote this assignment in Coderspiel’s Spde rather than Eclipse, I was able to easily export an applet that runs in a browser. See it on this Processing page (you’ll need to wait 20 seconds or so for it to load all of the tweets). (Also, be careful when opening this: it crashes browsers at random, so maybe use a browser other than your primary one?)
I had been wanting to try a programming language called Scala for a while, and recently there were a few posts on the Coderspiel blog about a Scala fork of the Processing Development Environment. I decided to take this week’s ICM assignment as an opportunity to try it out. Jorge Ortiz, a friend and Scala programmer extraordinaire, deserves immense credit for teaching me much about the new language and helping me resolve various issues.
For the second Coderspiel post, n8an rewrote Flocking (a simple emergent behavior simulation) in Scala, basing his work on a Processing applet written by my ICM instructor Daniel Shiffman. I based my work on n8an’s Scala code, worked through everything carefully to get a feel for it, and added a couple of modifications (more on those in a minute). I worked in Eclipse (rather than the Spde that Coderspiel made available), which was helpful for dealing with the syntax of a new language and a project with multiple classes, but I ran into problems trying to export my applet. I was able to use the Fat Jar Eclipse plugin (being sure to have both core.jar and scala-library.jar added to my build path) and a couple of semi-hacks recommended by Jorge (scroll to the end of Flocking.scala) to get a working applet. The Scala library jar is ~3.5MB and Fat Jar incorporates the entire thing, which made my applet much bigger than it needed to be, so I tried to use ProGuard (as recommended by Coderspiel) to shrink it down. I couldn’t get ProGuard to give me a jar file that I could run, and I also couldn’t get the larger jar file to display in a browser. I needed to move on to other things, but hopefully I can get it working later.
The first modification I made to Flocking was the introduction of multiple species (represented by color). Boids only flock with members of the same species, and they steer away from members of other species. The second modification was collision detection: when two boids collide (because they were unable to turn quickly enough), both boids are removed from the screen, and a quickly shrinking ‘poof’ marks the collision.
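The core of both modifications is small. Here is a rough sketch of the logic with illustrative names, not the actual classes from the applet (which follows n8an’s structure):

// A sketch of the species and collision logic; names are made up for illustration.
import processing.core.PVector  // assumes Processing's core.jar is on the classpath

object FlockingMods {
  class Boid(var position: PVector, var velocity: PVector,
             val species: Int, val radius: Float) {
    // Only boids of the same species count as flockmates for the usual
    // separation/alignment/cohesion rules...
    def flockmates(all: Seq[Boid]): Seq[Boid] =
      all.filter(o => (o ne this) && o.species == species)

    // ...while boids of other species are steered away from.
    def rivals(all: Seq[Boid]): Seq[Boid] =
      all.filter(o => (o ne this) && o.species != species)

    def collidesWith(other: Boid): Boolean =
      PVector.dist(position, other.position) < radius + other.radius
  }

  // Collision pass: any boid that overlaps another is removed, and each removal
  // yields a position at which the shrinking 'poof' gets drawn.
  def resolveCollisions(boids: Seq[Boid]): (Seq[Boid], Seq[PVector]) = {
    val collided = boids.filter(b => boids.exists(o => (o ne b) && b.collidesWith(o)))
    (boids.filterNot(b => collided.contains(b)), collided.map(_.position))
  }
}

The species check just filters the neighbor list before the usual flocking rules run, so the flocking math itself is unchanged.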
Right click and save this jar to try it out, and let me know if you have any questions about the code.
(Note the new directory structure — I’ll transfer the files from previous posts when I get the chance.)
I’ve seen a few sites with all of their content placed on a plane like this:
As interesting and innovative as these pages are, I see them all as plagued by awkward interfaces. My idea is to map the website onto an image of the two-dimensional surface of a keyboard. The site would be navigated by single keystrokes, rather than with mouse gestures. Headings would be in large type on the keys, and when one of those keys was pressed, the window would zoom in around that key. The project data on the surrounding keys would then be legible, and a project could be chosen by striking one of those keys. Image thumbnails on the project keys could be enlarged with a corresponding number key. The spacebar could zoom out, and arrow keys could pan horizontally around the plane.
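The zoom itself is mostly bookkeeping: each key has a position on the plane, and striking it just re-centers and rescales the view around that key. A quick sketch of the geometry, with made-up names and viewport numbers (the real thing would live in the browser, so this is only the math, written in Scala like the other sketches on this blog):

// Hypothetical geometry for the key-zoom: given a key's center and a zoom
// factor, compute the translation that centers that key in the viewport.
case class Key(label: Char, centerX: Double, centerY: Double)
case class View(offsetX: Double, offsetY: Double, scale: Double)

object KeyZoom {
  val viewportWidth  = 1024.0  // assumed viewport size in px
  val viewportHeight = 768.0

  def zoomTo(key: Key, scale: Double): View =
    View(viewportWidth / 2 - key.centerX * scale,
         viewportHeight / 2 - key.centerY * scale,
         scale)

  def zoomOut: View = View(0, 0, 1.0)  // spacebar: back to the whole keyboard

  def main(args: Array[String]): Unit =
    println(zoomTo(Key('g', centerX = 480, centerY = 220), scale = 4.0))
}

Tweening between two View values (interpolating the offsets and the scale) would give the animated zoom.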
I put together a rough sketch file showing different zoom levels and data (PDF).
I thought the idea would work well for a personal portfolio, sorting projects into roughly three categories of eight or so projects each (clustered around, say, the ’s’, ‘g’, and ‘k’ keys). I originally thought a Flash implementation would be required, but I will explore how much of it can be done in Javascript.
To deal with memory issues when there are huge numbers of points, it only draws down to the level of 2px-long edges, so only a bounded number of edges needs to be drawn at any given time. Additionally, the zoom factor has an upper limit so that it does not increase indefinitely (but you can still keep zooming).
I rewrote nearly the entire program to implement these changes, and found that the previous Edge class and heavy use of recursion were not practical for the problems I was trying to solve. This surprised me, given the highly recursive and self-similar nature of a fractal, and I think these projects were valuable case studies for me in algorithm choice.
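Whatever the representation, the 2px cutoff itself is just a screen-space length check before an edge gets subdivided any further. A minimal sketch, with hypothetical names:

// The check that keeps memory bounded: an edge is only subdivided further if it
// would still span more than two pixels on screen at the current zoom.
object DepthCutoff {
  case class Point(x: Double, y: Double)
  case class Edge(a: Point, b: Point) {
    def screenLength(zoom: Double): Double =
      math.hypot(b.x - a.x, b.y - a.y) * zoom
  }

  val minScreenLength = 2.0  // px

  def shouldSubdivide(edge: Edge, zoom: Double): Boolean =
    edge.screenLength(zoom) > minScreenLength
}

Tying the cutoff to the zoom factor is what lets the program keep subdividing as the user zooms in without ever holding more edges than the screen can show.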
Assignment: see Class 3 on the Syllabus
The Snowflake is currently limited to a constant maximum depth (with around 3k edges); otherwise it will crash my browser. I considered improving the zooming functionality to only keep track of the edges that were visible, and thus allow for more depth at high levels of zoom, but I didn’t have time to implement it this week.
Many complexities arise for the different cases — zooming out from the whole snowflake, zooming out from a corner, zooming out from a view of the snowflake with two discontinuous pieces, zooming out from a view with no edges, zooming in on a corner, etc — and handling them all in my existing framework seemed impractical and inelegant.
I think I would have needed to rework the entire algorithm, possibly using a layering system (for the different levels of zoom) and/or a center-centric system, drawing each edge not from its parent edges but calculating it, with a lot of trig, from the location of the center.
Assignment: see Class 2 on the Syllabus and info for part 5
(note: parts 1, 2, 4, and 5 have index.html and index.css files; parts 2 and 5 also have init.js files)
This theme is driving me a little crazy, but designing my own will be a separate project. All of the old posts and comments have been transferred.
Paste the Javascript into Firebug to test it.
This approach is imperfect, however, because someone who sets a clock fast knows how fast it is and accounts for that adjustment in subconscious judgments of how much time is left to complete a given task. The alternative method of “closing your eyes and holding down the button” is still imperfect, since the offset is then random and not necessarily useful. People are also not always in a hurry, and a fast clock in those situations can be inconvenient. Finally, the delusion about the actual time becomes even more fragile when other clocks are involved that either display the correct time or were randomly set to a different incorrect time.
The goal of my project was to prototype an immersive and dynamic delusion about time. I would use the XBee radios to synchronize the time between multiple devices, and Arduinos would interface with the XBees and handle displaying that time. I would use a sensor on a watch device to gauge the user’s stress level; measures of galvanic skin response, heart rate, speed of motion, and watch-checking motions were all considered. Each device in the network would then offset its own current time by an amount that was a function of the user’s stress level. (Note that there is an alternative possibility of slowing down time as the user gets more stressed, in an effort to calm the user down. Both this laid-back approach and the original one have interesting psychological consequences, but the fundamental technical details are the same.)
Thus, as the user got more stressed, time would speed up, and the user would hurry more, become more efficient, and be less late. As the user calmed down again, the offset would slowly return to zero and the time would return to its accurate setting. (The function calculating the offset would approach a limit to prevent the problem of a positive feedback loop.)
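That saturating offset is the one piece that is easy to show in isolation. A minimal sketch of the idea, written in Scala like the other sketches on this blog; the constants and the tanh curve are assumptions for illustration, not what actually runs on the Arduinos:

// The offset grows with stress but approaches a ceiling, which is what prevents
// the positive feedback loop. All numbers here are made up.
object TimeOffset {
  val maxOffsetSeconds = 10 * 60.0   // the offset saturates at ten minutes (assumed)
  val sensitivity      = 0.05        // how strongly stress pushes the clock ahead (assumed)

  // 'stress' is whatever non-negative reading the watch sensor produces (for now,
  // the push button standing in for galvanic skin response, heart rate, etc.).
  def offsetSeconds(stress: Double): Double =
    maxOffsetSeconds * math.tanh(sensitivity * stress)

  def main(args: Array[String]): Unit =
    for (stress <- Seq(0.0, 10.0, 50.0, 200.0))
      println(s"stress $stress -> clock ahead by ${offsetSeconds(stress)} seconds")
}

Because the curve flattens out near the ceiling, a clock that is already running fast can never push the user into stress that pushes the clock even faster.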
I made substantial progress on the implementation of this device network. I hacked a digital watch so that it was possible to set its time from an Arduino by rapidly sending high voltage pulses to the button contacts in the same pattern a person would use to set the watch. I configured a serial LCD screen to receive and display a time and date (in the same manner as an alarm clock) and keep its own internal count of the time. My computer was then able to send out a time and date to the other two devices, and they would then set their own times to be (roughly) in sync with the computer. After the initial times were set, the watch could instruct the LCD to change its time by a specified offset corresponding to a user’s stress level (I ran into difficulties setting the System Time in Mac OS X using Processing, but it would work in theory). Accurate gauging of an individual’s stress level is a project in and of itself (including calibration for each person), so a simple push button was used as a placeholder for that signal.
Additional work on the project could include multiple types of devices (how would one accurately set an analog wall clock without being able to see the hands?) and the introduction of multiple users into the system (what’s a clock to do if one person in the room is stressed and the other is not?).
Arduino code for Watch
Arduino code for SerialLCD
Processing code for Computer
Following are a demo video of the entire project, a demo video of only the watch component, and several images.
Play proceeds as follows: one player selects a choice using one of the switches, and that choice is transmitted to the other player. An LED lights up on the second player’s board to indicate that the first player is waiting to receive a choice in return. The second player then makes a choice, which is transmitted to the first player, and once each player has the other player’s choice, a light indicates what the other player chose. An additional light on each board indicates the winner: if there is a tie, both lights are on; otherwise the winner is the player with the illuminated light. Either player can press a New Game switch to reset his own game and tell the other player to reset.
We extended the project to build a third circuit to keep score of the game. A third Arduino and XBee were used with a modification of the Binary Counter project I had built before. Each board keeps track of its own score and transmits that score to the scorekeeper when the New Game button is pressed. Scores up to 15 for each player are shown in binary on two sets of four LEDs.
The Arduino code for each of the two players is online here, and the code for the scorekeeper is online here.
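For reference, the display logic behind those four LEDs is just bit masking. A sketch in Scala rather than Arduino C, only to show the idea; the names are made up:

// Four LEDs show a score from 0 to 15 in binary, least significant bit first.
object ScoreDisplay {
  def ledStates(score: Int): Seq[Boolean] =
    (0 until 4).map(bit => ((score >> bit) & 1) == 1)

  def main(args: Array[String]): Unit =
    for (score <- Seq(0, 5, 15))
      println(s"score $score -> LEDs " + ledStates(score).map(on => if (on) "1" else "0").mkString)
}

On the actual scorekeeper, each of those boolean states maps to a digitalWrite on one LED pin.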
Video with scorekeeper:
Video without scorekeeper:
The assignment was based heavily on the exercises in Chapter 6 of Tom Igoe’s book Making Things Talk (amazon).
The Arduino code is online here.
Since access to the floor happens (almost?) exclusively through the elevator and the adjacent stairwell, it might be feasible to use this bottleneck to make note of people entering and leaving. Rob mentioned previous attempts to have someone swipe a card, but it seems an E(lectronic) A(rticle) S(urveillance) system similar to those used in retail stores to protect against shoplifters might work well. The units cost one or two thousand dollars new, but they’ve been around for a while, so maybe older/cheaper models are available. Then each student could carry an RFID or similarly purposed chip in his or her wallet, and the scanners could note when that person passes through (without, of course, making the noise).
A pretty powerful API could be created for the system that other students could build on (a rough sketch follows below). It would be useful to be able to query the system about whether a certain person is on the floor or not (this might avoid the privacy concerns that people had with BlueWay), and individuals could sign up for the notifications that interest them.
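Purely as a thought experiment, the API might look something like this; every name here is hypothetical, since nothing has been built:

// A hypothetical floor-presence API that other students could build on.
import java.util.Date

trait FloorPresence {
  // Is this person on the floor right now?
  def isOnFloor(personId: String): Boolean

  // When did the scanners last see this person pass through the bottleneck?
  def lastSeen(personId: String): Option[Date]

  // Call the handler whenever this person's presence changes (true = arrived).
  def subscribe(personId: String, onChange: Boolean => Unit): Unit
}

Notifications and higher-level tools (who is around right now, how busy the floor is) could all be layered on top of those three calls.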
The everyday places that we linger will start to take on a new relevance with the widespread adoption of devices equipped with proximate wireless connectivity — Bluetooth, RFID, WiFi, …, when the simple act of lingering creates opportunities for meaningful data exchange.
http://www.janchipchase.com/blog/archives/2008/05/the_small_crowd.html
Prepare the breadboard –
Add a Digital Input (a switch) –
Add Digital Outputs (LEDs) –
Program the Arduino –
Get Creative –
Part 1 -
The voltage between the power and the ground rows on the breadboard was approximately 4.92V, with about 12V coming from the adapter socket.
Part 2 -
The voltage across the first resistor was approximately 3V, and the voltage across the LED was approximately 1.9V. Their sum (about 4.9V) is approximately the total voltage across the circuit, as expected.
Part 3 -
The voltage across the first LED was approximately 2.52V and the voltage across the second LED was approximately 2.48V. These are nearly the same, as expected. Three LEDs in series do light, but they are less bright because each receives only approximately 1.6V.
Part 4 -
In parallel, each LED sees the full supply voltage of approximately 4.9V (4.94V, 4.94V, and 4.92V were measured). The current drawn by the LEDs was approximately 20 milliamps.
Part 5 -
The voltage from the potentiometer ranged from 0V to 2.98V, depending on the setting.