Conversations about bots tend to evoke the infinite. We see claims that bots are endless,1 that they run on cloud-based servers that “never go dark.”2 We even see articles asking “What happens to a Twitter bot after its maker dies?”3 with the presumption that our creations might outlive us all.
I'm fairly certain that most of the people making these claims of eternal life are entirely aware of how hyperbolic they are. Even granting the hyperbole, though, it is worth examining the assumption underneath it: that bots will be long-lived enough for such hyperbole to feel natural. What is the lifespan of a bot, and what factors determine it?
This is far from comprehensive, but I'd like to look briefly at some of the components of bots that are subject to breakdown. First, we must remember that bots run on computers, and that computers are physical objects that exist in the world. Hard drives fail, power cuts out, servers catch fire or flood. When I started creating bots they ran on a Raspberry Pi on my desk, and they all died one day when one of my cats jumped onto the desk and tripped over the power cable. The sudden loss of power corrupted the SD card, and I lost a number of bots permanently.
Moving our bots to cloud servers may give them longer lives, but they're not entirely safe there either. Servers can still go down, and service can otherwise be interrupted. Just as seemingly random objects can be confiscated when they're packed in a shipping container that customs seizes, bots can be shut down if their server is found to be hosting criminal content.
Money can also be an enemy to bots: the popular Cheap Bots Done Quick service went down in May when George Buckenham forgot to renew the domain name.4 The botmaker Vex0rian faced a similar squeeze when Heroku lowered the number of processes that can run on its free tier to five,5 forcing them to either bring their bot count under the new, lower cap or start paying for the service, a difficult choice given that very few botmakers see much financial compensation for their work.
In addition to these very real financial issues, some financial issues can be entirely imaginary: on the 18th of February this year, Heroku sent its customers absurdly large bills of hundreds or even thousands of dollars for levels of service that were supposed to be free. Dozens of bots went silent as their creators took them offline in anticipation of further charges. At least some botmakers switched away from Heroku entirely6 as a result of the billing issue, while others considered switching but have tentatively decided to stay, albeit shaken in their faith in the service.7 Moments like this can send ripples through the bot-verse, shutting down bots or forcing them to be rewritten to suit the requirements of new hosts like DigitalOcean or Linode.
Generally speaking, there are two major kinds of bots, varieties that Mark Sample has dubbed “closed” and “green.”8 Closed bots are those that work from a predefined set of materials and, given enough time, will eventually tweet every possible combination of those materials. Green bots, by contrast, turn to outside resources and so can carry on generating new content with no clear end in sight. Only when the green bot’s external resources are exhausted or vanish will these bots cease generating new content.
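To make the distinction concrete, here is a small illustrative sketch, not anyone's actual bot: the closed bot draws every tweet from a fixed, hand-written corpus it will eventually exhaust, while the green bot leans on a hypothetical fetch_word_of_the_day() helper standing in for an external service such as Wordnik or Wikipedia.

```python
# Illustrative only: a toy "closed" bot and a toy "green" bot.
import itertools
import random

ADJECTIVES = ["infinite", "mortal", "silent"]
NOUNS = ["bot", "server", "archive"]

def closed_bot_tweets():
    # Every possible tweet exists up front; given enough time the bot
    # runs through all of them and can only repeat itself.
    for adjective, noun in itertools.product(ADJECTIVES, NOUNS):
        yield "the {} {}".format(adjective, noun)

def fetch_word_of_the_day():
    # Hypothetical stand-in for a network call to an external service;
    # if that service changes or disappears, the green bot breaks with it.
    return random.choice(["sesquipedalian", "petrichor", "apricity"])

def green_bot_tweet():
    return "today's word: {}".format(fetch_word_of_the_day())
```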
Consequently, green bots face their own distinct threats. Because green bots make use of external resources, they are dependent on those resources. A quick survey of bots reveals a number of resources in play, such as Google Maps, news feeds, Wordnik, and Wikipedia. Any of these is just as likely to fail as Heroku was, and that failure, for the most part, remains entirely out of the botmaker's hands.
The failure of bots that rely on these kinds of resources can be piecemeal, and can therefore take a while to notice. I encountered this myself with my @edmontonreads bot, which was meant to share information about books and programs at the Edmonton Public Library. First the bot stopped tweeting events when the event data's formatting changed, and then it stopped tweeting catalogue items when the catalogue API was reset. As these external services failed, my bot went from tweeting event and book recommendations, to just book recommendations, and finally to nothing. These problems are ultimately fairly simple to fix, but they require the active engagement of a human, and so are troublesome if we are thinking of bots as infinite.
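What that piecemeal failure looks like in code is roughly this, where fetch_events() and fetch_catalogue_items() are hypothetical stand-ins for the library's real feeds rather than anything from the actual bot:

```python
# A sketch of piecemeal failure: each source is tried independently, so a
# broken feed quietly drops out of the rotation instead of raising an alarm.
import random

def fetch_events():
    # Simulates the first breakage: the event feed's format changed.
    raise ValueError("unexpected event data format")

def fetch_catalogue_items():
    return ["Borges's collected fictions, available at the downtown branch"]

def compose_tweet():
    candidates = []
    for source in (fetch_events, fetch_catalogue_items):
        try:
            candidates.extend(source())
        except Exception:
            # A failed source is skipped silently; the bot keeps tweeting
            # from whatever still works, and nobody notices anything is wrong.
            continue
    return random.choice(candidates) if candidates else None

tweet_text = compose_tweet()
if tweet_text is not None:
    print(tweet_text)  # the real bot would post this to Twitter
# Once every source has failed, compose_tweet() returns None and the bot falls silent.
```

The silent skip in that loop is exactly what makes this kind of failure so easy to miss.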
When thinking about how to work around the fragility of bots, and how we might keep them up and running for something approaching forever, we can turn to Jorge Luis Borges's “The Library of Babel” for advice: in Borges's library, the destruction of any one of the unique books is not nearly the blow it seems, because there are thousands upon thousands of copies identical to it but for a single letter. Perhaps the exact implementation of a bot isn't as important as the general idea? Do we need all of our everyword-like bots to use the same corpus, or would it work just as well to draw our word list from another, slightly different source?
To this end we can also consider the Ship of Theseus. What is the result of slowly replacing each of a bot's parts as they degrade? If we change servers when one company folds, switch image-search APIs when the existing one becomes too restrictive, and move to a new social media platform as an old one falls out of vogue, what happens to our bots? Can we still think of them as the same bots? Are we extending a bot's life, changing it fundamentally, or both?
It seems to me that both of these approaches offer us ways to keep our bots going for as long as possible, but they also have the effect of slowly changing our bots. These changes can alter the aesthetics of the bots as different resources are likely to favour different types of actions. Replacing a conservative, institutional word list with one generated collaboratively could significantly shape the tone of a bot, for example, making it move from sounding stodgy to sounding hip or, perhaps worst of all, like someone stodgy trying to sound hip.
While there is great value to this preservationist approach, we should also consider its inverse: designing bots that are built to fail.
I am interested in a subcategory of what Mark Sample calls “closed” bots, one I would like to call terminating bots. While bots working from closed lists will eventually repeat themselves, they can continue in this repetitive manner ad infinitum: Metaphor-a-Minute may start to repeat itself, but the tweets will keep coming. Terminating bots, on the other hand, will one day stop entirely, and will stop intentionally. @everyword's last tweet was on June 7th, 2014, and unless Allison Parrish restarts the bot, that will remain its final tweet.
I wanted to create a terminating bot of my own, one that took advantage of the many possibilities I see in this type of bot. Inspired by Dorothea Baker's @onceuponaquine, a bot that tweets its own code; William Gibson's self-deleting (or, more accurately, self-encrypting) poem “Agrippa”; and an episode of Adventure Time in which the character BMO deletes their own code, I thought I would build a bot that disassembles itself and documents the process in tweets. The end result of my efforts is @rm_everyfile.
Every hour @rm_everyfile deletes another file or directory from the Raspberry Pi server it runs on and catalogues the lost file. The process continues, file by file, until the bot deletes something essential, perhaps even its own code. At that point the bot will simply stop; we will only know that it has finished its task when it falls silent.
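For the curious, here is a minimal sketch of how a bot in this vein might be put together. It is an illustration of the idea rather than @rm_everyfile's actual source, and the tweet() helper is a placeholder for a real Twitter client.

```python
# A minimal sketch of a self-deleting bot in the vein of @rm_everyfile.
# WARNING: run as written, this will destroy the machine hosting it,
# which is, of course, the point.
import os
import time

def tweet(text):
    # Placeholder: post `text` with a real Twitter client here.
    print(text)

def next_target(root="/"):
    """Walk the filesystem bottom-up and return one deletable file or empty directory."""
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        if filenames:
            return os.path.join(dirpath, filenames[0])
        if not dirnames:
            return dirpath  # an empty directory
    return None

while True:
    target = next_target()
    if target is None:
        break  # nothing left to delete
    if os.path.isdir(target):
        os.rmdir(target)
    else:
        os.remove(target)
    tweet("deleted {}".format(target))
    # Eventually one of these deletions takes out something the bot itself
    # depends on (a library, the interpreter, its own code), and the loop
    # ends not with this break but with silence.
    time.sleep(60 * 60)  # one deletion per hour
```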
In some ways, @rm_everyfile is a bot akin to @everyword: it has a limited lifespan, and it is building towards an ending its readers can anticipate. In one way, though, it differs significantly: there is a sense of surprise about its conclusion that was never there with @everyword, which proudly advertised its completion date in its bio.
Another bot in this vein is Colin Mitchell's @spacejamcheck, which exists to catalogue the presumed, eventual failure of the website for the 1996 movie Space Jam, a site that has so far been up and running for two decades. The anticipation of an ending inherent in @spacejamcheck is similar to that of @rm_everyfile: we are less interested in these bots while they are tweeting than we are in the moment the tweets end.
There are other examples of such terminal bots, but the genre is, to my mind, fairly underrepresented. While we may not personally believe that our bots will go on forever, we nonetheless tend to design them so that they might. Rare is the Twitter bot that is created with an end date in mind, and rarer still is the bot for whom that end date is the very reason it exists.
I would like to encourage you all to explore the possibility space of bots that break, crumble, and make their fault lines visible. Try building bots that show their wear and tear, that look better when they get scuffed up. Stop writing programs that will run for as long as their server bill gets paid and start writing bots that run on a Raspberry Pi that's always on the cusp of bursting into flames. Write bots that, like us, are mortal, and whose beauty lies in what little art they're able to create before they gutter and go out.
Thank you.
http://www.popsci.com/see-endless-stream-trippy-photos-thanks-deep-dream-twitter-bot
http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots
Personal correspondence with Martin O'Leary
Personal correspondence with Emma Winston
http://www.samplereality.com/2014/06/23/closed-bots-and-green-bots/