For those of you fortunate enough not to have seen me in the past ten days or so, I’ve some news for you: I’m a parent now. Not of a human child but of a terrible robot child: @WoolfBot3000. If you’ve been cursed with my presence, then I can only apologise for harping on about it. My terrible robot child is a Twitter bot that’s set to tweet out a new opening to Mrs Dalloway every hour. You know the one – Mrs Dalloway said she would get the flowers herself. What an absolute hero.

Before we go any further, here are some of my personal favourite WoolfBot utterances, partly to give you some idea of what the WoolfBot spits out, and partly just to show off:

My terrible robot child was surprisingly easy to make, even given that I’m the sort of humanities student whose eyes glaze over as soon as someone says ‘JavaScript’. @WoolfBot3000 is run from a hosting platform called Cheap Bots Done Quick, created by George Buckenham, which does what it says on the tin. More specifically, it hosts bots like mine, like Thinkpiece Bot, like Infinite Deserts and like Soft Landscapes. All of these bots are created using a JavaScript library called Tracery, developed by Kate Compton. Tracery is a tool for creating generative grammars with a minimum of fuss.

In simple terms, Tracery works a bit like Mad Libs: you give it a sentence structure with a set of placeholders, and lists of items to put in the placeholders. These items can get as long and complicated as you like, and you can nest placeholders inside items so a generated piece of text can theoretically stretch out forever. You can also set the code up to remember certain things, so that your text’s characters have a consistent name, or pronouns, for example.
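
To see what that looks like in practice, here’s a tiny, made-up grammar run through the Python port of Tracery (the tracery package on PyPI) – none of these rules come from the real WoolfBot, and Cheap Bots Done Quick actually runs the JavaScript original, so treat this as an illustrative sketch rather than the genuine article:

# A toy Tracery grammar using the Python port (pip install tracery).
# Illustrative only: the rules below are made up for this example.
import tracery
from tracery.modifiers import base_english

rules = {
    # Expansion starts at "origin"; the [hero:#name#] action picks a name
    # once and remembers it, so the same character appears in both sentences.
    "origin": "#[hero:#name#]story#",
    "story": "#hero# #verb# a #adjective# #animal#. #hero# was delighted.",
    "name": ["Clarissa", "Septimus", "Lily"],
    "verb": ["befriended", "sketched", "misplaced"],
    "adjective": ["melancholy", "luminous"],
    "animal": ["spaniel", "moth"],
}

grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)  # optional modifiers like .capitalize and .a
print(grammar.flatten("#origin#"))
# e.g. Lily sketched a luminous moth. Lily was delighted.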

A very stripped-down version of my code with most of the items missing (no spoilers!) looks like this:

{
  "origin": ["#[#setPronouns#]story#"],
  "story": "#title# #name# #verb1# #they# would #verb2# the #noun# #themselves#.",
  "setPronouns": ["[title:Mrs][they:she][themselves:herself]", "[title:Mr][they:he][themselves:himself]"],
  "name": ["Dalloway", "Ramsay"],
  "verb1": ["said", "pledged"],
  "verb2": ["get", "requisition"],
  "noun": ["flowers", "Lighthouse"]
}

Most of it’s pretty self-explanatory: “origin” is at the root of Tracery’s grammar and governs what the output contains, while “story” is what determines the Tweets that you see. Items listed under “name” slot neatly into the placeholder marked #name#, and so on. “setPronouns” determines whether my person is a man or a woman (or indeed non-binary), and because that choice is remembered for the rest of the expansion, it governs how the person is referred to throughout the sentence.
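
If you want to see what a grammar will actually produce before pasting it into Cheap Bots Done Quick, you can run it locally. Here’s the stripped-down grammar above going through the Python port of Tracery – bearing in mind that CBDQ runs the JavaScript original, and I’m assuming the port treats the bracketed pronoun-setting actions the same way:

# Test-driving the stripped-down grammar with the Python port of Tracery
# (pip install tracery). A local sketch only; the real bot lives on CBDQ.
import tracery

grammar = tracery.Grammar({
    "origin": ["#[#setPronouns#]story#"],
    "story": "#title# #name# #verb1# #they# would #verb2# the #noun# #themselves#.",
    "setPronouns": ["[title:Mrs][they:she][themselves:herself]", "[title:Mr][they:he][themselves:himself]"],
    "name": ["Dalloway", "Ramsay"],
    "verb1": ["said", "pledged"],
    "verb2": ["get", "requisition"],
    "noun": ["flowers", "Lighthouse"],
})

for _ in range(3):
    print(grammar.flatten("#origin#"))
# Possible outputs include:
#   Mrs Dalloway said she would get the flowers herself.
#   Mr Ramsay pledged he would requisition the Lighthouse himself.

Bundling the title, pronoun and reflexive into a single “setPronouns” choice is what stops the grammar from ever producing something like ‘Mrs Dalloway said he would get the flowers himself’ – that’s the remembering trick from earlier doing its job.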

So that’s how my robot child generates text. It’s not particularly advanced, but it gives me a chuckle every so often.

There’s definitely room to improve though. Right now, WoolfBot more or less tweets what I tell it to, but that’s it. It’s bound by the limits of my imagination and by what Woolf titbits I can dredge up. So I’m trying to puzzle out some Python to create a new terrible robot child with a measure of artificial intelligence, so that it can write its own Woolfy creations. The details would depend on the flavour of machine learning/artificial intelligence I’d use, but essentially the new bot would teach itself to write by reading Woolf over and over. Which is really a good idea for students, come to think of it.
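
The details are still hazy, but to give a flavour of the very simplest version of ‘teaching itself to write by reading’: a Markov chain just tallies which word tends to follow which in a source text, then wanders through those tallies to string new sentences together. It’s nowhere near proper machine learning, and certainly not a blueprint for the eventual bot, but here’s a rough Python sketch – the filename is a stand-in for whatever Woolf text you feed it:

# A toy word-level Markov chain: the simplest possible way for a bot to
# "learn" from a corpus. A sketch of the idea, not the future WoolfBot.
import random
from collections import defaultdict

with open("mrs_dalloway.txt") as f:  # placeholder filename for a Woolf text
    words = f.read().split()

# Tally which words follow which.
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Start somewhere and wander through the tallies.
word = random.choice(words)
output = [word]
for _ in range(25):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))

Fancier flavours of machine learning would follow the same broad logic – learn the patterns in the source text, then generate new text from them – just with a great deal more statistical machinery behind the scenes.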

This brings me on to a broader point about Twitter bots and the digital humanities. Right now, as far as I’m aware, the digital humanities seem to view the ‘digital’ part as ancillary to the ‘humanities’ part. Typically, however invested it is in methodology or in the act of analysis by digital means, the field tends to view its tools and methods as shedding light on an object – a text, an image, an archive – that’s already been made.

Which is no bad thing: methodologies such as Franco Moretti’s distant reading wouldn’t be possible without computer-based corpus analysis to power through vast numbers of texts and pull out data, while in my own field, the Modernist Archives Publishing Project is making the Hogarth Press’s archive freely and readily available. But digital humanities scholarship has tended to ignore the generative potential of computing technologies – their ability to create something new.

My terrible robot child might not be very clever at the moment, but hopefully it’s doing the tiniest bit to tip the scales. Watch this space for more.