
The Believer Logger - Proposals Toward the End of Writing

Proposals Toward the End of Writing

By Tony Tulathimutte

I. The Solution to Cliché

…whenever thought for a time runs along an accepted groove—there is an opportunity for the machine.

—Vannevar Bush

Some writers morbidly fixate on computer interference in their working lives: it’s a distraction, an unwanted convenience; it debases the written word, revolutionizes form, or is “making us stupid.” Often it’s framed as anathema to serious writing: Philip Roth worries that books “can’t compete with the screen”; Zadie Smith credits the Internet-blocking app Freedom in the Acknowledgements of NW; Doris Grumbach grumbles that word processors allow people to write too much; Jonathan Franzen squirts superglue into his laptop’s Ethernet port.

For all this handwringing, there’s less discussion about technology’s direct interventions in the writing itself, especially in an editorial capacity. Consider spellcheck, whose influence is obscure but probably quietly tremendous, not just on writing but on writers themselves—a 2012 British survey found that two-thirds of people used spellcheck “all or most of the time,” and one-third misspelled “definitely.” (The organization blamed this on the “auto-correct generation,” though the causal link appears baseless.) And can we ever measure contemporary literature’s debt to cut-and-paste, find-replace, versioned backup, web research, online correspondence, Track Changes?

Of course, these general-purpose functions influence far more than just literature, but we’re also beginning to see text analysis tools with a specific literary focus. These tools promise to show us our true reflections in the form of hard statistical data—insights beyond the reach of mere human editors. These include everything from word counters and sentence-length analyzers to hundreds of more boutique gizmos: Gender Guesser tries to ascertain an author’s gender by comparing a sample’s word frequencies against trends in prose written by women and by men, while MetaMind can be programmed to assess a writing sample’s “viewpoint,” from its political leanings to its “positivity.” Services like Turnitin detect plagiarism, while others like PhraseExpress insert entire sentences right under your fingertips.

The recent Hemingway app goes even further, offering dogmatic editorial guidance to make your prose “bold and clear”:

Hemingway highlights long, complex sentences and common errors; if you see a yellow sentence, shorten or split it. If you see a red highlight, your sentence is so dense and complicated that your readers will get lost trying to follow its meandering, splitting logic — try editing this sentence to remove the red.

It also recommends the indiscriminate excision of adverbs and passive constructions. Tallying up all the infelicities, it assigns the passage a numerical grade, representing “the lowest education level needed to understand your text,” which oddly equates boldness and clarity with legibility to young children (presumably, the best score would be “Illiterate”). Ernest Hemingway’s own prose often fails the test, though, as Ian Crouch observes, Hemingway is usually making a stylistic point wherever he trespasses against his own putative rules. Meanwhile, Nabokov’s “Spring in Fialta” gets the worst possible score of 25 (a second-year post-doc?).

With inventions like these, many of which are intended to improve prose’s suitability to a particular purpose, it seems inevitable that we’ll soon have programs aimed at broader literary purposes. Imagine, for instance, a computer program that detects clichés at the sentence level. Existing attempts are based on small databases of fixed idioms. Suppose our cliché detector is a simple extension of the language-checking features already baked into most word processing software, underlining each trite phrase with a baby-blue squiggle. It analyzes the text for any sequences of words that statistically tend to accompany each other—and the statistical database of clichés, in turn, is based on a Zipfian distribution of word groupings obtained from the quantitative analysis of a large prose corpus. Every phrase ranked above a certain score is flagged as a cliché. No more “in any case” or “at this rate,” no more “battling cancer” or “wry grin” or “boisterous laughter”—though the program might forgive idioms that lack basic synonyms, like “walking the dog.”
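A minimal sketch of this scoring scheme, with a three-sentence toy corpus standing in for the omni-corpus and raw bigram counts standing in for the full Zipfian model:

```python
import re
from collections import Counter

def ngrams(text, n=2):
    """Lowercase, strip punctuation, and return every n-word run."""
    words = re.findall(r"[a-z']+", text.lower())
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def build_counts(corpus, n=2):
    """Tally phrase frequencies over a corpus of texts."""
    counts = Counter()
    for text in corpus:
        counts.update(ngrams(text, n))
    return counts

def flag_cliches(sentence, counts, threshold=3):
    """Flag phrases whose corpus frequency meets the cliché threshold."""
    return [" ".join(g) for g in ngrams(sentence) if counts[g] >= threshold]

# Three-sentence toy corpus standing in for the omni-corpus
corpus = ["He gave a wry grin.",
          "She fought on with a wry grin.",
          "A wry grin crossed his face."]
counts = build_counts(corpus)
print(flag_cliches("He offered a wry grin.", counts))  # ['a wry', 'wry grin']
```

A production detector would replace raw counts with frequencies normalized against the corpus’s Zipfian distribution, and score variable-length phrases rather than fixed bigrams; but the baby-blue squiggle needs nothing more exotic than this.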

The larger the corpus, the better; Google could team up with the NSA to digitize every word ever written or recorded, and make this omni-corpus available for indexing, mining, and categorizing. Or, trained on a personal corpus of writing samples, the detector could learn an author’s pet phrases. Zadie Smith pointed out that in all of her novels someone “rummages in their purse”; our program would flag each instance, as well as any variations: “they had rummaged through their purses,” “purses were rummaged,” etc. And it could be tailored to specific genres: “heaving bosoms” in romance, “throughout history” in student papers, “please advise” in business emails.

Beyond merely detecting clichés, the program could also offer statistically unique replacements for each cliché, constructed by thesaural substitution and grammatical reshuffling. You might input the sentence:

After wolfing down his meal, he began the arduous task of checking his email, the mere presence of which triggered a pounding headache.

With the result:

After gulping his repast, he began the irksome undertaking of scrutinizing his online correspondence, the very proximity of which actuated a clobbering neuralgia.

The outcome is certainly more unique, however awkward and purple. To mitigate this, the app might offer a setting that favors word combinations that have been used before, though infrequently. Thus the suggestions would be vetted by prior usage, without quite rising to the level of cliché.[1]
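The substitution side might be sketched like so; the thesaurus and corpus frequencies here are invented placeholders, where a real system would consult something like WordNet and the indexed corpus:

```python
# Hypothetical thesaurus and corpus frequencies; a real system would
# consult WordNet and the indexed omni-corpus for these.
THESAURUS = {"wolfing": ["gulping", "bolting"],
             "arduous": ["irksome", "onerous"],
             "pounding": ["clobbering", "throbbing"]}
CORPUS_FREQ = {"gulping": 4, "bolting": 120, "irksome": 7,
               "onerous": 300, "clobbering": 2, "throbbing": 5000}

def freshen(word, max_freq=10):
    """Swap a word for its rarest synonym that has still been used
    before (frequency > 0) without rising to cliché (<= max_freq)."""
    candidates = [s for s in THESAURUS.get(word, [])
                  if 0 < CORPUS_FREQ.get(s, 0) <= max_freq]
    return min(candidates, key=CORPUS_FREQ.get) if candidates else word

print(freshen("pounding"))  # clobbering
print(freshen("email"))     # email (no synonyms on file; left alone)
```

The `max_freq` ceiling is the “used before, though infrequently” setting: raise it and the suggestions drift back toward cliché; set it to infinity and you get pure, purple novelty.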

What would happen if this program (the Sui Generator?) were broadly adopted, making cliché trivially eradicable? There could be a subtle across-the-board improvement of style as writers and editors supplemented their editing process with it. But automated blemish-free prose may become tacky and cloying, the prose equivalent of Photoshop or Auto-Tune—the deliberate use of cliché may become an act of subversive camp, or a reassuring watermark of human authorship.

II. Art in the Age of Mechanical Production

But I have no native language,
I can’t judge, I suspect I write garbage.

—Eugene Ostashevsky, Iterature

One can never assume that tools will only be used for their intended purposes. In the same way that people have appropriated plagiarism detectors to gather research citations, it’s easy to imagine people using the cliché detector as a composition aid. Picture an uninspired-yet-tenacious user—the Lazy Student—slamming out a cliché-infested rough draft, then methodically stepping through it, iterating through elegant variations like a slot machine until he finds one that sounds right, repeating as necessary. All that’s required is a decent ear to produce sentences that will be, statistically speaking, highly original. If this method of editing proved more efficient and at least as good as traditional writing, it would put taste at a premium, and render talent as unnecessary and quaint as good penmanship. Better writing produced by worse writers.

Being particular to sentence-level flaws, the cliché detector we’ve described is just a rudimentary line-editor, a nose-hair trimmer. It doesn’t address the larger problems of clichéd sentiment, faddish style, stylistic vampirism, stereotyped characters, shopworn narrative devices. It has no sense of context, and wouldn’t be able to distinguish clichés from quotations, allusions, parodies, or collage pieces.

But you could suppose that each of these problems is just an engineering hurdle waiting to be jumped. If spellcheckers correct words and cliché detectors fix phrases, what would a coarser-grained cliché detector that addressed whole sentences and scenes look like? Suppose we break narrative prose into analytic units:

Letter < Word < Phrase < Sentence < Paragraph < Scene < Section < Book

Now say a certain Impatient Writer wanted to write a book. By merely providing a set of input parameters—genre, length, tenses, points of view, number of characters (major and minor), language and dialect, tone, mean chapter length, historical era, authorial gender and nationality, literary influences, and so on—the Impatient Writer would, after much diligent clicking, be able to produce not only a felicitous phrase but a whole book.

As vast a leap as that seems, the conceptual basis and practice of generating books by formula is already widespread. Tabloids, porn, and mass-market pulp are already strictly parameterized. The Harlequin romance novel guidelines—which vary across thirty subgenres (“Spice,” “Medical Romance,” “Nocturne”)—stipulate length, secondary characters, character motivation, the love interest’s marital status, etc. This seems to confirm our anxieties about the machine-produced, formulaic writing we’ve seen satirized in literature, like the padograph in Bend Sinister that replicates human handwriting “with repellent perfection,” or the Oceanic literature of Orwell’s 1984,

rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes, films oozing with sex, and sentimental songs which were composed entirely by mechanical means on a special kind of kaleidoscope known as a versificator.

But don’t these anxieties bear a whiff of ingrained anthropocentrism, the old fears about technological encroachment on human specialness? Does a versificator produce rubbish only because it’s been programmed to?

There’s plenty of evidence to suggest both that good literature abides by formulas and that formulaic writing can be rewarding. Joseph Campbell and Gustav Freytag identified narrative structures and archetypes in the canonical texts of several cultures; Vladimir Propp and Claude Lévi-Strauss codified mythologies in quasi-mathematical formulas, while Northrop Frye argued for discarding subjective criteria entirely and analyzing narrative as a “pattern of knowledge,” such that “criticism begins when reading ends.”

Taking Frye’s approach to its technological extreme, Franco Moretti’s recent “distant reading” critical methods use computers to conduct quantitative analysis over vast sets of prose. Along with his Stanford Literary Lab, Moretti has expressed plots as formulas, analyzed the relationship between sound shape and meaning in poetry, expressed relationships between characters in graph form, and codified genres (Gothic romance, bildungsroman, silver-fork novel) based on word frequency.

If computers can recognize literary formulas, it is not hard to imagine applying them to generate new texts. Already computers are finding success in specialty fields: not only have they passed the original Turing test for human conversation, but academic journals have accepted autogenerated computer science research papers, the LA Times files autogenerated reports on earthquakes as they happen, and NarrativeScience collates statistical data into blog content for mainstream media outlets. A New York Times article, reporting on these services, is titled “If an Algorithm Wrote This, How Would You Even Know?”

Supposing that it is possible for a computer to generate writing, the question then becomes: is it any good? This clearly subjective judgment call is a little more complicated than it seems. By “good,” do we mean semantically and/or stylistically coherent? Original? Passably human? Or some combination of those? At least one of these matters seems to be settled by the Bot or Not project, which invites visitors to guess whether poems were composed by humans or computers, a sort of crowdsourced literary Turing test. Several computer-generated poems succeed in fooling most users; here is one of the most convincing:

A Wounded Deer Leaps Highest

A wounded deer leaps highest,
I’ve heard the daffodil
I’ve heard the flag to-day
I’ve heard the hunter tell;
’Tis but the ecstasy of death,
And then the brake is almost done,
And sunrise grows so near
sunrise grows so near
That we can touch the despair and
frenzied hope of all the ages.

It may not be any good or even original—it cribs several phrases, including its title, from Emily Dickinson—but it is convincingly “human” enough to dupe 68% of Bot or Not participants, testifying both to computers’ compositional abilities and to people’s collective expectations about what human poems sound like.  

The poem was generated by the “Cybernetic Poet” developed by the futurist luminary Ray Kurzweil, who said his invention could be “a useful aid to real-life poets looking for inspiration or for help with alliteration or rhyming. But I am not intending for it to be a huge money maker.” (Ha.) It’s easy to see why he’d downplay the promise of his own invention, not only for its limitations, but because it is easiest and least threatening to see computers as mere prostheses, basing their effects on a sturdy foundation of human intelligence. Accordingly, most bots simply curate or recombine the huge troves of found text online, like the Pentametron that collects rhyming tweets in iambic pentameter, The New York Times’ headline haiku bot, or the “What Would I Say” app that recombines your Facebook status updates.
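Recombiners of this curatorial sort can be startlingly simple. The sketch below uses a first-order Markov chain, the folk technique commonly attributed to apps of the “What Would I Say” variety (the statuses string is an invented stand-in for a real feed):

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, seed, length=8):
    """Walk the chain from a seed word, recombining the source at hazard."""
    out = [seed]
    while len(out) < length and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

# Invented stand-in for a feed of status updates
statuses = "I love the beach I love the rain the rain is here"
chain = train(statuses)
print(babble(chain, "I"))
```

Every word of the output is the user’s own, and every two-word transition has a precedent in the source; the prosthesis merely reshuffles, which is exactly why its effects rest on “a sturdy foundation of human intelligence.”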

On a deeper technical level, there are coding platforms like the Natural Language Toolkit (NLTK), which provides for the “classification, tokenization, stemming, tagging, parsing, and semantic reasoning” of English prose; it makes rhyme, scansion, and any other formal linguistic operation trivial for computers to crank out by the trainload. These tools help create apps like RapPad, a lyric-composition tool that counts syllables, suggests rhymes, and even cribs entire lines from other songs to match ones you’ve written (“We cannot know his legendary head / I don’t know what y’all heard but hip hop ain’t dead”). The NLTK was also used to create JanusNode, a “user-configurable dynamic textual projective surface” that converts input text into poems in various styles; users can “eecummingsfy” and “Dadafy” their text. Its creator has used it to generate Robert Johnson-style blues lyrics (“I went to the country, far as my eyes could see / Ain’t got nobody to hate and love with me”), as well as a book of poems—which means that JanusNode has its own Amazon author page.[2]

Regardless of whether computer poems are passably human, are they enjoyable? Interestingly, their chief appeal is usually based on our knowledge that they’re computer-generated. The Cybernetic Poet’s “A Wounded Deer Leaps Highest” is only impressive once we know it was mechanically composed. Many have been amused by autocomplete mistakes, spam email gibberish, and gimmicky word generators like the Academic Essay Title Generator (“Otherness and Intercourse: Reinscribing the Invader and/in Chewing Gum”).

The effect is simple, yet apparently engaging enough that several human attempts have been made to reproduce it. Flarf poetry and “net literature” projects like Christophe Bruno’s Iterature are largely attempts to mimic the essence of content-free writing to collage or kitsch ends. A more illuminating example is the popular @Horse_ebooks Twitter account, an uncannily precocious spam bot producing non sequiturs (“GLIMPSE AT MY LIFESTYLE”) that were unfailingly lauded as “poetic” or “like Zen koans.” When it was revealed to have been acquired and manually operated by a web-savvy BuzzFeed employee in its later years, there was a keen sense of letdown that its author wasn’t a computer—that Deep Web mystique had been so successfully reverse engineered.

So computer-generated writing can occasionally pass as human. Yet it appears to be engaging mostly in how it haphazardly separates meaning from intention, a naive serendipity that one might call “Bots Say the Darndest Things.” Lacking consciousness and intention, it can’t convey anything of its own; it can only warp human writing at hazard, failing and succeeding in mildly interesting ways.

But perhaps this is irrelevant, and only seems limited because we’re trying to make computers conform to human methods of composition: line by line, conforming to grammatical rules. It’s unfair to condemn the potential of computer-generated literature by that which currently exists; it may not even be fair to evaluate it by human standards. Just as it would be baffling to consider most modern scientific inquiry achieved without technological aid, literature may yet see another sort of Program Era.

Via Me and My Writing Machine.

III. Universal Literature

Hence in order to program a poetry machine, one would first have to repeat the entire Universe from the beginning—or at least a good piece of it.

—Stanislaw Lem, “Trurl’s Electronic Bard”

Maybe the question isn’t whether computers can be good writers, but whether writers can ever hope to catch up to computers. A core insight of Moretti’s “Conjectures on World Literature” is that no human can read all the books necessary to make meaningful claims about literature as a whole, and thus comprehensive critical insights can only be obtained quantitatively. Obviously, it takes a lot longer to write a book than to read one, so following similar logic, humanity as a whole isn’t writing all of the great books it ought to, either. A computer might produce garbage 99.99999% of the time—but in the months or years it takes a person to compose one novel, a computer might generate hundreds of millions, only one of which needs to be any good in order to match a human writer’s achievements (and all without going into debt).

So, in the interest of a more complete world literature, we might consider the feasibility of generating books as quickly as possible and selecting the books we want from the resulting corpus. This undertaking recalls two precedents—first, the “Infinite Monkey” thought experiment, which posits that a monkey banging at random on a typewriter for an infinite amount of time will likely produce any given text, such as the complete works of William Shakespeare. (The experiment has been successfully computer-simulated, albeit piecemeal and in random order.)[3] The second is Borges’s short story “The Library of Babel,” which describes a universe made of a library containing every single conceivable book. Each book is 410 pages long, and each page contains 40 lines of 80 characters; the Library exists ab aeterno, contains 25 orthographical symbols (no capital letters, numbers, or any punctuation besides the comma and period), and no two books are identical. Despite the narrator’s surmises that the Library is “perhaps infinite,” by his own description it’s unambiguously finite, and all it would take to computer-generate it is a simple combinatoric algorithm that outputs every possible 1,312,000-character string from the set of 25 characters.[4]

Once compiled, all writing would be rendered redundant, all thoughts preconceived, every utterance cliché. Might this pose a crisis of justification for writers? The Library would contain not only every book, but every piece of criticism on that book, and so on. Writing would become a pointless act of divination, like the old men in the latrines of Borges’s Library who use “metal disks in a forbidden dice cup” to try and “feebly mimic the divine disorder.”

So it is perhaps fortunate that the universe will not permit it. Borges’s Library is ab aeterno and arbitrarily vast; our universe has neither the time nor space to generate it in its entirety, within our current understanding of physics. The Library’s 25^1,312,000 books positively dwarf the ~10^80 estimated fundamental particles in the observable universe, and even if all 10^80 particles cranked out one book for every unit of Planck time (~10^43 units per second), that’d give us about 10^123 books per second, or about 10^130 books per year. Producing the Library would still take somewhere on the order of 10^1,833,967 years, at which point all matter in the universe will have long since decayed into clods of highly literate iron-56. Even given infinite time, the Bekenstein bound—the upper limit to the information density of energy—suggests that inscribing all these books into matter would make the universe collapse into a black hole.[5]
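These estimates can be reproduced in a few lines, working in base-10 logarithms, since the numbers themselves could never be held in memory (the seconds-per-year figure is rounded):

```python
import math

CHARS = 25                # orthographic symbols in Borges's Library
BOOK_LEN = 410 * 40 * 80  # characters per book = 1,312,000

# log10 of the number of distinct books: 25 choices per character slot
log_books = BOOK_LEN * math.log10(CHARS)                  # ~1,834,097

log_rate_per_sec = 80 + 43                                # particles x Planck ticks
log_rate_per_year = log_rate_per_sec + math.log10(3.15e7) # ~130.5

log_years = log_books - log_rate_per_year
print(f"10^{round(log_years):,} years")                   # 10^1,833,967 years
```

Note that even the exponent of the book count runs to seven digits; the count itself, written out in full, would be a number nearly two million digits long.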

Some far-future technological discovery might make it easier to produce and store the Library; even today, there are simulations that offer at least the experience of browsing it. But in a way, it’s reassuring to know that the endgame of writing—producing every conceivable book—would entail the end of everything; that our existence depends on an incomplete literature.

Still, we shouldn’t dismiss the project’s feasibility; we neither need nor want the whole Library. Indeed, the Borgesian Library’s sheer volume of illegible nonsense—of Babel—drives its inhabitants to despair, suicide, and mysticism. We would only need to produce and retrieve valuable books by exploiting computer processing speed. This may take the form of a massive distributed computing project, using the spare CPU cycles of idle computers to generate books by the billions.

Strictly speaking, the “searchers” in Borges’s library are actually browsers who have to manually vet preexisting books, more or less at random. We, on the other hand, would build it ourselves, which gives us tremendous advantages. For one thing, we’d have what Borges calls the “formula and perfect compendium of all the rest”: a searchable index. Unlike Borges’s roving Purifiers, we wouldn’t have to discard “useless works,” because our Versificator could just filter them out. Of course, any definition of “useless” will be controversial; as Borges’s narrator stresses, any apparently incoherent phrase is “justified in a cryptographical or allegorical manner.”[6] But then again, the unencrypted version would be out there, too.

Instead of going character-by-character, we would produce books sentence-by-sentence, in human-readable forms. We could disallow gibberish, while making provisions for proper nouns, invented words, and stylized syntax (“yes I said yes I will Yes”). To produce fiction, we’d also build in an ontology by which the algorithm understands that there are such things as objects and characters, attributes they possess, settings they’re in, actions they can perform, and so on. (These may be adapted from existing ontologies, like the serendipitously named BabelNet.) The generator might be further guided by input parameters dictating the language, perspective, characters, etc.; we could even seed the algorithm with a writing sample, spawning millions of related books, much as a Pandora playlist is fashioned from a single song. Our model here would not be Orwell’s Versificator, but the “mechanical versifier” in Stanislaw Lem’s “Trurl’s Electronic Bard,” who, being programmed with a simulation of the entire Universe, can so skillfully produce any form of poetry, traditional or avant-garde, that it causes poets to commit suicide.
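A toy version of such a generator, with a hand-rolled grammar in which the ontology of characters, attributes, actions, and settings is baked into the production rules (all vocabulary here is invented for illustration; a real system would draw on something like BabelNet):

```python
import random

# Invented miniature ontology: the rules encode that characters
# have attributes, perform actions, and occupy settings.
GRAMMAR = {
    "S":       [["NP", "VP", "SETTING", "."]],
    "NP":      [["the", "ADJ", "N"]],
    "VP":      [["V", "NP"]],
    "SETTING": [["in", "the", "PLACE"]],
    "ADJ":     [["wounded"], ["patient"], ["impatient"]],
    "N":       [["deer"], ["librarian"], ["writer"]],
    "V":       [["watches"], ["forgives"], ["invents"]],
    "PLACE":   [["library"], ["hexagon"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))
```

Each call yields a grammatical eleven-token sentence along the lines of “the impatient librarian forgives the wounded deer in the hexagon.” Input parameters, seed texts, and a genuine ontology would all hang off this same skeleton, with the grammar guaranteeing legibility where raw character-by-character generation produces Babel.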

We’re still left with a mammoth set of legible books.[7] Wouldn’t winnowing them down be just as time-consuming as writing? Here we return to the distant reading approach: let the computers do it.[8] This would require a big overhaul of distant reading’s telos—more than just categorizing a work by its formal properties, it would have to assess a book’s coherence. This is no casual feat; to indulge in some hand-wavy speculation, evaluating a literary text’s coherence may amount to a form of strong AI. Then again, literary appeal may be more quantifiable than it seems: for example, there’s an algorithm that can predict which of two tweets will be more retweeted, outperforming the average person. But even assuming you can achieve that with current technology, we still have a problem: our literary criteria would be based on current values, which would inevitably yield only more of what’s already around.

Addressing that problem of precedent—which exists with or without computers—would ultimately fall to human operators, who would devise new search criteria to retrieve books unlike any that exist. They will also play an editorial role: any readable book that’s discovered is likely to be in rough shape, and, like Egyptologists digging up ancient relics, it’ll take some labor-intensive restoration, though of course they needn’t worry about preserving intention. So post-Babel literary producers would train the algorithms to produce and retrieve books meeting self-defined criteria of quality, find the most promising ones, and edit them into their final form; whether this constitutes authorship or mere editing is a question whose answer is likely to evolve along with social norms.

Borges’ Library of Babel by Erik Desmazieres

IV. E Pluribus Pluram

The plural of anecdote is data.

—Raymond Wolfinger

Now suppose every logistical challenge is met and the auto-generation of literature becomes viable. How does it change our relationship to literature to know that it’s been generated without human intention, and that one could just as easily retrieve or generate a book as write one? It’s one thing to pretend the Author is dead; how about an author who was never alive to begin with?

It’s easy to see why people might reflexively prefer books written by people. We value literature as an expression of human effort, synthesizing memory and imagination and education, which in turn reflects the society and times in which it’s produced. At its best it marks the achievements of our species, defines identity, and conveys ideas; finding something recognizable in another person’s writing can make us feel connected to a common human experience, or conversely expose us to different perspectives. Literature is communication, a self-portrait of consciousness or artistic vision. Without a consciousness to articulate, what’s the point? Reading a computer-generated book would be like dating a robot. So goes the argument.

But why stack human authorship against computer generation at all?⁠ First of all, the Babel books described above are products of human intention, since any book that finds its way to a reader will have been curated. Writers have always been resourceful about indulging their impulse to tell stories, whether or not the stories were “their own”—transmitting them orally, inventing them in fiction, hunting for them in journalism, embellishing them in adaptation, even flat-out plagiarism—and that hasn’t stopped readers from enjoying them.

Even assuming that we’re talking about computer-generated books produced without human intervention, there are some conventional ways we could carve out a modest space for them. We could go the route of the Turing test and declare that the criterion for literary value amounts to the machine’s ability to convince people it has literary value. Or we could argue generated and authored books are non-overlapping magisteria, and the fact that machines are better at chess and worse at facial recognition simply doesn’t matter, it’s apples and oranges. A computer-generated book’s supposed deficiencies may even have benefits, for instance, by eliminating annoying biographical criticism (“Was Nabokov secretly telling us he was a pedophile?”).

Borges won’t settle for anything less than a categorical dismissal of generated writing, however. With characteristic foresight he anticipates our project in “Note on (toward) Bernard Shaw,” citing Kurd Lasswitz’s “staggering fantasy of a universal library which would register all the variations of the twenty-odd orthographical symbols, in other words, all that it is given to express in all languages.” Borges disdains all efforts at “making metaphysics and the arts into a kind of play with combinations,” and continues:

Those who practice this game forget that a book is more than a verbal structure or series of verbal structures; it is the dialog it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory. This dialog is infinite… Literature is not exhaustible, for the sufficient and simple reason that no single book is. A book is not an isolated being: it is a relationship, an axis of innumerable relationships. One literature differs from another, prior or posterior, less because of the text than because of the way in which it is read.

His remark needs to be parsed. When Borges refers to a “dialog,” he is of course referring to the dialog engendered in the reader’s mind, which doesn’t require a sentient author; it’s also not clear why computer generation would render the images in a piece of writing any less malleable, memorable, or capable of imposing an intonation on a narrative voice.

His strongest point is that a computer-generated novel would be an “isolated being” that lacked the meaningful “relationships” of an authored novel. By this view, authored writing is enriched by its refraction through the unique circumstances (social, historical, personal) of its creation, the intertextual connections between an author’s books, and so on. (Consider how the phrase “I love you” varies depending on who said it, why, to whom, in what context.) Computer-generated writing, on the other hand, only leads, “in the best of cases, to the fine chiseling of a period or a stanza, to an artful decorum… and in the worst, to the discomforts of a work made of surprises dictated by vanity and chance.” Borges goes as far as to declare that a particular statement derived through automatic processes would “lack all value and even meaning.” To adapt a line from Beckett: no symbols where there is no intention.

Fittingly, we can take another of Borges’s writings, “Pierre Menard, Author of the Quixote,” as an enactment of this outlook. A critic lauds the masterpiece of the 20th-century writer Pierre Menard, who reproduces parts of Don Quixote word-for-word; the critic asserts that Menard’s identical version is superior to the original, because of the context in which it’s written (e.g. Cervantes’ Quixote is a contemporary novel written in natural Spanish, whereas Menard’s is a historical novel in an affected archaic style). Much as the story offers dozens of impressively effective arguments to this end, the notion of a critic lauding a perfect replica of an existing text never really stops feeling ironic and winky. On some level we feel that two identical linguistic constructs must share at least a few essential things in common. Is that just unsophisticated?

Here it will be useful to recall Roland Barthes’ famous essay, “The Death of the Author.” Whereas Menard’s critic argues that the Quixote’s significance is managed through its author’s identity, which provides the “axis of innumerable relationships” through which we understand the text, Barthes argues that “to give an Author to a text is to impose upon that text a stop clause, to furnish it with a final signification, to close the writing.” Barthes—or rather, his essay—would probably locate a book from the Library of Babel alongside Mallarmé and Valéry, whose stance was that “it is language which speaks, not the author.” (We can also throw in T.S. Eliot’s “impersonal” poetics and New Criticism.)

Given this seeming opposition, it’s surprising how similar Borges and Barthes’ premises are. When Borges insists a book is not just a “verbal structure” but an “axis of innumerable relationships,” he sounds a lot like Barthes, who claims that “a text does not consist of a line of words” but “a space of many dimensions.” Borges praises Bernard Shaw not for his strong authorial presence but for his impersonal “nothingness,” from which he “educe[s] almost innumerable persons,” while Barthes argues that the Author “is in no way supplied with a being which precedes or transcends his writing.” Even Borges’s claim that a book is primarily distinguished by “the way in which it is read” inadvertently echoes Barthes’ sentiment that “the true locus of writing is reading.”

With all that common ground, maybe the two viewpoints can be reconciled. In cryptic fashion, Barthes argues that with the Author’s removal, criticism takes “as its major task the discovery of the Author… beneath the work.” His description of how to go about this is very open-ended (as he probably, ahem, intended), but Borges supplies us with a concrete method in “Pierre Menard.” Just as Borges devises the fictional Pierre Menard—and arguably the fictional critic “Jorge Luis Borges”—to invest Don Quixote with new meaning, it’s easy to imagine a field of hypothetical criticism, in which authors are not just studied but created. By this approach, a text’s authorship is treated as a fluid attribute, rather than a rigid historical fact, such that one would begin by asking of any authorless text: who could have written this? What if Borges had? Or Pierre Menard? Or I? With the positing of multiple authors, the text could even be, as in Pierre Menard, compared to itself. Many necessary authors that never existed might be found this way; if Borges did not exist, we would have to invent him.

Post-Babel writers have a new task. Since, as Barthes says, “the writer can only imitate a gesture forever anterior, never original,” no book will be distinguished by uniqueness. Instead a writer will produce not only a text, but also the context to enrich a particular work, a context that may or may not be fictional. This pushes literature deeper into the realm of performance and relational aesthetics, making a major art form of the pseudepigraph. In their eternal redundancy, many Authors, in many forms, return from the dead.

Tony Tulathimutte’s novel Private Citizens is out this week.

Book photograph by Bucky Miller.

Acknowledgements

Much credit is due to Alyssa Loh at the American Reader, who oversaw the essay through several drafts. Alex Walton contributed several insights into computer-generated poetry, and Eugene Fischer helped with understanding and calculating the physical limitations of creating the Library of Babel, which he treats at greater length in “The Bookbinder’s Guide to Destroying the Universe.”

[1] There are other approaches—for example, a crowdsourced database of alternatives. But who’d construct that database? It could be harvested from an online game similar to the satirical TrademarkVille, in which players coin compound-word alternatives for common words (like “longblade” instead of “sword”), or guess the meaning of user-submitted synonyms. A rating system could further quantify the appropriateness of each synonym.

[2] Chris Westbury, the creator of JanusNode, has contributed the only Amazon review to date, whose text is fittingly supplied by JanusNode: “This work, at once glowing and expressionist, brings into our awareness the impactful harmony of psychoanalysis, represented here as a pointillist mixing of blood-letting and an inability to listen to others…” His rating: five stars.

[3] A university study demonstrated that real-life monkeys will mostly type the letter S, smash the typewriter with a rock, and “use it as a lavatory.”

[4] (or 128 characters to include the basic ASCII set)
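For a sense of scale, a rough Python sketch (the 410-page, 40-line, 80-character book format comes directly from Borges’s story; the helper name is my own) can count the decimal digits in the number of possible books under each alphabet, using logarithms rather than materializing the million-digit integers:

```python
import math

# Borges's book format: 410 pages x 40 lines x 80 characters per line
CHARS_PER_BOOK = 410 * 40 * 80  # 1,312,000 characters

def digits_in_book_count(alphabet_size: int) -> int:
    """Decimal digits in alphabet_size ** CHARS_PER_BOOK, via log10."""
    return math.floor(CHARS_PER_BOOK * math.log10(alphabet_size)) + 1

print(digits_in_book_count(25))   # Borges's 25 orthographic symbols
print(digits_in_book_count(128))  # the full 7-bit ASCII set
```

With Borges’s 25 symbols the count of distinct books runs to roughly 1.8 million digits; with the full ASCII set, roughly 2.8 million.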

[5] Fittingly, the same principle invites us to see the universe as fundamentally composed not of matter and energy, but of information.

[6] It’s fun to wonder exactly how the Library of Babel’s searchers even codified a written language to begin with, given that nobody seems to write and the books follow no coherent rules. The narrator even leaves open the possibility that the story itself was never written at all, and is just one of the found texts within the Library.

[7] We’re concerning ourselves mainly with fiction and poetry, since the burden of fact-checking involved in producing viable non-fiction is impractical. However, if algorithmically generated non-fiction caught on, it might simply relieve the genre of its cumbersome expectations of factuality and frankness, to whatever extent these expectations exist.

[8] Consider that computers are already doing most of the reading and writing: every email that’s sent and every word published in print or online is processed and indexed by computers; an estimated 97% of all emails are computer-generated spam, most of which is “read” and filtered out by other computers; and data-scraped websites and autogenerated content make up ever-larger percentages of the Internet. And why exclude the huge amount of machine code that’s read when executing the most basic of operations, just because it’s not human-legible?
