For fewer but more perfect blooms,
experts say to prune the buds.
—Sara Wainscott, Insecurity Systems
The ideas in this essay have not led me to quit or walk away from any role or job I currently hold. But I’ve thought about it.
In the summer of 2020 I rejoined a literary organization best known for its annual book awards. I had served on the board from 2010 to 2013 and loved it for the camaraderie and sophisticated discussion, but decided not to run for another term, because my partner and I had a second young child. I came back because current board members appointed me to one of several vacancies caused by disputes and resignations. The disputes, in turn (all of which are public now), had to do with alleged racial bias in how we judged the awards. Some members wanted to do more to fight white supremacy in a year marked by Black Lives Matter. Others objected vociferously to anything that would distract, in their view, from individual judgments about individual books. (Put as broadly as possible, I think that the former were right, and the latter were wrong.) Parts of those disputes—supposedly confidential, according to board rules—wound up on Twitter. No one was happy after that.
Such disputes were no surprise. If you look at literary awards in America over the past ten years, especially in my field of poetry, you will see a dramatic shift: from mostly white committees giving most prizes to white poets, to committees (some still mostly white) frequently honoring writers of color, especially African Americans. From 1991 to 2000 one of ten Pulitzer Prizes in poetry went to a poet of color; from 2001 to 2010, one; from 2011 to 2020, five. The equivalent numbers for the National Book Award are two, two, and six; for the Los Angeles Times Book Prize, zero, zero, and four.
I support this shift. There’s a case to be made (and elsewhere I have made it) that the best books of contemporary poetry, judged by the most appropriate standards, just are by writers of color these days. But there’s also a case to be made—and the backlash against this shift inadvertently makes it—that there is no race-neutral, in fact no background-neutral, way to frame or evaluate literature. Some poets and poems have power and beauty each year, and there’s a lot worth reading, but which are your “best” depends on what you believe, what you want, how you’ve been trained, what you can hear, who you are. In the strong sense of “aesthetic” and of “disinterest” made legible by Immanuel Kant, or at least by popularizations of Kant, we can have no such thing as aesthetic disinterest, though we can (perhaps pigheadedly) try to approach it.
Is there really a best book of poetry every year? A best single poem (the Forward Prize, in the UK, claims to pick one)? If not—and there isn’t, at least not in the verifiable way that there’s a tallest building or a speediest marathon time—isn’t it hypocritical to argue as if there were? It’s great that some poets get more attention because these prizes exist, but is there a better way to say why we give them? The National Book Foundation, which gives the National Book Awards, says its mission is “to celebrate the best literature in America.” The Windham-Campbell awards—a six-figure money-pot bestowed through Yale—merely “call attention to literary achievement and provide writers with the opportunity to focus on their work.” (A skeptic might point out that they pick winners anyway.)
The English professor James English (not a typo) begins The Economy of Prestige: Prizes, Awards, and the Circulation of Cultural Value (2005) by rehearsing the common argument that we now have too many literary medals, “that the cultural universe has become supersaturated with prizes.” Arts prizes behave like Olympic sports, encouraging spectators to view them all as “explicitly competitive”: people bet on who will win them each year. “Modern cultural prizes,” English remarks, “cannot fulfill their cultural function unless authoritative people . . . are thundering against them.” But most of those people ask whether literary prizes go to the wrong winners, or whether too many exist. I want to know, instead, whether we should have any at all, and whether I do harm by getting involved. Should I, as a member of dominant groups and disfavored ones (for example, I’m white, transgender, gregarious, well-to-do, a woman, and probably neurodivergent), take part in a final prize judgment? If not me, who? These questions may start—in the United States, in the 2010s and 2020s—from race, but they do not end there. Should anyone be self-consciously naming Bests? If nobody does, will markets and publishers do it willy-nilly? Won’t that be worse?
_____
In 2016, when I was already out to friends as transgender, but before medical transition, I got seriously interested in women’s gymnastics, a sport that celebrates—or at least purports to celebrate—the bodies of women and girls, not for what we are, but for what we can do. Watching the Rio Olympics, I found myself cheering for young women like Simone Biles, Aly Raisman, and Gabby Douglas, who had disciplined themselves into accomplishing feats almost nobody on Earth could perform. They were like the teen superheroes in some of my favorite comics, except real. And like teen superheroes, they had to keep winning, not in fights against supervillains, but in the endlessly stressful competitions that brought them to the medal platform and to the Olympic stage.
Maybe there never should have been a stage. The former gymnast and journalist Dvora Meyers, author of the terrific history of modern women’s gymnastics The End of the Perfect Ten: The Making and Breaking of Gymnastics’ Top Score from Nadia to Now (2016), looked at the postponed Olympics in August 2020 and concluded, in a long and persuasive essay published by Longreads, that the games simply should not take place ever again. For Meyers—who drew explicit links to Black Lives Matter—each Olympic locale’s preparations “bring misery to the already vulnerable,” as governments demolish neighborhoods to put in stadiums; “the pandemic, police brutality, and the Olympics are not unconnected events,” she argues. The same winner-take-all mentality that leads municipal governments to overlook everyday violence against Black and brown bodies licenses urban destruction, limitless rent-seeking, and public expenditures for private gain.
No wonder, starting with Denver in the 1970s, “residents of potential Olympic host cities have voted overwhelmingly to reject the Games.” That’s in democracies on multiple continents (dictatorships still want to host). American gymnastics, moreover, has more specific troubles (which Meyers has also covered): since 2016 brave athletes, journalists, and activists have upended the sport by exposing a conspiracy to cover up horrifying sexual abuse so that Team USA could keep winning. The highest-profile abuser, the team physician, benefited from a culture in which, as one former athlete told the makers of the 2020 documentary Athlete A, “emotional and physical abuse was actually the norm.” Biles said in 2020 that if she had a daughter she would not let her get involved with USA Gymnastics now.
The Olympics seem uniquely destructive and uniquely wasteful, but they also look like twenty-first-century America, and not only America, sped up and writ large. Even if every dollar or euro or baht allotted to every future Games goes efficiently toward the televised spectacle and no one is left without a playground or a home, even if individual predatory coaches and doctors are kept far away, the Olympics are still about elites, about winning, about taking the best of the best and letting them face off over and over, while—in judged sports like gymnastics and figure skating—authorities grade and score them. Is that what sports—or, for that matter, art forms—should do?
Is it what schools should do? If you teach, you’ve probably encountered a student like Jean, who is a composite. Jean’s brilliant in conversation, and early in their school career they wrote brilliant papers too: they may be a committed writer or artist in fields where teachers have little sway, such as fan fiction. Peers may seek them out as a beta reader. And yet as a deadline approaches—as it comes time for Jean to do work meant to be evaluated by adults—Jean freezes up and nopes out.
Maybe Jean clams up, or cuts class. Maybe Jean—like the talented aunt in Robert Lowell’s Life Studies—skips their recital. Maybe they’ve succumbed to “perfectionism,” which names the problem without providing an etiology or a remedy. Maybe they’ve got an anxiety disorder; they might benefit from an SSRI (I do) and from talk therapy. Maybe they’ve succumbed to Gifted Child Syndrome, or to the phenomenon of the Burnt-Out Gifted Kid, described by the radical philosopher Tom Whyman in a 2018 article for The Outline. And maybe they’re also reacting in a healthy way to our social and educational insistence that talent, merit, excellence ought to be measured, that the most important ways to look at talent involve its relation to scarcity, to unequally distributed rewards.
Sometimes the scarcity is real. Knopf and Carcanet and Wesleyan University Press can publish only a certain number of books each year. Yale can only take so many students: surely they ought to work to discover the best? The more students want to go to Yale, the harder Yale has to work to make the right choice, so the harder the students work, right? And once they get in, an A ought to mean something, right?
Maybe not, or not all the time. Dutch medical schools until 2018 selected some applicants (those who met minimum academic standards) by a weighted lottery, rather than seeking the Very Best. Aotearoa New Zealand has almost five million inhabitants and eight universities: some excel in particular disciplines (literary writers tend to cluster at Victoria University of Wellington) and all require applications, but there’s no such thing as the Stanford of New Zealand. Even the most ambitious undergraduates tend to select places based on regional preference or on the kind of lifestyle they prefer. When I lived in New Zealand for a few months in 2016–17, university faculty asked me how I’d describe the United States; my short answer was that the U.S. is good if you’re winning, but it also insists that you win or lose.
I’d rather grow up to be Jean—and much rather have my kids grow up to be Jean—than have them grow up to be, say, Andrew Cuomo, the handsy, combative martinet who remains (as of July 2021) the governor of New York. I’d hate to have them grow up to resemble, in any way, the authoritarian kleptocrat who was, from 2017 to 2021, the president of the United States, a man who promised on the campaign trail “so much winning that you’ll be tired of winning.” Both men—as one New York Times reporter put it—saw politics as a zero-sum game. You can win an election without that mentality, of course, and admirable politicians like John Lewis, Paul Wellstone, and Jacinda Ardern have, but it’s a mentality we unwittingly foster: government as a meaner, higher-stakes Olympic Games.
Ardern, the Labour Party prime minister of New Zealand, won international fame in 2020 by rallying her “team of five million” to obey quarantine rules and nearly eliminate Covid-19. “Everything was about collaboration, working together, positive language,” microbiologist Siouxsie Wiles told reporter Noah Smith, “rather than fear.” New Zealand does love its competitive sports (especially the rugby All Blacks). But it also boasts social insurance superior to what we see in the United States, starting with single-payer health care. And here it is no exaggeration to say that the American devotion to competition kills. Conservative arguments against basic social insurance, like the Affordable Care Act or the Biden administration’s stimulus program, can rest on the very American axiom that basic income, comfort, and food security are things you need to compete for, to win, to deserve. The liberal philosopher John Rawls argued that to build a just society, we need to ditch our concepts of desert: conservative philosophers have been defending desert ever since.
It’s tempting to say that our modern worship of winning grew from—and might be defeated alongside—rapacious modern capitalism. Free market fundamentalism surely makes everything worse. Whyman blames it—and the financial crisis of 2008, which made jobs harder to find—for student burnout: his solution is to “radicalize the gifted kids.” Yet we can find warnings about the dangers of winning long before, and outside, the boundaries of that economic system. We have only to read about late-twentieth-century Romanian gymnastics, or East Bloc track and field, to see the cost of Olympic winning in countries supposedly run on another plan. Athlete A, which examines the gymnastics sexual abuse scandals, suggests that abusive methods for training young gymnasts grew from the other cruelties of Romania’s totalitarian Ceaușescu regime. Shakespeare’s play Coriolanus follows a man who cannot live without the rewards that (he believes) come with his victories and makes speeches about “mine own desert.” The Iliad begins in a struggle over a prestigious, symbolic first prize (who happens to be a human being, the maiden Briseis); Achilles’s sense of unrecognized arete—desert, honor, merit—becomes the source of his poem-driving menis—wrath.
Pressure to win seems much older than the United States. And yet in the modern United States it can seem to be, not only everything, but everywhere. The psychologist and social critic Alfie Kohn has devoted much of his career to railing against the use of behaviorist criteria—awards, punishments, gold stars, cookies, winners and losers—all over the Western world, from school grades to refrigerator reward charts to workplace incentive schemes. Intense competition, reward and punishment, winner-and-loser situations, Kohn finds—with research to back him up—reduce effort from people who think they’ll lose, quash enthusiasm, and generally hurt everyone. “Competition holds us back from doing our best work,” he writes. At best, we game the system. We teach to the test. At worst, rewards—and punishments, since winning implies a loser—cause people to work less, or to work in bad faith, or to run away. “The only thing worse than scrutinizing people’s performance, evaluating it, and making them worry about deadlines,” Kohn writes in Punished by Rewards (1999), “is to cause a reward or punishment to hinge on the outcome. And the only thing worse than that is to set up the activity so that one person can be successful only when someone else is not.”
When winning is the point—when competition is intrinsic to the activity—small rule changes can change almost everything. The End of the Perfect Ten in large part follows rule tweaks in elite women’s gymnastics, their unintended effects, and the efforts to reverse them in some precincts of that sport. When winning isn’t supposed to be the point, but it becomes the point, what should be a cooperative effort becomes a tragedy of the commons. The once-mighty department store Sears collapsed in the early 2010s in part because free-market fundamentalists took over the company and imposed a system by which department heads and managers had to compete directly with one another. “As profits collapsed,” write the reporters Leigh Phillips and Michal Rozworski in People’s Republic of Walmart (2019), “divisions grew increasingly vicious to one another.” They couldn’t work together to save their livelihoods.
And they were adults: Kohn is hardly the first to notice how quantified measures of school success or failure, designed to promote the very best, instead discourage most of the children involved (see also John Holt’s often-reprinted 1967 essay “How Teachers Make Children Hate Reading”). Conversely, some kids give all of themselves when and only when no adult can judge. The online worldbuilding and engineering game Minecraft may have become so popular among kids born in the 2000s because it’s a space for creation—and for self-paced visible competition—that most adults will not reward, or attempt to evaluate, or even understand: in this respect it duplicates, and makes directly visible on-screen, the principle by which so many youth cultures evolve.
Winning may be addictive, moreover, in something like the medical sense of the term: it generates tolerance—once you win you need to win more to get the same buzz—and conditions withdrawal—no other reward will do. The charismatic basketball great Diana Taurasi—former Olympian, former University of Connecticut star, now with the WNBA Phoenix Mercury—“hates to lose more than she loves to win,” in the words of her team’s general manager. “I go to sleep every night thinking I’m not good enough,” Taurasi has said. Winning’s addictive properties propel the televised series The Queen’s Gambit, based on the novel by Walter Tevis (a man familiar with real-life addictions). In the tv show, Anya Taylor-Joy’s character Beth must learn to accept friends’ help and to play for the sake of play before she can (wait for it) win against the Soviet world champ.
_____
The point I’m trying to raise—and maybe it’s obvious by now—is that the problem of the predatory Olympics, the Red Queen’s Race of top college admissions, the problem of hypocritical or less-than-disinterested or racist literary prizes, the problems of gifted-kid burnout, the problem or non-problem of grade inflation, and several other problems to boot (the familiar local-news problems of yelling, combative Little League dads, for example) are in some sense the same problem: the problem of institutions, social lives, and personal pleasures built around winning.
Nobody wants to live in Kurt Vonnegut’s Monkey House. There’s nothing wrong with celebrating things you like and the people who make or do them, nor with working hard to develop a talent, nor with reserving genuinely limited places for those who can do the work. Only a certain number of people can do research math, for example. Literary awards steer spotlights to works and authors that might otherwise go unnoticed by many readers who might fall in love with them. Spectators’ delight in Biles and Douglas and Taurasi is genuine (unless the spectators are fans of their opponents). And it’s hard to envision a society with plenty of leisure time and resources but no kind of competition at all. The problems arise when the institutions designed to figure out who should get places, and to train everybody for some place, and to encourage human effort and human flourishing, default to competitions with rankings and winners, and when activities with grades and rankings and competition and winners—from AP Physics to the WNBA Finals—seem consistently more important than activities that do not provide those things.
Kohn’s solution to this disaster is what he, and like-minded psychologists, call autonomy support. Emphasize shows and exhibitions, not medals; responses, not numerical grades; encourage what looks like growth, not like staged combat. “It is a mistake,” Kohn writes, “to talk about motivating other people. All we can do is set up certain conditions that will maximize the probability of their developing an interest in what they are doing and remove the conditions that function as constraints.” To the query “what can be done about inherently boring work?”—about, for example, garbage collection and handling—Kohn answers that it need not be so boring: sociologists found that garbage collectors in San Francisco, working for a cooperative, enjoyed their jobs. “It is premature to assume that certain categories of work are inherently distasteful and will be pursued only for extrinsic reasons.”
I am not qualified to say whether Kohn’s vision of a society without grades or scores could ever be fully implemented anywhere. What I am qualified to say—what I am saying: what links the evils of the modern Olympics to literary criticism, to literary prizes and to A-to-F classroom grades—is that I’m tired of losing and tired of winning, and that we all lose when we focus so often on prizes, grades, and final scores.
The terrific fantasy novelist Mishell Baker lives with borderline personality disorder, which renders her extra-sensitive to rejection and praise. She tweeted, late in 2020, “I would give up all the things I ever dreamed of as a kid, forever, if I could just walk out into the world, either in real life or online, and hear something other than a chorus of unending pain. If I could see happiness out there that wasn’t based on ‘beating’ someone else.” She decided in 2020 to stop writing novels: the process had simply become too much for her.
Even for those of us without Baker’s particular mindset, there’s something corrosive and horrifying about the realization—in childhood or in adulthood—that our success is someone else’s loss. “By God! I will accept nothing which all cannot have their counterpart of on the same terms,” Walt Whitman declared in “Song of Myself.” And yet he published, anonymously, laudatory reviews of his own book. He wanted to get recognized as the best, the newest, the most innovative American, but also to overturn the artificial scarcity of a literary economy in which there could be only one best. So did Adrienne Rich, who opened her controversial 1996 edition of the annual Best American Poetry with a quotation from Whitman’s “Calamus.” Rich explained that her selections “are not, by any neutral or universal standard, the best poems written, or heard aloud, or published in (North) America during 1996.” A peeved Harold Bloom excluded all Rich’s selections from his 1998 Best of the Best American Poetry anthology.
_____
I write as part of the problem. My teens were shaped in part—and I enjoyed it—by the delightfully ridiculous sport of quiz bowl, which combines quick recall of school-friendly facts with videogame-style reflex tests. In the tv version called It’s Academic, “the longest-running tv quiz show in the world,” high school teams win by buzzing in first and correctly guessing what the adult host wants them to say.
In my first year of quiz bowl, we lost all the time. Then my friend Z, who is and was at least as competitive as I am—and more diligent and more practical—took it upon himself to train us. We learned lists of British kings from 1066. We learned to identify the twenty-fifth president of the United States, along with the name of his assassin. We learned titles and plot summaries for decanonized novels (like Edna Ferber’s Giant) that we would never read, and names for international landmarks we would never see. After a year with Z as our leader, we won and won and won before we lost. The year after that (twelfth grade for me), we won the whole televised tournament. By that time Z was in college. It wasn’t fair.
It was, however, comparatively harmless, set beside what actual grading and classroom pressure did to some of my peers then, and to some of my students at Harvard, where I teach now. Harvard’s a strange pressure cooker of a place, where students and teachers motivate one another, sometimes out of curiosity, sometimes out of an excess of twitchy energy, and sometimes in the unreasoning and counterproductive pursuit of competitive excellence. My colleague Helen Vendler—who has attracted opprobrium in some quarters as a proponent of aesthetic judgment—has complained about the kind of competition that Harvard admissions encourage, writing in 2012 that “many future poets, novelists, and screenwriters are not likely to be straight-A students, either in high school or in college. The arts through which they will discover themselves prize creativity, originality, and intensity above academic performance.” And yet if they want to come here, they have to get—or at least think they have to get—As.
And if they come back as professors—as I did—they have to run a gauntlet I did not understand until I saw it in action. There is a case for assessing, as many schools do, the merit of new professors’ writing and teaching before allotting them a job for life. There’s a case for looking harder at that writing, and for wanting more of it, in schools that expect more writing overall and in schools that attempt to train college professors. There is, however, no good case to be made for a system where tenure means experts must pretend to agree that you are, not Good, but the Best. If your field has no consensus—or if the best prefers to remain in Durban or Nashville or Wellington—both you and Harvard may be out of luck.
I was in luck. Not until the pandemic of 2020, which left us all stewing at home—and left me watching middle and high school teachers triumph and flounder with kids online—did I realize both what I loved about the classroom and what had already failed me and failed them: the notion that in school we pursue together not only ideas or creations but the ability to prove, separately, our excellence, that we come to school to judge and be judged.
And yet we go on. We teach, we do our jobs, and since our jobs require grading, we grade. Am I grading for effort or for mastery? Do I reward experiment and daring or am I judging only the end results? If a student takes risks and falls flat, what good will a low grade do? Who gets the gold medal, the bronze medal, no medal? What’s the best book of the year? The best poem? Who am I to say? How long do I get? Should I show my work? Why am I doing this?
If “this” means teaching and reading and writing, I do it because I love it. It brings me joy, as well as an income. If “this” means grading and evaluating students on a numerical basis and listening to my well-meaning colleagues yell at one another about grade inflation, I do it because I keep my promises and honor my contract. And if “this” means picking winners (and, therefore, losers), taking part in what used to be called canon formation, and competing, myself, for scarce eyeballs and scarcer airtime, I do it because I still want people to read Terrance Hayes and Hazel Hall and Sarah Morgan Bryan Piatt and George Herbert and me. But I rarely write negative reviews: it feels too much like punching down.
“One finds it unbearable,” wrote my early idol Randall Jarrell, “that poetry should be hard to write: a game of Pin the Tail on the Donkey in which there is for most of the participants no tail, no donkey, not even a booby prize.” Jarrell got famous in the 1940s for writing scathingly negative reviews. By the end of the 1950s, though, most of his prose was praise for poets like Elizabeth Bishop (whom he helped “discover”), Walt Whitman, and Robert Frost.
And yet he was still picking winners. I am too. Arguing for one or another poetry book as the best of the year, or picking poems for publication in a national political magazine (as I did with a terrific co-editor from 2017 to 2020), or giving some students As and others Bs, means allocating a scarce and socially constructed resource. It might do more good than harm, and it’s certainly not going to level city blocks and leave tens of thousands of people unhoused, the way the Olympics seem to do, but it still amounts to competition. It’s a bit like quiz bowl, and very much about winning.
_____
What isn’t? Writing collaboratively. Reading collaboratively. Cooking for loved ones, though I’m not a great cook; doing things for others, more generally. I want to do something active, productive, creative, that’s not about scarcity, or head-to-head competition. I want to do something that’s not about gaining merit in the eyes of authority, or earning high scores from the judges, or pretending malgré moi to serve as a Rhadamanthine judge.
In this post-vaccine year of 2021 I want to do something that involves sharing discoveries, making inventions, solving problems with stories or words. And—since there’s still a pandemic, and I live in New England, and I am a social person disinclined to wilderness hiking—I want to do something that I can do, remotely if needed, alongside other people, indoors.
I have found that thing, and it’s tabletop role-playing games, a.k.a. TTRPGs. TTRPGs use rulebooks, written scenarios, dice rolls or card draws, and—the heart of the game—hours-long, multi-part conversations to simulate characters’ adventures. The best known, and likely the oldest, is Dungeons and Dragons, which emerged in the early 1970s as an outgrowth of recreational tabletop war simulations. Wargamers Gary Gygax and Dave Arneson wanted to operate individual characters rather than armies, first in combat, and then in between combats, and the game mechanics they built for the purpose created a new form of collaborative storytelling. In most TTRPGs one person (the GM, game master, or DM, dungeon master) creates the scenarios, the minor characters, and the surrounding world; the other people pretend to be, or otherwise play, individual characters. The GM may have a rough sense of where the plot should go, but the characters’ decisions—and the resulting dice rolls—can change it. (“You’re trying to convince the blacksmith to make you a magic shield: that’s a persuasion check—roll one twenty-sided die.”)
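For readers who have never rolled that die, here is a minimal sketch, in Python, of how such a persuasion check resolves under a common d20-style convention; the +3 modifier and the difficulty of 15 below are hypothetical numbers chosen for illustration, not drawn from any published rulebook.

```python
import random

def persuasion_check(modifier: int, difficulty: int) -> bool:
    """Roll one twenty-sided die, add the character's persuasion
    modifier, and compare the total to the GM's difficulty number."""
    roll = random.randint(1, 20)  # the twenty-sided die
    return roll + modifier >= difficulty

# Hypothetical numbers, for illustration only: a +3 modifier
# against a difficulty of 15.
if persuasion_check(modifier=3, difficulty=15):
    print("The blacksmith agrees: the shield gets made.")
else:
    print("The blacksmith refuses, and the story bends another way.")
```

The die, not the script, is the point: the outcome stays unknown to everyone at the table, the GM included, until it lands.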
Re-publicized by the tv show Stranger Things, D&D continues to attract adults and kids who seek swords, spells, quests, fantasy settings, and combat. Other TTRPGs—some with no combat mechanics, or no dice rolls, or no GM—fit players who want to tell other kinds of stories, or to focus not on heroic fights but on how characters fall in love or grow wiser or learn to feel at home. While D&D and other popular games are big business, experienced players also design independent games and crowdfund them: Coyote and Crow, for example, is described on Kickstarter as “a science fiction and fantasy tabletop RPG set in a near-future where the Americas were never colonized, created by a team of Natives.”
Despite the name, TTRPGs flourish online. In one biweekly game I’m a disillusioned mandolin player roaming among intricate cities, accompanying a winged magic wedding planner and a fire-breathing mortuary guild member. In another game I’m a young queer sorcerer with mysterious links to the sea, using her new powers to put out fires (our gregarious barbarian friend starts the fires). And in another, non-D&D game, Masks, I’m the daughter of a top-tier superhero, trying to live up to her powerful legacy, while using my matter-shaping abilities to fashion gifts and trinkets for my friends. A few weeks ago my Masks pals and I protected a sentient space station from malevolent crabs, learned space first aid together, and then had to make do with snacks of canned space pizza. My character had a moment of triumph when she convinced a sitcom-obsessed artificial intelligence to let us leave the space station (“remember the episode of Brooklyn Nine-Nine where Jake’s not allowed to leave the apartment? That wasn’t fun for him, right?”). She then had a sad, moody fight with her pushy mom.
Masks—as its designer, Brendan Conway, explains—creates “a story of uncertainty and discovery.” “The moves point to places where no one knows what happens next.” Like real life, and unlike a novel, the games proceed as things undertaken together: it’s tempting to say that they simulate—as single-author works can never simulate, as competitive games can never quite simulate—Hannah Arendt’s notion of action, in which human beings occupy space together and no one knows in advance what will unfold. (I don’t mean that Arendt would play D&D, though I suspect that a young Randall Jarrell would.)
Tabletop role-playing games are the anti-Olympics, or maybe the anti-Harvard: designed for participants, not for public kudos, non-scalable, cooperative, and not directly linked to real-world success. Not even the recently popular phenomenon of Actual Play podcasts (which let you eavesdrop online as podcasters play games) can give you the feeling of taking part. If you do want to take part, all you need are friends, paper, dice, and a virtual table, and if you can find the right table at the start of the game, by the end you will likely have friends. Your character can win fights with zombies or win foot races or win the love of a charismatic gnome, but you, as a real person, cannot win or lose. Even if your third-level druid gets their butt handed to them by a deadly beholder, you yourself haven’t failed, or been given a C. (PSA: Third-level druids should not fight beholders.)
TTRPGs seem to me—and in some ways they’re still new to me—not only a good time with friends, but a paradigm for the classroom (I have started to use them in teaching) and for sociability and for real life, not as a way to replace competition entirely (what Kohn seems to want) but as a way to work alongside and against it. They are, perhaps, a paradigm for literary reading too: we can ask not “Who is right?” or “Whose claim wins?” so much as “What can we do together?” or “Why do we care about this?” or “What now?” In doing so we might, following Rita Felski, do justice to the reasons that poems and novels and so on “hook” readers in to begin with: poems do not so much compel admiration as lend us reasons to become attached, give us ways to participate in them, even to role-play. “Any case for art,” Felski writes in her recent book Hooked: Art and Attachment, “cannot brush aside the salience of first-person response.” There would be no poems, and no prizes for poetry, if people did not, at some point, choose to read them in their free time.
The eminent game designer Monte Cook, who has worked on Dungeons & Dragons, reminds newcomers (in his 2019 tome Your Best Game Ever) that “although it’s a game, there aren’t winners and losers. Playing an RPG is about creating a story as a group. It’s not about beating the GM, and it’s absolutely not about beating the other players.” It can’t be. There’s no shared measure of success, unless (and it’s a big unless) the measure of your success as a tabletop role-playing gamer lies in how much fun the other players have, how much we learn about one another and about the worlds (including the real world) that we build and share. As the experienced GM Ceci Mancuso has put it (quoting the oral lore of earlier gamers): “The Golden Rule of RPGs is to always make sure other players are having as much or more fun than you are.” That, at the moment, seems like the big win for me.