
The History of the Future


Here are the transcript and slides of the talk I gave today at CUNY. Well, not at CUNY, exactly, since the event was held online. The conference was called “Toward an Open Future,” as you might gather from my presentation.

Thank you very much for inviting me to speak to you today. There is, no doubt, a part of me that is disappointed that we cannot all be together in person; then there’s the part of me that absolutely cringes at the idea of ever being in a group larger than six or seven people again.

I do thank the organizers, I will add, for not canceling today’s event when it couldn’t be held in person. My bank account thanks you. Like almost everyone, any sort of financial stability I might have once had has been completely upended. So I very much appreciate the work.

Although some communities have listed journalists as “essential workers,” no one claims that status for the keynote speaker. The “work” of being a keynote speaker feels even more ridiculous than usual these days. I’m not a practitioner in the field. I don’t work in a school or a library. I don’t teach. I don’t maintain the learning management system or the library management software. I don’t help faculty move their courses online. I’m not even confident I can share my screen or my slides in Zoom correctly. Me, I just sit on the sidelines and take notes as everything ed-tech passes by, like some snitty judge on Project Runway, knowing full well that contestants had, like, 24 hours to pull together a ball gown out of burlap scraps and kitchen twine and still having the audacity to tell them that, meh, it just wasn’t my favorite look, wouldn’t flatter most bodies, wouldn’t make anyone feel at all good wearing it.

I’m not a cheerleader or a motivational speaker, and for the first time ever, I sort of wish I were, because then I could tell you enthusiastically that you’re all doing great, and you’d all leave Zoom feeling great. I wish I could tell you honestly that everything’s going to be okay. No one ever hires me to say “everything’s going to be okay,” of course. I’d say that the role of the critical keynote speaker is awkward and unpleasant under the best of circumstances. And these are not the best of circumstances.

So I’ve thought a lot about what I wanted to say to you today. What does one say to the faculty and staff and students of CUNY right now? What does one say to people who live and work in New York? What does one say to anyone who works in education — at any institution anywhere? Is there any other message than “My God, I feel so sick and sad and angry and powerless to help”?

I thought at one point I’d just do something completely different and unrelated to the conference theme. Maybe I’d just tell you a story. A good keynote speaker should be a good storyteller, after all. Like, I’d tell you about the origins of Sea Monkeys — I did actually give a talk on that several years ago, because that conference had the word “reconstitute” in its theme and after hearing that I could not think of anything else other than advertisements in the back of comic books and the instructions to “just add water.” Or maybe I’d tell you a little bit about pigeons — I’ve done a whole keynote on that topic too — and on how the birds have been used as weapons of war and as models for education technology. But I hate repeating myself. So, maybe, I thought, I could find a nice metaphor we all could relate to right now that ties into the themes here today of “open,” resilience, and care — like how my friends are mailing one another sourdough starters, even though it’s impossible to find yeast or flour in the grocery stores, even though, as I’m mildly gluten intolerant, I really shouldn’t be eating bread. I didn’t think I could pull off a 40-minute talk on sourdough and open pedagogy — but someone should try.

So I’m going to try to stick to the theme as it was given to me back in December — truly a lifetime ago: “Toward an Open Future.” The other speakers today are going to do a great job of talking about that adjective “open,” I’m sure. If I recall correctly, the last time I spoke at CUNY, I talked about some of the problems I have with “open” and the assumptions that are often made around “open” and labor. You can find the transcript of that talk on my site, if you’re curious.

So instead of “open” — others have it covered — I’ve decided I’m going to tackle the preposition and the noun in that clause, “Toward an Open Future.” Mostly the noun. I want to talk to you today about the future — and I want to do so for mostly selfish reasons, I won’t lie. That’s how we keynote speakers roll. It’s all about us. Even more so in this awful Zoom format where I can’t see you roll your eyes or groan at my jokes. But I want to talk about the future strangely enough because I’m sick of it. I am utterly exhausted by all the pontification and speculation about what is going to happen in the coming weeks and months and years to our world. I am exhausted, and I am frightened. And if I hear one more ed-tech bro talk about the silver linings of the coronavirus and how finally finally school has been disrupted and will never be the same again, I’m gonna lose my shit.

In talking about the future, I don’t come here to predict or promise anything, although my goodness, keynote speakers really do love to do that too. I want to talk a little bit about how we have imagined the future and predicted the future in the past, how that’s changed (or not changed) over time, and how we ended up with a consultancy class of futurists, whose work it is to predict and prepare our institutions for tomorrow — a consultancy class of futurists who are probably going to have gigs at schools long after swaths of staff have been furloughed or fired.

I am fascinated, as the subtitle of my website probably indicates, by “the history of the future of education.” I think it’s interesting to consider this history because we can see in it what people in the past hoped that the future might be. Their imaginings and predictions were (are) never disinterested, and I don’t agree at all with the famous saying by computer scientist Alan Kay that “the best way to predict the future is to invent it.” I’ve argued elsewhere that the best way to predict the future is to issue a press release. The best way to predict the future of education is to get Thomas Friedman to write an op-ed in The New York Times about your idea and then a bunch of college administrators will likely believe that it’s inevitable.

Now, I will confess here, my academic background is in folklore and literature, so when I hear the word “futurists,” what comes to mind first are the Italian Futurists, proto-fascists many of them, who rejected the past and who embraced speed, industry, and violence. “We wish to destroy museums, libraries, academies of any sort,” Filippo Marinetti’s manifesto of Futurism, published in 1909, read, “and fight against moralism, feminism, and every kind of materialistic, self-serving cowardice.” I mean, you do hear echoes of Italian Futurism among some of today’s futurists and tech entrepreneurs, unfortunately. Take Anthony Levandowski, the creator of a self-driving car program and a religion that worships AI, for example: “The only thing that matters is the future,” he told a reporter from The New Yorker. “I don’t even know why we study history. It’s entertaining, I guess — the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made. In technology, all that matters is tomorrow.” So part of me, I won’t lie, is fascinated by the history of the future because I like to think that nothing would annoy a futurist more than to talk about the history of their -ism and to remind them someone else had the same idea a hundred years ago.

In all seriousness, the history of the future is important because “the future” isn’t simply a temporal construct — “Tomorrow, and tomorrow, and tomorrow, Creeps in this petty pace from day to day, To the last syllable of recorded time,” as Macbeth says. The future — as Macbeth figures out, I suppose — is a political problem. The history of the future is a study of political imagination and political will.

No doubt, we recognize this when, in education circles, we hear the predictions that pundits and entrepreneurs and politicians and oh my goodness yes consultants offer. A sampling of these from recent weeks, years, centuries:

“Books will soon be obsolete in schools” — Thomas Edison, patent troll (1913)

“By 2019, about 50 percent of high school courses will be delivered online” — Clayton Christensen and Michael Horn (2008)

“Was talking with someone last week about something unrelated + he remarked that he’s upset about paying $80K for his daughter to ‘watch college lectures online’ and it dawned on me that this could be the thing that finally bursts the bubble that Peter [Thiel] was talking about years ago” — Alexis Ohanian, co-founder of Reddit and husband of the greatest athlete of all time (2020)

“I think this crisis is the worst thing that could have happened to ed-tech. People can now see just how impractical and inferior it is to face to face classrooms. It can’t pretend anymore to be the next big thing. The world tried it, for months. Game over.” — Tom Bennett, founder of researchED (2020)

Online learning will be the “silver lining” of the coronavirus — Sal Khan, founder of Khan Academy (2020)

“It’s a great moment” for learning — Andreas Schleicher, head of education at the OECD (2020)

“You’re going to have a lot of young people who have experienced different forms of learning in this crisis, learning that was more fun, more empowering. They will go back to their teachers and say: can we do things differently?” — Andreas Schleicher, again (2020)

“In 15 years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.” – Clayton Christensen, Harvard Business School professor (2013)

“In 50 years, there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them.” – Sebastian Thrun, founder of Udacity (2012)

“Software is eating the world” — Marc Andreessen, venture capitalist and investor in Udacity (2011), and author of a blog post last week in which he laments that no one builds things in the world anymore

Now, unlike the epidemiological models and graphs that we’ve grown accustomed to reading, the statements I just read aren’t really based on data. They’re mostly based on bravado. The best way to predict the future, if you will, is to be a dude whose words get picked up by the news.

These predictions are, let’s be honest, mostly wishful thinking. Dangerous but wishful thinking. Fantasy. But when they get repeated often enough — to investors, university administrators, politicians, journalists, and the like — the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and control the future. Because this is key: the fantasy then becomes the basis for decision-making.

Some futurists do build models, of course. They assure us, their claims are based on research — it’s “market research” often, as the history of Cold War-era futurism is bound up in supply chain management. They make claims based on data. They make graphs — proprietary graphic presentations — that are meant to help businesses, schools, governments (or the administrators and executives therein, at least) make good decisions about technology and about what is always, in these futurists’ estimation, an inevitably more technological future. The Forrester Wave, for example. Disruptive innovation. The Gartner Hype Cycle.

According to the Gartner Hype Cycle, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage, the “peak of inflated expectations,” after so many promises are made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly — this is the “slope of enlightenment.” Eventually the new technology becomes mainstream — the “plateau of productivity.”
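Since Gartner’s actual methodology is proprietary, any rendering of the curve is guesswork. Here is a minimal sketch in Python of the model’s shape: the five stage names are Gartner’s, but the cutoffs and the “expectations” curve are entirely invented for illustration.

```python
import math

# The five stage names are Gartner's; everything numeric below is invented,
# since the real methodology is proprietary.
PHASES = [
    "Technology Trigger",
    "Peak of Inflated Expectations",
    "Trough of Disillusionment",
    "Slope of Enlightenment",
    "Plateau of Productivity",
]

def expectations(t: float) -> float:
    """A made-up 'expectations' score at time t in [0, 1]: a sharp early
    spike, a collapse into the trough, then a slow climb to a plateau."""
    spike = math.exp(-((t - 0.15) ** 2) / 0.005)
    climb = 0.5 / (1 + math.exp(-12 * (t - 0.6)))
    return spike + climb

def phase(t: float) -> str:
    """Label a point on the (invented) timeline with a stage name."""
    cutoffs = [0.08, 0.25, 0.5, 0.8, 1.0]
    for cutoff, name in zip(cutoffs, PHASES):
        if t <= cutoff:
            return name
    return PHASES[-1]

for t in (0.05, 0.15, 0.35, 0.65, 0.95):
    print(f"t={t:.2f}  {phase(t):30s} expectations={expectations(t):.2f}")
```

Even in toy form, the curve makes the model’s optimism visible: by construction, every technology eventually lands on the plateau.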

It’s not that hard to identify significant problems with any of these models. Take the Hype Cycle. It’s not a particularly scientific model. Gartner’s methodology is proprietary, no surprise — in other words, hidden from scrutiny. Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place emerging technologies on the curve. When Gartner uses the word “methodology,” it is trying to signify that its futurism is a “science,” and what it really means is “expensive reports you should buy to help you make better business decisions or at the very least to illustrate a point in a webinar.”

Can the Hype Cycle really help us make better decisions? I mean, look around us. After all, it doesn’t help explain why technologies move from one stage to another. It doesn’t account for precursors that make acceptance of a new technology happen more smoothly — new technologies rarely appear out of nowhere. Nor does it address the political or social occurrences that might prompt or preclude technology adoption. In the end, it is simply too optimistic, unreasonably so, I’d argue. No matter how silly or useless or terrible a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted.

Where would you plot the Segway?

In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.

Where would you plot Zoom?

And here’s the thing: the idea that we would even want Zoom — or any digital technology, really — to end up on a “plateau of productivity” revolts me. I’d prefer to reside in the jungle of justice, thank you, on the outskirts of this whole market-oriented topography.

And maybe this gets to the heart as to why I don’t consider myself a futurist. I don’t share this excitement for an increasingly technological future; I don’t believe that more technology means the world gets better. I don’t believe in technological progress.

That is, of course, a wildly un-American position to hold.

This is “American Progress,” an 1872 painting by John Gast. It was commissioned by George Crofutt, the publisher of a travel guide magazine called Western World. Crofutt wanted Gast to paint a “beautiful and charming female… floating westward through the air, bearing on her forehead the ‘Star of Empire.'” In her right hand, the figure was to hold a textbook, “the emblem of education… and national enlightenment.” And with her left hand, Crofutt explained, “she unfolds and stretches the slender wires of the telegraph, that are to flash intelligence throughout the land.” Education, as this painting so perfectly depicts, has been bound up in notions of technology — “progress!” — since the very beginnings of this country.

It should be noted too that, as Crofutt directed, the painting also shows the Native Americans “fleeing from Progress.” They “turn their despairing faces toward the setting sun, as they flee from the presence of the wondrous vision. The ‘Star’ is too much for them.” So education, let us add, has been bound up in notions of technology and Empire and white supremacy. “Progress,” we’re told. “Inevitability.” I reject that.

The frontier — phew, there’s another way in which “open” is a problematic adjective, eh? but that’s a topic for another talk — remains an important metaphor in the American conceptualization of the future. New places, new fields for exploration and conquest. There’s that bravado again — just like in the predictions I read to you earlier — that confidence even when stepping into the unknown.

There have been times, no doubt, when that confidence was shaken. Now is surely one of those times. The future feels very uncertain, very unclear. It’ll be a boon for those futurist consultants. It’ll be a boon for those who offer predictive models and who sell predictive analytics. People want to know how to prepare. That’s understandable. But I’m not sure futurist-consultants have ever helped us prepare — certainly not for a public sphere, including education, that is resilient or just.

Here is why, I’d argue, the history of the future might be worth thinking about. Because much of what futurists do today — their graphs and their reports — was developed in the Cold War era. These practices emerged in response to the totalitarianism that the earlier futurism — its love of war and machines and speed — had become. The field of futurism — “futurology” — was facilitated, no doubt, by advances in computing that made possible more powerful multivariate analysis, simulation, modeling. It coincided as well with the rise of behavioral psychology — this is the part where I could talk a lot about pigeons — and the desire to be able to efficiently engineer systems, society, people.

Perhaps one of the most famous future-oriented think tanks, the RAND Corporation — RAND stands for Research ANd Development — was founded in 1948. It grew out of Project RAND, a US Air Force project that worked on new analytical approaches to war. The RAND Corporation applied these approaches to civilian matters as well — urban planning and technology adoption, along with space exploration and nuclear warfare, for example. The RAND Corporation helped develop game theory and rational choice theory, following the publication of John von Neumann and Oskar Morgenstern’s book Theory of Games and Economic Behavior. (The Prisoner’s Dilemma, game theory’s most famous thought experiment, was itself formulated at RAND. von Neumann was a consultant at RAND. He’d worked on the Manhattan Project, as many at RAND had, and was, of course, a pioneer in the development of computing.)
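For anyone who hasn’t met the Prisoner’s Dilemma, here is a minimal sketch using the textbook payoffs (years in prison, so lower is better; the numbers are illustrative, not RAND’s). Each player’s best response is to defect no matter what the other does, even though mutual cooperation would leave both better off.

```python
# A minimal Prisoner's Dilemma: payoffs are illustrative textbook numbers
# (years in prison, so lower is better), not anything from RAND's archives.

# PAYOFFS[(my_move, their_move)] = my sentence in years
PAYOFFS = {
    ("cooperate", "cooperate"): 1,   # both stay silent: light sentences
    ("cooperate", "defect"):   10,   # I stay silent, they betray me
    ("defect",    "cooperate"): 0,   # I betray them, I walk free
    ("defect",    "defect"):    5,   # mutual betrayal
}

def best_response(their_move: str) -> str:
    """The move that minimizes my sentence, given the other player's move."""
    return min(("cooperate", "defect"), key=lambda me: PAYOFFS[(me, their_move)])

for theirs in ("cooperate", "defect"):
    print(f"if they {theirs}, my best response is to {best_response(theirs)}")
# Defection dominates, so "rational" players land on (5, 5),
# even though mutual cooperation (1, 1) is better for both.
```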

The RAND analysts soon found that there were limitations to what game theory could predict about decision-making. So they began experimenting with other kinds of simulations, particularly ones that involved exercises in imagining alternative futures and then, of course, shaping behaviors and attitudes in such a way as to attain the desired outcomes. In 1964, RAND researchers Olaf Helmer and Theodore Gordon released Report on a Long-Range Forecasting Study, in which they explained their new technique, what they called the Delphi method. Not “long term,” let’s note; “long range” — a spatial concept, not a temporal one, and a concept tied to military strategy, to frontiers.

The report “describes an experimental trend-predicting exercise covering a period extending as far as fifty years into the future,” the authors wrote. “The experiment used a sequence of questionnaires to elicit predictions from individual experts in six broad areas: scientific breakthroughs, population growth, automation, space progress, probability and prevention of war, and future weapons systems. A summary of responses from each round of questionnaires was fed back to the respondents before they replied to each succeeding round of questionnaires.” The Delphi method solicited the opinions of “experts” but then it steered those opinions towards a consensus. (This method, incidentally, has been used to develop the Horizon Reports in education technology that purported to identify ed-tech trends “on the horizon.”) Again, this wasn’t so much about predicting the future, despite the reference to the great oracle at Delphi, as it was about shaping the future, shaping behavior — the behavior of experts and in turn the behavior of decision-makers. This forecasting was actually about management, about control.
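To make that feedback loop concrete, here is a minimal sketch with invented numbers. In each round, every expert sees the group’s summary (here, the median forecast) and revises toward it, so the spread of opinion narrows by construction, whether or not anyone has actually gotten wiser about the future.

```python
# A toy Delphi loop: the forecasts and the "pull" factor are invented,
# purely to show how iterated feedback manufactures consensus.

def delphi_round(estimates: list[float], pull: float = 0.5) -> list[float]:
    """One questionnaire round: each expert sees the group median
    (the summary 'fed back to the respondents') and revises toward it."""
    median = sorted(estimates)[len(estimates) // 2]
    return [e + pull * (median - e) for e in estimates]

# Hypothetical first-round forecasts: "in how many years will X happen?"
estimates = [5.0, 10.0, 20.0, 40.0, 75.0]

for round_number in range(1, 5):
    estimates = delphi_round(estimates)
    spread = max(estimates) - min(estimates)
    print(f"round {round_number}: spread = {spread:.1f} years")
```

The spread shrinks every round; the consensus is an artifact of the procedure as much as a discovery, which is precisely the sense in which this forecasting was about management and control.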

The tools and methods for modeling war-games — for predicting in the Cold War era the actions of communist regimes abroad — were turned towards American society — for predicting forms of social unrest at home.

This kind of futurism — one that relies on the rationality of scientific men and their machines in the service of liberalism and corporate capitalism and Empire — is, of course, just one way of thinking about the future. But it has remained a powerful one, permeating many of our institutions — the OECD, the World Economic Forum, The Wall Street Journal and the like. And in doing so, this kind of futurism has foreclosed other ways of imagining the future — those based on emotion, care, refusal, resistance, love.

That is, I think, what this conference gets at with its theme “Toward an Open Future.” It is a reimagining of teaching and learning and research, one that we must pursue with great urgency. We have to think about, we have to talk about, we have to make strides toward an open future before the futurist-consultants come in with their predictive models and techno-solutionism and tell the bosses they have to sell off the world to save it. These futurists promise certainty. They promise inevitability. And with their models, no one bears responsibility. “It was the algorithm,” they shrug.

In 1961 — while the Cold War future-forecasters were building their models and their corporate client-base — Hannah Arendt published a series of essays titled Between Past and Future in which she argued that, by severing ties to history — by embracing the sort of futurism that led to totalitarianism and World War II, but also by espousing theories and practices of behavioral engineering — we had forsaken our responsibility for a meaningful, human future. We had surrendered a future in which we are free to choose. (Curse Milton Friedman for thoroughly ruining that turn of phrase.)

A responsibility to the world and to the past and to the future, Arendt argues, should be at the core of our endeavors in education. “Education is the point at which we decide whether we love the world enough to assume responsibility for it,” she writes, “and by the same token save it from that ruin which, except for renewal, except for the coming of the new and young, would be inevitable. And education, too, is where we decide whether we love our children enough not to expel them from our world and leave them to their own devices, nor to strike from their hands their chance of undertaking something new, something unforeseen by us, but to prepare them in advance for the task of renewing a common world.”

That renewal always brings with it uncertainty, despite the predictions that the consultants and op-ed columnists want to sell us — predictions of a world that is hollowed out, closed off, sold off, “safe.” Remember: their predictions led us here in the first place, steering management towards institutional decay. I saw someone on Twitter ask the other day, “Why are schools better prepared for school shootings than they are to handle cancellation and closure?” I think we know why: because that’s the future schools were sold. One of surveillance and control. It’s the same future they’re going to repackage and rebrand for us at home. Let me repeat what I said earlier: the history of the future is a study of political imagination and political will. The future is a political problem.

We do not know what the future holds. Indeed it might seem almost impossible to imagine how, at this stage of the pandemic with catastrophic climate change also looming on the horizon, we can, as Arendt urges, renew a common world. And yet we must. It is our responsibility to do so. God knows the consultants are going to try to beat us to it.
