Philip K. Dick on Refusing--or Not Refusing--New Technologies
Should teachers feel compelled to teach and use artificial intelligence in their courses?
I picked away at this post for four weeks, and still have concerns about publishing it. As I explain at the end of the essay, I don’t want to become the crank who complains about AI all of the time. I have seen some excellent uses of it in education, and have praised those uses in other writings. But I use this space to raise questions that don’t have easy answers, and I believe the question raised in this essay should challenge us more than it currently seems to be challenging people.
In early 2023, I was invited to a private high school in New Mexico to give a series of presentations to the faculty based on my book Distracted: Why Students Can’t Focus and What You Can Do About It. I spent most of a full day speaking to the school’s teachers or consulting with the administration about the ideas presented in the book, and how they could be applied to their courses and classrooms.
On the evening before my day with the faculty, I spoke to the parents of the students. The arguments I presented to the parents were similar to the ones that I shared with the teachers, with the ideas for application tweaked appropriately for their roles in the students’ lives. To both audiences I explained that distractible brains have plagued humans for at least a couple of millennia. Writers and philosophers throughout the ages have lamented their inability to focus, just as we lament today. We wish we could pay better attention, and we curse our distractions.
But we also have a slippery way of defining “distraction.” Writing a poem seems like an attention-worthy activity—unless you are writing the poem as a way to distract yourself from working on your dissertation. Scrolling through Instagram seems like the ultimate distracting experience—unless you are an entrepreneur seeking to leverage social media to grow your business. What we mean by distraction depends upon the context. Anything can become a distraction if it keeps us from paying attention to something we are trying to accomplish.
From these premises, I presented ideas to the parents for how they and their teenagers could cultivate attention in service to their goals, rather than banning all of the technologies from their lives or blaming technologies for all of their problems. After the talk concluded, a parent approached me at the podium. I would guess he was my age, maybe early 50s. He shook my hand and said he enjoyed the talk, but he disagreed with me about one thing.
“You said it was useless to fight or banish smart phones from our lives. But why is that so? Nobody in my house, including the kids, has smart phones. We’re happier for it. And as their parent, I believe that they will be better off for it in the long run. What’s wrong with that?”
Honestly, I was a little taken aback—in part because I used to feel this same way. But as technologies infiltrated our educational system, and I observed them making plenty of things easier or more efficient or more effective, I came around to my current perspective (which evolved even further when I read Christina Moore’s excellent Mobile-Mindful Teaching and Learning). In response to the parent’s question, I fumbled out an unmemorable answer. It had been a long day of traveling and speaking.
This conversation returned to my memory over winter break as I was reading Philip K. Dick’s Do Androids Dream of Electric Sheep?, a science fiction novel from the 1960s that was made into the movie Blade Runner. You can find a plot summary of Dick’s novel here, but for our purposes what you need to know is that the earth has been largely laid to waste, and most humans have migrated to other planets, along with androids that resemble humans in almost every way. Androids are largely forbidden on earth, but they sometimes escape their planets and seek refuge on the depopulated earth. The novel’s protagonist, Rick Deckard, works as a bounty hunter, tracking down illegal androids on earth and eliminating them.
At the start of the novel, we learn about the manufacture of a new sophisticated breed of androids, ones which are virtually indistinguishable from humans. A few of these androids are hiding out in a once-populous city on earth (which seems to be San Francisco). A secretary at the police station informs Deckard that their boss has reached out to the Russian government to join a lawsuit stopping the creation of these androids. Deckard responds with bitter resignation:
“The Soviet police can’t do any more than we can,” he said. Legally, the manufacturers of the Nexus-6 brain unit operated under colonial law, their parent autofactory being on Mars. “We had better just accept the new unit as a fact of life,” he said. “It’s always been this way, with every improved brain unit that’s come along.”
Reading this section of the book immediately put me back into that auditorium in Albuquerque, struggling to find an answer to a difficult question. The parent’s question seems even more difficult now, because when that parent posed it to me, ChatGPT had not yet been introduced to the world. In its most basic form the question is this: When should we resist or reject a new technological development?
Suppose for a minute that we envision a timeline of moments when a new technology was introduced, and we had the choice to acquiesce or resist each one as it emerged:
· When the internet became available
· When smartphones were invented
· When generative artificial intelligence appeared
· When almost-human androids became so sophisticated that you couldn’t detect whether they were human or not
Philip K. Dick died before the first three events occurred, but in response to the (hypothetical) last one his answer seems kind of obvious: You definitely should have resisted then. We can extract this answer from the setting of the novel (the devastated landscape of the earth), its characters (deluded and unhappy), and the events of the novel (no spoilers—read it for yourself). While Deckard resigns himself to the fact that new technologies can’t be resisted, the author seems to suggest that we always have a choice when it comes to technological progress. That choice to resist might seem inconceivable to most of us, but it remains. We just don’t choose it.
I have been thinking about this choice recently as we see education writers beginning to populate print and digital media with arguments for incorporating AI into every aspect of our lives, institutions, and courses. Most jobs in the future, they explain, will depend upon AI. If we want students to find jobs and thrive in their careers, we need to educate them about how it operates. We need to model how to work with it productively.
Ethan Mollick’s Substack, One Useful Thing, has made this case repeatedly, most recently in these words:
Managers, educators, and policy makers need to recognize that we are living in an AI-haunted world, and we need to both adjust to it, and shape it, in ways that increase its benefits and mitigate its harms. We need to start now, because we are facing exponential change, and that means that even the signs and portents I have discussed in this post are quickly becoming prophecies of the past, rather than indicators of the future.
Arguments like this imply that anyone who refuses to engage with or use generative AI has decided to sit on the sidelines during this most consequential moment in human history. If we don’t engage, we neglect our responsibilities as adults and educators. Teachers who choose to opt out of the use of AI in their courses are guilty of pedagogical malpractice. How will my students find and thrive in their jobs in an AI-haunted world if I don’t prepare them for it?
While I don’t wish to refuse AI in a general way, I refuse to accept this implication. Teachers, parents, students, humans—all of us have the option to refuse generative AI when it seems right for us. And in some cases, refusing engagement with AI will be the right thing to do. Perhaps we choose to refuse AI because we care about climate change and pen-and-paper can do the job just fine, instead of sucking up fossil fuels (you should definitely read this analysis of AI and the climate). Maybe our discipline prioritizes reflection over efficiency (hey, philosophy and theology!), and the real insights come from slowing down rather than from speeding up. If we work on community-based problems, we might believe that accompaniment matters more than problem-solving. Or we believe that some group of students—for whatever reason—need more human connection in their feedback than they need the mechanical details of how to master a skill.
I admit freely that these examples reflect my humanities training and orientation. Perhaps they will not resonate with folks who teach in disciplines in which efficiency and economy of task completion are the highest priorities, such as business or engineering. Or disciplines which can speak to urgent crises, such as a pandemic or climate change. Or ones in which we are faced with wicked problems of any sort, and we should avail ourselves of every tool at hand, from pencil-and-paper brainstorming to AI bots.
This is my second post on artificial intelligence, and both of them have been designed to raise hard questions about its place in higher education and the world more generally. But I don’t want to be pegged as an anti-AI person, because I don’t feel that way. I use ChatGPT. In my forthcoming book, Writing Like a Teacher, I have a series of recommendations for how nonfiction writers can make strategic use of AI in their writing process. I find Ethan Mollick’s Substack thought-provoking and, indeed, useful.
But Philip K. Dick and Rick Deckard haunt me a little bit as I engage with ChatGPT and its spawn. They make me want to walk slowly as I use AI, write about it, or consider its places in education. Artificial intelligence has no use for slow walking. It moves fast. It doesn’t pause, reflect, or question. These are human activities. If we believe they matter, and contribute to a thriving human life and the life of the planet, we should preserve spaces for them. That might translate into whole chunks of a college course which are AI-free, or entire courses in a curriculum which fence out AI completely.
These are choices that teachers should be able to make for themselves, without guilt-trips from artificial intelligence advocates, their students, or their administrators. We are receiving a lot of pressure to move quickly with artificial intelligence. If you wish to move fast, by all means, jump in your autonomous car and instruct it to get moving. If you prefer a more reflective pace, feel no guilt about your slow walk. I will accompany you.
Appreciate this, Jim, and glad you chose to publish it. What I continue to return to is that whatever we do with generative AI, or any other tool for that matter, we must do with a clear understanding of what we hope to do with it and what it will do to us and our communities.
I stand with you in the space of the radical middle: I think there are places where AI and generative AI can be quite powerful and useful. But in other applications, the drawbacks outweigh the benefits.
I've been pulling on this thread recently in my own writing and feel that my thinking is quite aligned with your own (from an engineer at that!). A few posts that might resonate with you:
Andy Crouch's Innovation Bargain and how it helps us to see the both-and nature of technology: https://joshbrake.substack.com/p/the-innovation-bargain
Why AI is the same as, and yet different from, previous technologies, and how we should consider engaging with it thoughtfully: https://joshbrake.substack.com/p/containing-the-coming-wave