Maybe it’s not so much the command prompt I’m nostalgic for, but the days when the computer wouldn’t do anything without me — I had to explicitly tell the computer what I wanted to do, and if I didn’t tell it, it would just sit there, patiently, with a dumb look on its face.
I really miss how computers used to be “dumb” in this way. The primary computer in my life — my “smartphone” — is too smart. It used to constantly push things on me — push notifications — letting me know about all sorts of stuff it thought I wanted to know about, and it continued doing this until I had the good sense to turn them all off. It’s dumber now, and much better.
Besides text messages and Snapchat pictures of my new nephew, I don’t get notifications on my phone and haven’t for a long time. I can’t imagine how people with news or social media apps subject themselves to the onslaught of Fresh Hell in their pockets all day.
In Information Doesn’t Want to Be Free, Cory Doctorow writes about the need to be protected from computers as they burrow further into our lives and bodies:
I want to be sure that it is designed to take orders from its user, and to hide nothing.
Take orders and hide nothing. Command and control. Pull rather than push. Make Computers Dumb Again.
Relatedly, at Mashable, “Stop reading what Facebook tells you to read” calls for consumers to break out of Facebook’s detention center walled garden and use a web browser to find things:
By choosing to be a reader of websites whose voices and ideas you’re fundamentally interested in and care about, you’re taking control. And by doing that, you’ll chip away at the incentive publishers have to create headlines and stories weaponized for the purpose of sharing on social media. You’ll be stripping away at the motivation for websites everywhere (including this one) to make dumb hollow mindgarbage. At the same time, you’ll increase the incentive for these websites to be (if nothing else) more consistent and less desperate for your attention.
Richard Polt has an interesting post about the assumption of paper in speculative fiction from the past:
Apparently, a mere 40 years ago it still didn’t occur to some science fiction novelists that paper would become a second-class citizen to glass screens studded with millions of tiny pixels.
Note that the word “paper” does not actually appear in any of these passages. That’s the way it is with things we take for granted: they’re as invisible as the air we breathe.
I expect that our own speculative futures will look just as ridiculous 40 years from now. What developments are we failing to imagine?
(This reminds me of Donald Rumsfeld’s “unknown unknowns” briefing, which actually establishes a helpful framework for analysis, and Chuck Klosterman’s great book But What If We’re Wrong?, which interrogates the assumptions we’ve turned into self-evident conclusions.)
The question of paper’s place in a digital society popped into my life today at a doctor’s office. I had to fill out an intake form as usual, but with a twist: it was the first time the form was digital. It was on a PhreesiaPad, a touchscreen encased by a clunky orange plastic shell that made it look like a kid’s toy. The opening screen said “Paper Is So 20th Century”.
Paper’s fearsome competitor.
I assume these devices help speed up information processing in clinics and serve the all-encompassing idol of Efficiency in business. But if I had to bet on whether the PhreesiaPad or paper will still be around in 10 years, even 5 years, it’s paper all the way. I’ll be surprised if all those cheaply made tablets and their ilk make it to next year before getting disrupted into obsolescence by the Next Big Thing.
Paper is so 20th century. And 19th. And 18th. And 17th. And 16th. And 15th. And so on for a long, long time. So long that you can count paper’s age in millennia. Silicon Valley startups and speculative fiction authors have a lot of intriguing ideas about what the future will look like, but until they figure out how to close in on paper’s 2,000-year head start I won’t be worried about its fate.
So goes the thesis statement of The New Analog: Listening and Reconnecting in a Digital World, a wonderful new book by musician Damon Krukowski. He reckons with how digital media has changed how we consume music and what we’ve come to expect from it. New technologies have begotten new ways of listening, but to get to that newness, music has been stripped of its context and surrounding “noise” and turned (for a profit) into pure “signal” over a disembodied digital stream.
In theory this would be ideal; noise is usually considered a bad thing, and boosting signal above it separates the gold from the dross, the wheat from the chaff, etc. But what happens when everything becomes signal? What happens when we cede the authority to determine what ought to be signal to Spotify’s mysterious algorithms and the rigid perfectionism of digital recording equipment?
Krukowski illuminates what we lose when we ignore or eliminate noise. It’s not only the small things—incidental studio sounds captured alongside the recorded music and how smartphones flatten the richness of our voices—but bigger ones too: how we’ve come to occupy space “simultaneously but not together”, and how streaming encourages “ahistorical listening.”
This isn’t a fusty screed against newfangled media. Krukowski avoids nostalgia as he straddles the analog/digital divide, opting for clear-headed rumination on “aspects of the analog that persist—that must persist—that we need persist—in the digital era.” These aspects involve early 20th century player pianos, Sinatra’s microphone technique, the “loudness wars”, and Napster, among other topics I learned a lot about.
The book overlaps a lot with Krukowski’s podcast miniseries Ways of Hearing, though I’m not sure which informed the other more. Ironically, despite its inability to convey sound, I thought the book was better at explaining the concepts and aural phenomena of analog that Krukowski dives into. With the relentless iterations of new media keeping us ever focused on the present and future, it’s more important than ever for thoughtful critics like Krukowski and Nicholas Carr and Alan Jacobs to help promote intentional thinking and challenge our modern assumptions.
This is the Google Maps Street View of my parents’ home. It’s from 2007, which is old by Google Maps standards. The current view looks very different ten years later. The house is a different color, the front lawn is now completely garden (more like a jungle at this point), and the tree on the road verge was slain by ash borer.
All three cars are gone too. The black Corolla was my sister’s first car. The blue Corolla we inherited from my grandma; it nearly won Worst Car senior year, and my cymbals were stolen from it once, but I remember it fondly. The white Camry was an inheritance from the other grandma, since replaced by another.
I suspect the Google Maps Camera Car will make its way back to this street one day and replace this image with a new one. Until then this snapshot will remain like a mural, a mosaic of memory, unaware a new coat of paint will erase it from existence for everyone but those who remember it.
Perhaps it was because I had just finished reading Neil Postman’s 1992 book Technopoly: The Surrender of Culture to Technology when I started in on Rod Dreher’s latest, The Benedict Option: A Strategy for Christians in a Post-Christian Nation, but I was detecting a subtle yet strong Postmanian vibe throughout the book. Then, when Dreher actually quoted Technopoly, I realized that wasn’t a coincidence.
First, a disclaimer: I am (briefly) in The Benedict Option. When Dreher put out a call on his blog for examples of Christian-run businesses, I emailed him about Reba Place Fellowship, the intentional Christian community that over the years has spun off church ministries into actual businesses, like a bicycle repair shop and an Amish furniture store. Months later, in a reply to my comment on one of his unrelated blog posts, he told me I was in the book, much to my surprise. And sure enough, on page 189 there was my name and a short paragraph adapted from my email about Reba.
I felt compelled to alert Dreher about RPF not only because I think they are a living, functional example of the Benedict Option in action, but also because I’ve followed Rod Dreher’s blog for a while, really enjoyed his books Crunchy Cons and The Little Way of Ruthie Leming, and hoped his new one would contribute to the conversation about religious engagement in civic life.
The Benedict Option really does feel like the secular successor to Technopoly. The two books share a pessimism about the Way Things Are Now and a dire outlook of what’s to come. Dreher’s thesis is that Christians have lost the culture wars and need to reconsider their embedded relationship with the wider (Western) culture, in order to strengthen what’s left of the Church before a new anti-religion dark age descends. This seems like a natural response to the trajectory of Postman’s theory of the Technopoly, which he defines as “totalitarian technocracy” and “the submission of all forms of cultural life to the sovereignty of technique and technology.”
Written 25 years ago, several passages in Technopoly would be right at home in The Benedict Option, like the one about the erosion of cultural symbols:
In Technopoly, the trivialization of significant cultural symbols is largely conducted by commercial enterprise. This occurs not because corporate America is greedy but because the adoration of technology preempts the adoration of anything else. … Tradition is, in fact, nothing but the acknowledgment of the authority of symbols and the relevance of the narratives that gave birth to them. With the erosion of symbols there follows a loss of narrative, which is one of the most debilitating consequences of Technopoly’s power.
And Technopoly’s hollow solipsism:
The Technopoly story is without a moral center. It puts in its place efficiency, interest, and economic advantage. It promises heaven on earth through the conveniences of technological progress. It casts aside all traditional narratives and symbols that suggest stability and orderliness, and tells, instead, of a life of skills, technical expertise, and the ecstasy of consumption. Its purpose is to produce functionaries for an ongoing Technopoly.
Technopoly offers so much more to unpack, much of it specifically related to technology and education, but another nugget I thought aligned very well with Dreher’s Benedict Option is Postman’s call for “those who wish to defend themselves against the worst effects of the American Technopoly” to become “loving resistance fighters.” He defines a technological resistance fighter as someone who “maintains an epistemological and psychic distance from any technology, so that it always appears somewhat strange, never inevitable, never natural.”
Religious resistance fighters don’t “run for the hills,” as critics of the Benedict Option claim. (Though Dreher does end the book with Benedictine monks in Italy literally running for the hills after an earthquake destroys their monastery—a reasonable action, but ironic given his frustration with the “run for the hills” criticism.) In fact, the work of resistance requires direct engagement within the larger cultural life. But it also requires deliberate and distinctive separation—if not physically, then spiritually, ethically, and intellectually.
Dreher bemoans the submission of churchgoers to the pressures of secular culture (i.e. the Technopoly), whether it’s the now widespread acceptance of gay marriage, the rootless and self-interested browsing of different churches, or the unfettered access to technology parents allow their children. The principles in the Rule of St. Benedict, originally established for sixth-century monks cloistered away from the chaotic post-Rome Europe, offer a way for modern Christians to shore up their spiritual discipline while reconnecting with ancient traditions.
Most of his proposals (neatly summarized here) should not be terribly controversial among committed believers, though some, like pulling your kids out of public school, seem unduly influenced by his alarmism and are much easier said than done.
But that seems to be his point: Christianity isn’t supposed to be easy. Monks don’t join a monastery to sit around and avoid the world; they work hard! They take the claims and commandments of their Savior and Scripture seriously and endeavor to follow them.
Postman has been proven right. He didn’t live to see today’s wholesale surrender to smartphones and Silicon Valley’s tech-utopianism, but he’d have a serious case of the “I told you so”s if he did. Whether Dreher’s predictions for the demise of Christianity also come to pass remains to be seen, but you don’t have to be a doomsday prepping zealot to realize that it is good to hope for the best while preparing for the worst.
Let me exhort you, people: close Twitter and read a book. Take delight in something well-made, well-made because the author loved her task and sought to bring her best intellectual resources to bear on her work. Take delight in words crafted to increase the world’s store of intelligence, to share what the author knows and bring forth knowledge in readers. It’s a better way for us to live than to spend even a few minutes a day in the company of people who have made the cultivation of stupidity into a virtue.
This happens to me all the time: I hear about a book (or movie or album, but usually book) and find it at my library, then I read it and see mention of another book or figure, sending me off into that direction, where I find another book to read. And so on. I’ll call it the Wikipedia Effect, which is a little less hippie-dippie than calling it the Everything Is Connected Effect, though it’s of the same spirit.
This time, I listened to the 99% Invisible episode on the U.S. Post Office, based on Winifred Gallagher’s new book How the Post Office Created America: A History, which I went to look at in the stacks. I didn’t end up checking it out, but as my eyes wandered a little farther down the shelf I did see an intriguing title: A Thread Across the Ocean: The Heroic Story of the Transatlantic Cable by John Steele Gordon.
Calling the story “heroic” is a bit much, but it’s a quick and well-done story of the small group of monied men in mid-nineteenth-century New York who staked their fortunes on basically willing the oceanic cable into being, even after some pretty serious setbacks. It’s a good companion with Tom Standage’s The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-Line Pioneers, a broader history of the invention of telegraphy.
I spotted it on a shelf at the library when I was looking for something completely different—is there a word for serendipity striking in the library? Librindipity?—but my interest in it made me realize I’m intrigued by the stories of how innovative technologies came into being.
In addition to these two books about the telegraph, I’ve already read a few books I think fit into this theme of the development of a revolutionary technology or notable technical achievement:
Screw and screwdriver (One Good Turn: A Natural History of the Screwdriver and the Screw by Witold Rybczynski)
Chairs (Now I Sit Me Down: From Klismos to Plastic Chair by Witold Rybczynski)
Photography (River of Shadows: Eadweard Muybridge and the Technological Wild West by Rebecca Solnit)
Some of these were heralded in their time, known right away to be revolutionary, but some were not. I’m interested in both: how things came into being whether we noticed them or not.
A quick brainstorm yielded a few more ideas for future reading along these lines. (I’ll need a hashtag for when I catch up with these. Let’s go with #TechnicallyFirst). There’s no guarantee I’ll read these; they’re just ideas gathered in one place for future reference:
Transcontinental Railroad (Nothing Like It in the World: The Men Who Built the Transcontinental Railroad by Stephen Ambrose)
Interstate Highways (The Big Roads: The Untold Story of the Engineers, Visionaries, and Trailblazers Who Created the American Superhighways by Earl Swift)
Electricity (Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World by Jill Jonnes)
Pencils (The Pencil: A History Of Design And Circumstance by Henry Petroski)
Will have to keep adding to the list. But I thank A Thread Across the Ocean for sending me down this path, wherever it leads.
A chair is an everyday object with which the human body has an intimate relationship. You sit down in an armchair and it embraces you, you rub against it, you caress the fabric, touch the wood, grip the arms. It is this intimacy, not merely utility, that ultimately distinguishes a beautiful chair from a beautiful painting. If you sit on it, can it still be art? Perhaps it is more.
Indeed it is. Witold Rybczynski’s new book Now I Sit Me Down: From Klismos to Plastic Chair: A Natural History is one of my favorite genres: a nichestory (as in niche + history). Like the first Rybczynski book I read (One Good Turn: A Natural History of the Screwdriver and the Screw), this one is a loving and learned micro-history of an everyday thing we usually don’t regard at all. The book weaves Rybczynski’s expertise and personal experience with stories about influential designers and craftsmen throughout history, along with some wider cultural criticism.
NPR’s review of the book has a nice collection of Rybczynski’s own illustrations from the book of the many different kinds of chairs he writes about. After reading this you’ll see them everywhere.
A good argument could be made for several different technologies being the ideal tool for writers. Pen and paper have proved durable and flexible but aren’t easily manipulated. Typewriters provide an attractive single-purpose distraction-free environment but don’t allow for easy duplication. Modern computers are powerful and multi-purpose, but easily distract.
We all are fortunate to live in a time when we can choose between these options. That wasn’t the case until certain benchmarks in history, which Matthew Kirschenbaum explores in his new book Track Changes: A Literary History of Word Processing (which I learned about from the review in The New Republic). I was born in 1987, so I missed the period Kirschenbaum covers here (mostly the 1970s and ’80s), but I distinctly remember Windows 95, floppy disks, and everything going much slower than it does now. I wouldn’t choose to be transported back to the early ’80s, when having a home computer required so much more work than it does now. Some people liked that work, and hey, to each their own. But I’m a late-adopter. Whether it’s a new device, app, or other web service, I’m happy to let the early-adopters suffer through the bugs and relative paucity of features while I wait for things to get smoother and more robust.
The book explores several things I’m moderately interested in — chiefly literature, philosophy of writing, and the technical aspects of early personal computers — so I thought I’d give it a whirl and see what it came to. Since Kirschenbaum goes deep on those things I’m only moderately interested in, I found myself skimming through several passages that someone more invested in the topic might find more worthwhile. Overall, though, I thought it was a nice niche history, with perspective on where we’ve come from as a creative species and how the tools writers specifically use have shaped their work.
At the beginning of December I had my wife change my Twitter password so I couldn’t access it. I’ve learned that I’m a cold turkey guy. Maybe I have some elements of an addictive personality, because for things like social media that act as mini dopamine triggers, I can’t use them moderately. I’m either on them every day, usually several times, or I deactivate the account and pretend they don’t exist for a time in order to unclog my mental plumbing.
I really like Twitter. It’s nice to communicate occasionally with people I admire, get the latest on the things I enjoy, and above all share the things I’m proud of or interested in. I don’t have to deal with the spam and garbage trolls that celebrities and well-known figures endure, so it’s generally a pleasant experience.
I just sought it out too much. This sabbatical forces me to live without it for a time—to rewire my brain to not think in tweets, seek validation in retweets and likes, and be proud of how clever I am.
I discovered, located at my local library, checked out, and read Richard Polt’s The Typewriter’s Revolution within about two days. And wouldn’t you know it, now all I want to do is use my typewriter.
Reading this beautiful book—nay, merely getting a few pages in—inspired me to uncover the IBM Selectric I that I inherited from my grandma when she moved into a different place and get the ink flowing again. Despite the incessant hum that accompanies electrics, I love the whole process of using it, and the basic thrill of having a piece of paper stamped with the words of my doing without the overlording influence of the Internet and that blasted distraction machine we call a laptop. I can’t wait to write more on it, and to retrieve the other typewriters from my parents’ storage and see if they can’t be brought back to life and service.
Usually when we see a typewriter in action these days, it’s at the hands of a young Occupy Wherever libertine or an elderly, quite possibly curmudgeonly, traditionalist: people who don’t accede, intentionally or otherwise, to the Information Regime (as Polt’s Typewriter Insurgency Manifesto calls it). My chief association with them was my grandma’s missives on birthday and Christmas cards, discussing the weather and congratulating me on recent academic achievements. “Take care and keep in touch,” they would always end. Perhaps she was on to something. Taking care of ourselves and our instruments, keeping in touch with them and each other; these are the principles inherent in the Manifesto, which affirms “the real over representation, the physical over digital, the durable over the unsustainable, the self-sufficient over the efficient.”
It’s easy and tempting to scoff at these “insurgents” for not giving in to the Regime, or for doing it so ostentatiously, until you actually consider why typewriters remain useful tools and toys. The possibility that I might find some practical application for these not-dead-yet mechanical wonders, and do so without ostentation, thrills me. Here’s to the ongoing Revolution.
Like If The Moon Were Only 1 Pixel, a “tediously accurate scale model of the solar system” that, as you scroll horizontally, reveals the vast span of our neighborhood:
Or Why Time Flies, a philosophical exploration of our fungible awareness of time:
Or The Scale of the Universe (my favorite), which, as you zoom in and out, shows the comparative sizes of all creation, from the largest supercluster to the smallest neutrino (notice how everything at some point is the same size):
Or Lightyear.fm, a “journey through space, time & music” that plays songs of the past according to how far their waves have traveled from Earth since they were released:
Or The Deep Sea, made by Neal Agarwal, which shows as you scroll down the creatures (and shipwrecks) that live at different depths of the ocean. Spoiler alert: the ocean is very deep.
In a sloppy but understandable attempt at satire, Justine Sacco tweeted: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” Then she got on a long plane ride to South Africa. During the flight her tweet went viral, enraging the easily enraged bastions of social media and catapulting the hashtag #HasJustineLandedYet around the world. When she landed in Johannesburg she was out of a job and in the throes of a scorching, unmerciful online public shaming.
I was on Twitter the day #HasJustineLandedYet was in progress. When I figured out what it was about, I probably chuckled, thought “Sucks to be her…” and clicked elsewhere. But Justine, freshly captive prey of the collective shaming committee that is the Internet, wasn’t allowed to move on. The invisible, crushing weight of public opinion had pinned her to her momentary mistake. Jon Ronson interviewed her about this experience for his new book So You’ve Been Publicly Shamed, an eye-opening panorama of the dark, menacing, deceptively fleeting phenomenon of online shaming. His dissection of these digital witch hunts led him on a listening tour of other recent victims like Jonah Lehrer, Lindsay Stone, and Adria Richards, who were, months or years after their respective ordeals, still haunted by a modern twist on PTSD. Call it Post Traumatic Shaming Disorder.
Before her Twitstorm, Sacco was director of communications at IAC, the parent company of OkCupid, whose co-founder Christian Rudder recently published another fascinating book, Dataclysm: Who We Are (When We Think No One’s Looking). I read Dataclysm right after So You’ve Been Publicly Shamed, which was fortuitous not only because both books feature the Justine Sacco saga, but because Rudder’s deep dive into the data about our online selves—dating site profiles and otherwise—weaved perfectly with Ronson’s closely observed stories of public shaming. And the joint conclusion we can make doesn’t look good.
The “when we think no one’s looking” part of Rudder’s title is key here. Dataclysm focuses on OkCupid users, but he might as well be writing about Us. “So much of what makes the Internet useful for communication—asynchrony, anonymity, escapism, a lack of central authority—also makes it frightening,” he writes. Nearly everything we do online we do when no one’s looking. Even if a real name and picture is attached to a Twitter profile viciously trolling the Justine Saccos of the web, the ramifications are few. Kill that account, another will pop up.
The really interesting stuff, then, is what lies beneath the cultivated online personas, the stuff we don’t have incentive to lie about or craft for a particular purpose. What if your Google searches were made public? (Because they basically are.) Our searches would paint a much finer (though not prettier) portrait of ourselves than our Facebook posts, try as we might to convince ourselves otherwise.
Compared to Facebook, Rudder writes, which is “compulsively networked” and rich with interconnected data, dating sites like OkCupid pull people away from their inner circle and into an intentional solitude: “Your experience is just you and the people you choose to be with; and what you do is secret. Often the very fact that you have an account—let alone what you do with it—is unknown to your friends. So people can act on attitudes and desires relatively free from social pressure.”
OkCupid users are prompted to answer questions the site’s algorithms use to find other compatible users. The answers are confidential, so like Google searches they tell a more nuanced story about the user than whatever they write in their OkCupid self-summary. And yet there persists a wide discrepancy between what people say they believe—what they tell the algorithm—and how they actually behave on the site. The stats on who they chat with, for how long, and whether an in-person date occurs end up revealing more about a user’s preferences than their expressed beliefs.
Does the same apply to the hordes of people behind #HasJustineLandedYet? They might not be quite as evil and sadistic in real life as they seem online, but they can afford to play-act in whatever persona they’re cultivating because they’re protected by distance: abstractly, the virtual world being a different, cloudier dimension than the physical one; but also concretely, in that the odds of bumping into your shaming victim on the street are practically nil.
So You’ve Been Publicly Shamed and Dataclysm travel on the same track, but start out in opposite directions. Both concern themselves with the real-life implications of desire, how it’s wielded and to what end. Desiring companionship, love, or sex, OkCupid users seek opportunities to encounter whatever it is they’re looking for, personal fulfillment usually being the ultimate goal. Ronson’s case studies, heading the other way, illustrate the deviousness of desire—when on the road to euphoria we carelessly or even intentionally run down whoever gets in our way. “There is strength in collective guilt,” Rudder writes, “and guilt is diffused in the sharing. Extirpate the Other and make yourselves whole again.”
Yet neither book is as depressing as I’ve portrayed them. Dataclysm wades into a bevy of interesting data-driven topics, like the most common and uncommon words used in OkCupid profiles based on race and gender, how beauty gives people a quantifiable edge, and the emergence of digital communities. And Ronson’s journey leads to a host of stories, historical and contemporary, that lend depth and nuance to a social phenomenon desperately in need of them.
Above all, these books should make us think twice before hitting Send. “If you’re reading a popular science book about Big Data and all its portents,” writes Rudder, “rest assured the data in it is you.” Whether we’re chirping into a stupid hashtag or perusing profile pics in search of The One, someone is always watching.
To never confront the possibility of getting lost is to live in a state of perpetual dislocation. If you never have to worry about not knowing where you are, then you never have to know where you are. —Nicholas Carr, The Glass Cage
One time the internet went down at the library and it was like the Apocalypse. Patrons at the public computers and on their laptops saw their pages not loading and came to the desk to ask what was wrong. We told them the internet’s down, it’ll have to be restarted and it’ll be up again in a few minutes. Most were satisfied by this, if a bit peeved, but waited for order to be restored. But it was a serious outage and the wait was much longer than usual, so people at the computers just left. The lab, usually almost or completely full, was a ghost town.
Just as I was thinking (a bit judgmentally) how odd it was that people who temporarily didn’t have internet would just leave instead of using other parts of the library (like, you know, books ‘n’ stuff), I realized that the library catalog was down too. Without this mechanism that we use to search for items and get their call number for retrieval, I was librarianing in the dark. If someone came to the desk looking for a specific book, I had no way of (a) knowing if we had it and it was on the shelf, or (b) where it was among the thousands of books neatly lining the stacks before me. I knew generally where books on certain topics were—sports in the 790s, the 200s had religion, and so on—but without a specific call number I’d have to navigate the sea of spines book by book until by providence or luck I found the item within a reasonable amount of time.
The internet was restored, the computer lab filled again, and the catalog came back to life. No crises came to pass during this internet-less interlude, but I did wonder if I knew as much as I thought I did. Did I as a librarian rely too heavily on access to the online catalog to do my job? Without internet connectivity or hard-copy reference material, would we still be able to provide the information people ask for every day? Even looking up a phone number is much more easily done on Google than through a paper trail.
The times we’re not connected to the internet somehow are becoming less and less frequent, so existential crises like mine don’t have to last long. But the questions lingered as I read Nicholas Carr‘s new book, The Glass Cage: Automation and Us. It asks questions that apply not only to libraries but every facet of our lives: do humans rely too heavily on technology? And if so, what is that reliance doing to us? Well-versed in the effects of technology on human behavior, Carr, author of The Shallows and The Big Switch, posits that automated technology, though responsible for many improvements in industry, health care, transportation, and many other areas, can also degrade our natural skill-learning abilities and generate a false sense of security in technology that aims (yet often fails) to be perfect in an imperfect world.
Carr points to two phenomena that, taken separately or together, exacerbate our worst human tendencies and stunt the mental and physiological growth required for mastering complex tasks. Automation complacency, writes Carr, “takes hold when a computer lulls us into a false sense of security. We become so confident that the machine will work flawlessly, handling any challenge that may arise, that we allow our attention to drift.” Exhibit A: my opening library anecdote. Knowing that the online catalog will reliably provide the information I need when I ask for it, I’m much more liable not to retain useful knowledge, despite its usefulness and the pleasure I get from learning it.
The second phenomenon is automation bias, which occurs “when people give undue weight to the information coming through their monitors. Even when the information is wrong or misleading, they believe it. Their trust in the software becomes so strong that they ignore or discount other sources of information, including their own senses.” I’ve experienced this too. One time a patron asked for the phone number of a business; because I was dealing with multiple things at once, I provided the first number that came up on Google without confirming its validity through another source, like the business’s website or the Yellow Pages. Turns out that number was outdated and the search engine hadn’t indexed the new one yet. But because that approach had worked before with accurate numbers, I trusted Google in that moment, for expediency’s sake, when I should have been more discerning.
Whatever technological tools Carr cites—airplane autopilot, assembly-line manufacturing, GPS—the theme that emerges is that after a certain point, the more automated technology takes away from humans, the more we lose. This runs counter to the utopian belief that the iPhones and Google Glasses and self-driving cars of the world make our lives better by making them easier, that by ceding difficult tasks to machines we will be able to focus on more important things or use that extra time for leisure.
To some extent that is true, but there’s also a dark side to this bargain. By abdicating power over how we interact with the world, we stop being doers with agency over our skills and trades and become monitors of computer screens—supervisors of fast, mysterious, and smart machines that almost always seem to know more than us. This dynamic puts us at cross-purposes with the tools that should be working with us and for us, not in our stead. Humans’ greatest ability, writes Carr, is not to cull large amounts of data and make sense of complex patterns: “It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge.”
Automated tools like GPS, which we rely upon and follow without much question, take away that innate ability of sense-making and even dampen our desire to make our own observations based on first-hand experience. I should replace “we” with “I” here, because I struggle greatly with navigation and love being able to know where I am and get to where I’m going. But navigation is more than following the blue line directly from point A to point B as if A and B are the only data points that matter. The point of navigation is the map itself, the ability to make assessments based on acquired knowledge and turn that knowledge into informed action. When a computer does all that for us in a microsecond, then what’s the point of knowing anything?
Ominous implications like this are the star of The Glass Cage, which casts a discerning eye on the assumptions, implicit and explicit, that govern our relationship with technology. It’s a relationship that can be fruitful and healthy for everyone involved, but it also needs some work. Thankfully, Nicholas Carr has done the work for us in The Glass Cage. All we have to do is sit back and receive this knowledge.
Comello: First off, how has the response been to the book? Any surprising reactions from critics or regular readers?
Graedon: I think that the biggest surprise has been the seeming divisiveness of the language in my book. On the one hand, some readers have felt shut out when they’ve encountered words that they don’t know or can’t understand. That blindsided me more than it should have. When I remember back to the first time that I tried to read Nabokov, for instance, I’m sure that one of my impulses was to throw the book across the room—there were at least half a dozen unfamiliar words per page on average, and it made me crazy. (I’m of course not comparing my work with his in any way, except to say that I should have been more empathetic to the experience of reader estrangement.)
But for me, anyway, something shifted. I stopped being frustrated, and started wanting to look up those words, to know and own them, in a deep sense. And when I went back to Nabokov as an adult, the experience was very different, for all sorts of reasons. For one thing, I’ve come to a place where I don’t mind if what I’m reading asks something of me, whether it’s to look up a word that I don’t know, or to be comfortable with ambiguity, or to imagine something I don’t want to imagine. I find reading to be a more transformational experience if I’m involved in it, actively participating and thinking and engaging.
I guess for that reason, I also don’t mind asking something of readers. And it’s been very gratifying that there have been readers, on the other hand, who’ve responded to the book maybe especially because of the language. Not just the unusualness of some of it, but also because of what it’s doing. Because of course one of the messages that I was hoping to convey is that language is so central to our humanity, and that to lose it or not be able to understand it can in fact be very alienating. That before we relinquish control over things that are so fundamental to who we are—yielding all sorts of functionalities to devices and machines—we should give a lot of thought to what it might mean for us to give them up.
I think that a lot of readers have responded to that message, buried in the book’s language, and I have found that to be unbelievably thrilling, and very humbling.
That’s interesting because the challenge of the words themselves was something I discussed in my first reaction to The Word Exchange, how for me the joy of reading and learning is discovering new things that stretch my understanding. I appreciate when writers (or filmmakers or musicians) don’t make things easy. I think we all need and enjoy escapist entertainment from time to time, but not at the expense of richer and deeper engagement with ideas in every artistic form.
I wonder if you’ve read anything by Nicholas Carr, who writes mostly about technology, culture, and the implications of a digital world. Recently on his blog he’s been focusing on automation and the almost sinister implications it has for human intelligence and creativity. (His next book, in fact, is called The Glass Cage: Automation and Us.) Carr recently posted about a promotional email he got from Apple with the headline “Create a work of art, without the work.” In your book that seems to encapsulate the underlying mission of Synchronic, whose (very) smartphone-like devices have taken over wide control of their users’ lives, including how they look up words they don’t know.
This quote from the book stood out to me: “It’s comforting to believe that consigning small decisions to a device frees up our brains for more important things. But that begs the question, which things have been deemed more important? And what does our purportedly decluttered mind now allow us to do? Express ourselves? Concentrate? Think? Or have we simply carved out more time for entertainment? Anxiety? Dread?”
So how do you, as someone who wants to actively engage with things, manage what you automate in your own life and what you let into your brain? Are there certain things you still rigidly refuse to let a computer or other technology do?
I remember your response well! I think you were the first person to point out that reading The Word Exchange in some ways duplicates the experience of being one of the characters in the book (at least if you’re downloading the definitions of unknown words as you go). In other words, that form and content have the possibility of meeting, in a certain sense.
And while I’m quite intrigued by Nicholas Carr’s work—as I mentioned in my essay-length acknowledgments at the back of the book, several of his premises from The Shallows have made their way into The Word Exchange, largely via Doug—I wasn’t aware of his blog or his newest book, and I’m utterly grateful to you for pointing me in their direction.
I do find automation to be a slightly insidious force. While it’s easy to rationalize—I might tell myself, for instance, that only “mindless” tasks can be automated, nothing truly “authentic”—I also find that the more things I consign to my various omnipresent devices, the more alienated I wind up feeling. I want to remember important dates on my own, not find myself so cut off from the passing of time that I need a machine to remind me of them. I want to be able to navigate my own way through unknown cities, not depend on Siri to pleasantly read the directions to various destinations to me. And while it’s utterly convenient to have all my bills paid automatically, I’m almost a little nostalgic for a time when paying for things required some degree of conscious engagement.
That said, I’m as big an automation culprit as anyone at this point. I relinquished the keys to my bank account more than a year ago, finally setting up automatic bill payment, and breathing a deep sigh of relief over the minutes and maybe hours of my life saved from mindless check-writing. I have a terrible sense of direction, so I depend on Siri and Google to get me just about everywhere. I program dozens of reminders for myself into my iPhone.
But there are a lot of things that I still do analog. Write to-do lists. Keep a calendar. Edit drafts of things that I’m writing. I still mail lots of longhand letters, too. The truth is, I just think better on paper. And when I don’t automate, I also feel more aware of what’s happening in my life—of the passing of time—too. That’s something that I want to be aware of.
I think that electronic devices have introduced the idea of a false infinity and limitlessness into our lives: the cloud that exists everywhere and nowhere; the digital reader that’s a slightly warped manifestation of a Borgesian infinite library; and with automation (and the relentless schedules many of us now face), the idea of an endless day, which of course is a total fallacy, and one that can often make us feel not only depleted but depraved, pitching ever more quickly toward the grave without much awareness of its quickened arrival.
The phrase “wood and glue” pops up periodically in the book as an incantation for times of adversity. Was there a time in the six years of writing the book when you felt things were falling apart and had to MacGyver the story back into order?
The phrase “wood and glue” actually didn’t come in until near the end—final edits. But absolutely, there was much MacGyvering along the way. I did my very best to try to keep the writing process from taking as long as it eventually took. Before I began, I plotted heavily, did elaborate Excel spreadsheets, and tried to nail down the structure very carefully. And I only gave myself about 6 or 7 months to write the first draft, keeping to a strict schedule, finishing chapters every other week or so.
What that meant, of course, was that my first draft was absolutely terrible. I didn’t realize just how terrible it was right away. I gave it to a couple of trusted writer friends, and they were incredibly kind and gentle in their critiques. But in what they did and didn’t say, I realized that the book was a mess.
I took maybe six months off from working on it (that wasn’t hard, or even entirely intentional—I’d just started a new job), and then, when I looked at it again, I was a little shocked at its badness. I found myself gutting whole sections. Rewriting entire chapters. Tearing the spreadsheet into tiny pieces. The second draft of the book barely resembled the first. Although there were also things that I was afraid to let go, for better or worse: the beginning, some aspects of the structure (the 26 chapters, the alternating points of view, etc.). I was afraid that if I tore up all the stakes, the entire thing would just float away.
I got more feedback from different friends on that draft, and then I did one more big draft before sending it out to any agents to consider. Needless to say, I did more revising after signing with one of them, and yet far more after starting to work with an editor.
It’s funny. After laboring over the book (in obscurity) for years, really taking my time to get things the way I wanted them to be, having realized with the first draft that breakneck speed didn’t work well for me, I did the most frenetic revising in the final months, under tremendous pressure, changing the book drastically at the eleventh hour. One of the tiniest changes to come out of that period was the inclusion of the phrase “wood and glue.” It never occurred to me before you asked just now, but I don’t think it’s a coincidence that this splint metaphor came in during those whirlwind months of mercenary final rewriting. I sort of felt like I could have used a whole-body (and whole-psyche) splint at the time.
As Anana’s journey to find Doug becomes more perilous and labyrinthine, the importance of “safe places” becomes more evident, whether it’s the library or within the secretive confines of the Diachronic Society. Where do you feel most safe and at peace? And is that the same place where your best ideas and writing come?
There are certainly places where I feel more calm and at peace. One of them, in fact, is the Mercantile Library on East 47th Street, where the members of the Diachronic Society meet. Like Anana, I spent a couple of years living in Hell’s Kitchen, one of the most frenetic neighborhoods in this frenetic city. I was so grateful to be introduced to the Mercantile Library during that time by someone who knows and understands me well. We’d often walk there together on weekends, braving the diamond district and Fifth Avenue, sometimes weaving our way through the craziness of Times Square. And stepping through the doors into the quiet, cool, calm of the library really felt like entering a holy place, whose sacred practices were reading, writing, and thinking. (It also felt a little like leaving the riotous bazaar of the internet to enter the relative stillness of a book.) I did a lot of the early research and thinking about The Word Exchange at the Merc.
I also draw a lot of solace from an acknowledged holy place, the Brooklyn Friends Meeting. To the degree that I had any sort of religious upbringing, my background is Quaker, which perhaps explains the complex treatment of silence in the book, as something that can either telegraph death or save you. (I think that as a kid attending Quaker meeting with my parents, I sometimes wondered if an hour of silence could actually bore me to death. As an adult, I feel very differently about that same quiet hour.)
And I’m unbelievably fortunate, too, to live in an apartment that I love, with incredible landlords who are also my downstairs neighbors, and where I get most of my thinking and writing done.
But the truth, New Age-y as it sounds, is that I actually don’t think of safe places as physical places, for the most part, but as habits of mind. As I mentioned earlier, I wrote and revised the book in lots of different places. The first draft largely in Asheville, NC, the second in Brooklyn, and the third sort of everywhere. During 2012, the longest period of time that I spent in any given place was four weeks, and I was often in places for much briefer periods than that—just a few days, sometimes. I was in Vermont, Mexico, Wyoming, New Hampshire, Virginia, North Carolina, Cape Cod, and then back to New Hampshire, with interstitial periods sleeping on couches in Brooklyn, before finally moving back to Brooklyn in the fall, where I stayed in a series of different apartments before finally finding the one that I live in now, which I moved into in early 2013.
The reason that I was able to work so steadily and with such focus during that peripatetic year wasn’t because of the places (though many of them were lovely), but because I was carrying the focus and the peace with me. There have been other times when I’ve had everything that I need, including quiet, calm places in which to work, but feel unable to get anything done, because I don’t feel quiet in my own mind.
I think, though, that part of this sense of mobile safety derives for me from the fact that I’m a runner, like lots of other writers. I’m not particularly fast, but like Murakami, I love to run long distances. Not as long as the distances that he routinely runs. But I am happy running 18, 19, 20, even 30 miles, which I’ll often do even when I’m not training for something. I like running in large part not because it helps me actively think about what I’m writing, but because it helps me not to think, or to think in a way that is very indirect and subconscious. When you run for several hours in a row, your muscles and other systems need so much energy that less of it is going to your brain than normal, which makes long-distance running a lot like moving meditation. When I’m running is when I feel most at peace. It also helps me feel safe. I go running in virtually every new place that I visit, and it’s a pretty remarkable way to get to know and feel comfortable in a new environment.
Silence is just as needed in literature as in real life, so I appreciate your approach to it. And the Quaker influence on the book, subconscious or otherwise, is very interesting: I wonder how many of us, or those in the world of The Word Exchange, could withstand an hour of silent contemplation before the “check your phone!” alert chimes in our heads. Practicing presence—that is, choosing to simply live in the present moment instead of trying to document it or escape it—is a challenge I’ve set for myself that I strive daily to meet more often than not. It’s not, as you say, some New Age-y self-help mantra, but a simple and practical challenge that can transform our everyday if we let it.
Your love of long-distance running is something I’ve yet to enjoy myself, but I find it interesting with respect to Anana’s journey throughout the book. On her quest for truth she seems to always be on the move, with periods of solitude and contemplation between legs of the journey. At this point in your own journey, what are you running toward? Are you on the way from A to B, or are you focusing more on what’s in between?
What an incredible question. And I think that you’ve pointed something out to me about my book that I’d never really noticed before. There’s definitely a lot of me in that process you describe, of running, running, running, interspersed with periods of quiet contemplation. I’ve been told I’m a bit of a paradox, and certainly in that way.
I don’t know that I’m running toward or away from anything at this point. I think that I felt a lot of urgency to finish this first novel for a lot of reasons. Most of them practical: for one, reality kept catching up with all of the “futuristic” elements. I had a feeling that if I didn’t hurry up, everyone would start walking around with word flu, and then I’d have to try to recast the thing as nonfiction.
There were also material concerns. Unlike Anana, I don’t have any wealthy relatives, and when I left my lucrative nonprofit job (ha) to try to finish the book with residencies at a few artist colonies, I took a really big risk. I got to the point where I had to either finish the book very quickly, praying that someone might want to publish it, or else it would have taken quite a few more years, because I was almost completely out of money, and I would have needed to find a new job, new apartment, start over, as I’ve done a few times, and I know how very difficult it is to get any writing done during start-over times.
I also really hoped that I might be able to make a go of a writing life, whatever that means, and I couldn’t really imagine abandoning this book after so much time and sweat and life, but I also knew that in order to move forward and do anything else, I needed to get a book out into the world. So by the end of the process, my life had narrowed down to this really fine point: this book, and more or less only this book.
If anything, that’s the way in which I want my life to change now. I don’t think of being at A or B, and I don’t feel like I’m running anymore, but my hope is that writing never really forces (or enables) the same kind of solipsistic existence that I inhabited for a couple of years. I’m ready for my life to be much less about me and my work. I’ve been striving toward more of a balance lately, but I have a ways to go still.
I hope that risk has paid off for you. If anything, I think the book has added to the conversation about how we live with technology, and will give readers (especially those with a penchant for the dystopian) a glimpse at the implications of an unexamined life. Thanks so much for taking the time to talk with me.
Thank you so much, Chad! It’s been a real pleasure exchanging words with you.
If a library doesn’t have books, does it cease to be a library? The coming of BiblioTech, a new Apple Store computer lab bookless library in San Antonio, the first in the nation, raises the question. It has also brought with it rhetorical musings on whether the future of libraries is already here, and whether the end of those pesky paper books is finally nigh.
Disclaimer: I love technology (sometimes too much) and I’m a library school graduate hoping to work in a library/archive, so I’m far from being a fist-shaking, card-catalog-carrying Luddite librarian. But I have grown healthily skeptical of new technologies that come with fantastical declarations of What It All Means. If we’ve learned anything from the very company that BiblioTech models itself after, it’s that the newest available product is the greatest creation in the history of the world — until the slightly altered updated version comes out a few months later. “New device desirable, old device undesirable.”
So when Mary Graham, vice president of South Carolina’s Charleston Metro Chamber of Commerce, looks at BiblioTech and says, “I told our people that you need to take a look at this. This is the future. … If you’re going to be building new library facilities, this is what you need to be doing,” I can’t help but wonder whether the Charleston Metro Chamber of Commerce and influential civic bodies like it around the country will be building the next truly revolutionary and innovative development in the library & information world, or the next Segway.
What makes me so hesitant to hop on every wave supposedly headed to the bright, beautiful future — waves like the all-digital library or Google Glass or flying cars (there’s still time for that one, DeLorean) — is the air of inevitability usually attached to them. This is the future, so there’s no sense in resisting it. Given historical precedent I understand the reasoning for that argument, but that doesn’t make it any more justified.
Michael Sacasas dubbed this inevitability argument the Borg Complex, a way of thinking “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.” Some symptoms of the Borg Complex, according to Sacasas:
Makes grandiose, but unsupported claims for technology
Pays lip service to, but ultimately dismisses genuine concerns
Equates resistance or caution to reactionary nostalgia
Announces the bleak future for those who refuse to assimilate
Nicholas Carr, one of my favorite tech writers, quotes Google chairman Eric Schmidt talking about the company’s Glass product: “Our goal is to make the world better. We’ll take the criticism along the way, but criticisms are inevitably from people who are afraid of change or who have not figured out that there will be an adaptation of society to it.” Don’t even try to resist Glass or any new technology, Earthling, for resistance is foolish.
Perhaps BiblioTech (and Google Glass for that matter) is the future. Perhaps physical books are indeed becoming glorified kindling. I highly doubt that, even setting aside my own predilection for them. But I don’t know the future. Our world is becoming more digitized, and libraries in the aggregate are reflecting that reality. Whether we become the wholly digital society BiblioTech is modeling (expecting?) remains to be seen. I’d love to check out the place in person one day, if only to back up my snap judgments with first-hand knowledge. Until then, I’ll be satisfied with libraries that are truly bibliotechy, achieving a healthy balance of physical and digital resources that honor the past, present, and future.
Is the Internet making us smarter or stupider? It’s a question Q the Podcast recently tackled in a lively and in-depth debate between lots of smart and interesting people. There is enough evidence to support both sides of the debate. But what I concluded after listening to the show was that for all of the doomsday talk about the technologies and processes that have become embedded in our digitized culture within the last decade or so, how we use the Internet is ultimately not up to the Internet.
No matter how enticing the apps and social networks we frequent, how addicting the silly games we enjoy, or how efficient the tools we use, there is still a human being making decisions in front of a screen. So while I certainly sympathize with those who profess addiction (willing or otherwise) to tweeting or checking Facebook, I remind everyone who uses technology of Uncle Ben’s famous maxim: “With great power comes great responsibility.”
We as autonomous, advanced-brain human beings have the power to do or not to do things. It’s a great power to have, but it also requires perseverance. The allure of instant gratification the usual Internet suspects provide won’t be defeated easily. It takes a willpower heretofore unknown to modern peoples. It takes resolve to fight temptation that is equal to or greater than the temptation itself.
Do you have what it takes? Do I? Eh, it’s day to day.
But flipping this entire argument on its head is Nicholas Carr’s recent article in The Atlantic called “All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines,” which delves into the burgeoning world of automation. He writes about how we’ve become increasingly reliant on computers to perform more elaborate and complicated tasks that had previously been done by humans. The benefit of this is that we’re able to get tasks done quicker and more efficiently. The downside is that some human services are no longer required, which means the skills needed to perform those services are eroding.
Carr uses the example of airplane pilots, who have been increasingly relegated to monitoring digital screens (the “glass cockpit”) as the computers do the heavy lifting and only sometimes hand over the plane’s reins. While the usefulness of autopilot is obvious, when computers take away control of the primary functions of flying they are also taking away the neurological and physiological skills pilots have honed over years of flying.
This is a problem, says Carr, because “knowing demands doing”:
One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are.
Computer automation, he says, disconnects the ends from the means and thereby makes getting what we want easier without having to do the work of knowing. This just about nails social media, doesn’t it? It’s so easy to get what we want these days that the work we used to have to do is no longer required of us. To research a paper in college, one had to go to the physical library, pull out a physical book, and transcribe quotes by hand; now a quick Google search and copy-paste will get that done in a jiff (or is it GIF?).
This isn’t a bad thing. I’m thankful that many tasks take eons less time than they used to. (I mean, typewriters are cool, but they’re not very amenable to formatting or mistakes.) My point is it’s important to understand how and why we use technology the way we do, and to acknowledge that we have agency over that use. To disregard that agency is to refuse to accept responsibility for our own power. And we know what happens then.