How Tweet It Is

At the beginning of December I had my wife change my Twitter password so I couldn’t access it. I’ve learned that I’m a cold turkey guy. Maybe I have some elements of an addictive personality, because I can’t use things like social media, which act as mini dopamine triggers, in moderation. I’m either on them every day, usually several times, or I deactivate the account and pretend they don’t exist for a time in order to unclog my mental plumbing.

I really like Twitter. It’s nice to communicate occasionally with people I admire, get the latest on the things I enjoy, and above all share the things I’m proud of or interested in. I don’t have to deal with the spam and garbage trolls that celebrities and well-known figures endure, so it’s generally a pleasant experience.

I just sought it out too much. This sabbatical forces me to live without it for a time—to rewire my brain to not think in tweets, seek validation in retweets and likes, and be proud of how clever I am.

It feels good. I’m not rushing back.

The Typewriter Revolution

I discovered, located at my local library, checked out, and read Richard Polt’s The Typewriter Revolution within about two days. And wouldn’t you know it, now all I want to do is use my typewriter.

Reading this beautiful book—nay, merely getting a few pages in—inspired me to uncover the IBM Selectric I that I inherited from my grandma when she moved into a different place and get the ink flowing again. Despite the incessant hum that accompanies electrics, I love the whole process of using it, and the basic thrill of having a piece of paper stamped with the words of my doing without the overlording influence of the Internet and that blasted distraction machine we call a laptop. I can’t wait to write more on it, and to retrieve the other typewriters from my parents’ storage and see if they can’t be brought back to life and service.

Usually when we see a typewriter in action these days, it’s at the hands of a young Occupy Wherever libertine or an elderly, quite possibly curmudgeonly, traditionalist: people who don’t accede, intentionally or otherwise, to the Information Regime (as Polt’s Typewriter Insurgency Manifesto calls it). My chief association with typewriters was my grandma’s missives on birthday and Christmas cards, discussing the weather and congratulating me on recent academic achievements. “Take care and keep in touch,” they would always end. Perhaps she was on to something. Taking care of ourselves and our instruments, keeping in touch with them and each other; these are the principles inherent in the Manifesto, which affirms “the real over representation, the physical over digital, the durable over the unsustainable, the self-sufficient over the efficient.”

It’s easy and tempting to scoff at these “insurgents” for not giving in to the Regime, or for doing it so ostentatiously, until you actually consider why typewriters remain useful tools and toys. The possibility that I might find some practical application for these not-dead-yet mechanical wonders, and do so without ostentation, thrills me. Here’s to the ongoing Revolution.

How to Feel Small

I like things that make me feel small.

Like If The Moon Were Only 1 Pixel, a “tediously accurate scale model of the solar system” that, as you scroll horizontally, reveals the vast span of our neighborhood.

Or Why Time Flies, a philosophical exploration of our fungible awareness of time.

Or The Scale of the Universe (my favorite), which, as you zoom in and out, shows the comparative sizes of all creation, from the largest supercluster to the smallest neutrino (notice how everything at some point is the same size).

Or Lightyear.fm, a “journey through space, time & music” that plays songs of the past according to how far their waves have traveled from Earth since they were released.

Or The Deep Sea, made by Neal Agarwal, which, as you scroll down, shows the creatures (and shipwrecks) found at different depths of the ocean. Spoiler alert: the ocean is very deep.

Make Yourselves Whole Again: On ‘Dataclysm’ and ‘So You’ve Been Publicly Shamed’

In a sloppy but understandable attempt at satire, Justine Sacco tweeted: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” Then she got on a long plane ride to South Africa. During the flight her tweet went viral, enraging the easily enraged denizens of social media and catapulting the hashtag #HasJustineLandedYet around the world. When she landed in Johannesburg she was out of a job and in the throes of a scorching, unmerciful online public shaming.

I was on Twitter the day #HasJustineLandedYet was in progress. When I figured out what it was about, I probably chuckled, thought “Sucks to be her…” and clicked elsewhere. But Justine, freshly captive prey of the collective shaming committee that is the Internet, wasn’t allowed to move on. The invisible, crushing weight of public opinion had pinned her to her momentary mistake. Jon Ronson interviewed her about this experience for his new book So You’ve Been Publicly Shamed, an eye-opening panorama of the dark, menacing, deceptively fleeting phenomenon of online shaming. His dissection of these digital witch hunts led him on a listening tour of other recent victims like Jonah Lehrer, Lindsey Stone, and Adria Richards, who were, months or years after their respective ordeals, still haunted by a modern twist on PTSD. Call it Post Traumatic Shaming Disorder.

Before her Twitstorm, Sacco was director of communications at IAC, the parent company of OkCupid, whose co-founder Christian Rudder recently released another fascinating book, Dataclysm: Who We Are (When We Think No One’s Looking). I read Dataclysm right after So You’ve Been Publicly Shamed, which was fortuitous not only because both books feature the Justine Sacco saga, but because Rudder’s deep dive into the data about our online selves—dating site profiles and otherwise—wove perfectly with Ronson’s closely observed stories of public shaming. And the joint conclusion they point to doesn’t look good.

The “when we think no one’s looking” part of Rudder’s title is key here. Dataclysm focuses on OkCupid users, but he might as well be writing about Us. “So much of what makes the Internet useful for communication—asynchrony, anonymity, escapism, a lack of central authority—also makes it frightening,” he writes. Nearly everything we do online we do when no one’s looking. Even if a real name and picture are attached to a Twitter profile viciously trolling the Justine Saccos of the web, the ramifications are few. Kill that account, another will pop up.

The really interesting stuff, then, is what lies beneath the cultivated online personas, the stuff we don’t have incentive to lie about or craft for a particular purpose. What if your Google searches were made public? (Because they basically are.) Our searches would paint a much finer (though not prettier) portrait of ourselves than our Facebook posts, try as we might to convince ourselves otherwise.

Compared to Facebook, Rudder writes, which is “compulsively networked” and rich with interconnected data, dating sites like OkCupid pull people away from their inner circle and into an intentional solitude: “Your experience is just you and the people you choose to be with; and what you do is secret. Often the very fact that you have an account—let alone what you do with it—is unknown to your friends. So people can act on attitudes and desires relatively free from social pressure.”

OkCupid users are prompted to answer questions the site’s algorithms use to find other compatible users. The answers are confidential, so like Google searches they tell a more nuanced story about the user than whatever they write in their OkCupid self-summary. And yet there persists a wide discrepancy between what people say they believe—what they tell the algorithm—and how they actually behave on the site. The stats on who they chat with, for how long, and whether an in-person date occurs end up revealing more about a user’s preferences than their expressed beliefs.
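Rudder makes that case with massive datasets, but the shape of the comparison is simple: line up what a user told the algorithm against what their activity log shows. Here is a toy sketch of stated versus revealed preferences (my illustration, not Rudder’s, with invented data and hypothetical field names):

    # Compare a stated preference against behavior revealed in chat logs.
    # All data and names here are invented for illustration.
    stated_max_distance_miles = 10

    chats = [
        {"partner_distance_miles": 35, "messages": 12},
        {"partner_distance_miles": 42, "messages": 8},
        {"partner_distance_miles": 6, "messages": 2},
    ]

    # Weight each chat by how much conversation actually happened.
    total = sum(c["messages"] for c in chats)
    revealed = sum(c["partner_distance_miles"] * c["messages"] for c in chats) / total

    print(f"Stated limit: {stated_max_distance_miles} mi; revealed average: {revealed:.0f} mi")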

Does the same apply to the hordes of people behind #HasJustineLandedYet? They might not be quite as evil and sadistic in real life as they seem online, but they can afford to play-act in whatever persona they’re cultivating because they’re protected by distance: abstractly, the virtual world being a different, cloudier dimension than the physical one; but also concretely, in that the odds of bumping into your shaming victim on the street are practically nil.

So You’ve Been Publicly Shamed and Dataclysm travel on the same track, but start out in opposite directions. Both concern themselves with the real-life implications of desire, how it’s wielded and to what end. Desiring companionship, love, or sex, OkCupid users seek opportunities to encounter whatever it is they’re looking for, personal fulfillment usually being the ultimate goal. Ronson’s case studies, heading the other way, illustrate the deviousness of desire—when on the road to euphoria we carelessly or even intentionally run down whoever gets in our way. “There is strength in collective guilt,” Rudder writes, “and guilt is diffused in the sharing. Extirpate the Other and make yourselves whole again.”

Yet neither book is as depressing as I’ve portrayed them. Dataclysm wades into a bevy of interesting data-driven topics, like the most common and uncommon words used in OkCupid profiles based on race and gender, how beauty gives people a quantifiable edge, and the emergence of digital communities. And Ronson’s journey leads to a host of stories, historical and contemporary, that lend depth and nuance to a social phenomenon desperately in need of them.

Above all, these books should make us think twice before hitting Send. “If you’re reading a popular science book about Big Data and all its portents,” writes Rudder, “rest assured the data in it is you.” Whether we’re chirping into a stupid hashtag or perusing profile pics in search of The One, someone is always watching.

The Glass Cage

To never confront the possibility of getting lost is to live in a state of perpetual dislocation. If you never have to worry about not knowing where you are, then you never have to know where you are. —Nicholas Carr, The Glass Cage

One time the internet went down at the library and it was like the Apocalypse. Patrons at the public computers and on their laptops saw their pages not loading and came to the desk to ask what was wrong. We told them the internet was down, that it would have to be restarted, and that it would be up again in a few minutes. Most were satisfied by this, if a bit peeved, and waited for order to be restored. But it was a serious outage and the wait was much longer than usual, so people at the computers just left. The lab, usually almost or completely full, was a ghost town.

Just as I was thinking (a bit judgmentally) how odd it was that people who temporarily didn’t have internet would just leave instead of using other parts of the library (like, you know, books ‘n’ stuff), I realized that the library catalog was down too. Without this mechanism that we use to search for items and get their call number for retrieval, I was librarianing in the dark. If someone came to the desk looking for a specific book, I had no way of knowing (a) whether we had it and it was on the shelf, or (b) where it was among the thousands of books neatly lining the stacks before me. I knew generally where books on certain topics were—sports in the 790s, religion in the 200s, and so on—but without a specific call number I’d have to navigate the sea of spines book by book until by providence or luck I found the item within a reasonable amount of time.

The internet was restored, the computer lab filled again, and the catalog came back to life. No crises came to pass during this internet-less interlude, but I did wonder if I knew as much as I thought I did. Did I as a librarian rely too heavily on access to the online catalog to do my job? Without internet connectivity or hard-copy reference material, would we still be able to provide the information people ask for every day? Even looking up a phone number is much more easily done on Google than through a paper trail.

The times we’re not connected to the internet somehow are becoming less and less frequent, so existential crises like mine don’t have to last long. But the questions lingered as I read Nicholas Carr’s new book, The Glass Cage: Automation and Us. It asks questions that apply not only to libraries but to every facet of our lives: do humans rely too heavily on technology? And if so, what is that reliance doing to us? Well-versed in the effects of technology on human behavior, Carr, author of The Shallows and The Big Switch, posits that automated technology, though responsible for improvements in industry, health care, transportation, and many other areas, can also degrade our natural skill-learning abilities and generate a false sense of security in technology that aims (yet often fails) to be perfect in an imperfect world.

Carr points to two phenomena that, taken separately or together, exacerbate our worst human tendencies and stunt the mental and physiological growth required for mastering complex tasks. Automation complacency, writes Carr, “takes hold when a computer lulls us into a false sense of security. We become so confident that the machine will work flawlessly, handling any challenge that may arise, that we allow our attention to drift.” Exhibit A: my opening library anecdote. Knowing that the online catalog will reliably provide the information I need when I ask for it, I’m far less likely to retain useful knowledge, despite the usefulness of retaining it and the pleasure I get from learning it.

The second phenomenon is automation bias, which occurs “when people give undue weight to the information coming through their monitors. Even when the information is wrong or misleading, they believe it. Their trust in the software becomes so strong that they ignore or discount other sources of information, including their own senses.” I’ve experienced this too. One time a patron asked for the phone number of a business; because I was dealing with multiple things at once, I provided the first number that came up on Google without confirming its validity through another source, like the business’s website or the Yellow Pages. Turns out that number was outdated and the search engine hadn’t indexed the new one yet. Because I’d done the same before with accurate results, I trusted Google in that moment for expediency’s sake when I should have been more discerning.

Whichever technological tools Carr cites—airplane autopilot, assembly-line manufacturing, GPS—the theme that emerges is that past a certain point, the more automated technology takes away from humans, the more we lose. This runs counter to the utopian belief that the iPhones and Google Glasses and self-driving cars of the world make our lives better by making them easier, that by ceding difficult tasks to machines we will be able to focus on more important things or use that extra time for leisure.

To some extent that is true, but there’s also a dark side to this bargain. By abdicating power over how we interact with the world, we stop being doers with agency over our skills and trades and become monitors of computer screens—supervisors of fast, mysterious, and smart machines that almost always seem to know more than us. This dynamic puts us at cross-purposes with the tools that should be working with us and for us, not in our stead. Humans’ greatest ability, writes Carr, is not to cull large amounts of data and make sense of complex patterns: “It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge.”

Automated tools like GPS, which we rely upon and follow without much question, take away that innate ability of sense-making and even dampen our desire to make our own observations based on first-hand experience. I should replace “we” with “I” here, because I struggle greatly with navigation and love being able to know where I am and get to where I’m going. But navigation is more than following the blue line directly from point A to point B as if A and B are the only data points that matter. The point of navigation is the map itself, the ability to make assessments based on acquired knowledge and turn that knowledge into informed action. When a computer does all that for us in a microsecond, then what’s the point of knowing anything?

Ominous implications like this are the star of The Glass Cage, which casts a discerning eye on the assumptions, implicit and explicit, that govern our relationship with technology. It’s a relationship that can be fruitful and healthy for everyone involved, but it also needs some work. Thankfully, Nicholas Carr has done the work for us in The Glass Cage. All we have to do is sit back and receive this knowledge.

Wait…

An Exchange of Words with Alena Graedon

Author Alena Graedon.

Alena Graedon, author of The Word Exchange, was kind enough to chat about the process of bringing her first novel to life and where she finds solitude in a noisy world. (This interview appears in the second issue of the Simba Life Quarterly.)

Comello: First off, how has the response been to the book? Any surprising reactions from critics or regular readers?

Graedon: I think that the biggest surprise has been the seeming divisiveness of the language in my book. On the one hand, some readers have felt shut out when they’ve encountered words that they don’t know or can’t understand. That blindsided me more than it should have. When I remember back to the first time that I tried to read Nabokov, for instance, I’m sure that one of my impulses was to throw the book across the room—there were at least half a dozen unfamiliar words per page on average, and it made me crazy. (I’m of course not comparing my work with his in any way, except to say that I should have been more empathetic to the experience of reader estrangement.)

But for me, anyway, something shifted. I stopped being frustrated, and started wanting to look up those words, to know and own them, in a deep sense. And when I went back to Nabokov as an adult, the experience was very different, for all sorts of reasons. For one thing, I’ve come to a place where I don’t mind if what I’m reading asks something of me, whether it’s to look up a word that I don’t know, or to be comfortable with ambiguity, or to imagine something I don’t want to imagine. I find reading to be a more transformational experience if I’m involved in it, actively participating and thinking and engaging.

I guess for that reason, I also don’t mind asking something of readers. And it’s been very gratifying that there have been readers, on the other hand, who’ve responded to the book maybe especially because of the language. Not just the unusualness of some of it, but also because of what it’s doing. Because of course one of the messages that I was hoping to convey is that language is so central to our humanity, and that to lose it or not be able to understand it can in fact be very alienating. That before we relinquish control over things that are so fundamental to who we are—yielding all sorts of functionalities to devices and machines—we should give a lot of thought to what it might mean for us to give them up.

I think that a lot of readers have responded to that message, buried in the book’s language, and I have found that to be unbelievably thrilling, and very humbling.

That’s interesting because the challenge of the words themselves was something I discussed in my first reaction to The Word Exchange, how for me the joy of reading and learning is discovering new things that stretch my understanding. I appreciate when writers (or filmmakers or musicians) don’t make things easy. I think we all need and enjoy escapist entertainment from time to time, but not at the expense of richer and deeper engagement with ideas in every artistic form.

I wonder if you’ve read anything by Nicholas Carr, who writes mostly about technology, culture, and the implications of a digital world. Recently on his blog he’s been focusing on automation and the almost sinister implications it has for human intelligence and creativity. (His next book, in fact, is called The Glass Cage: Automation and Us.) Carr recently posted about a promotional email he got from Apple with the headline “Create a work of art, without the work.” In your book that seems to encapsulate the underlying mission of Synchronic, whose (very) smartphone-like devices have taken over wide control of their users’ lives, including how they look up words they don’t know.

This quote from the book stood out to me: “It’s comforting to believe that consigning small decisions to a device frees up our brains for more important things. But that begs the question, which things have been deemed more important? And what does our purportedly decluttered mind now allow us to do? Express ourselves? Concentrate? Think? Or have we simply carved out more time for entertainment? Anxiety? Dread?”

So how do you, as someone who wants to actively engage with things, manage what you automate in your own life and what you let into your brain? Are there certain things you still rigidly refuse to let a computer or other technology do?

I remember your response well! I think you were the first person to point out that reading The Word Exchange in some ways duplicates the experience of being one of the characters in the book (at least if you’re downloading the definitions of unknown words as you go). In other words, that form and content have the possibility of meeting, in a certain sense.

And while I’m quite intrigued by Nicholas Carr’s work—as I mentioned in my essay-length acknowledgments at the back of the book, several of his premises from The Shallows have made their way into The Word Exchange, largely via Doug—I wasn’t aware of his blog or his newest book, and I’m utterly grateful to you for pointing me in their direction.

I do find automation to be a slightly insidious force. While it’s easy to rationalize—I might tell myself, for instance, that only “mindless” tasks can be automated, nothing truly “authentic”—I also find that the more things I consign to my various omnipresent devices, the more alienated I wind up feeling. I want to remember important dates on my own, not find myself so cut off from the passing of time that I need a machine to remind me of them. I want to be able to navigate my own way through unknown cities, not depend on Siri to pleasantly read the directions to various destinations to me. And while it’s utterly convenient to have all my bills paid automatically, I’m almost a little nostalgic for a time when paying for things required some degree of conscious engagement.

That said, I’m as big an automation culprit as anyone at this point. I relinquished the keys to my bank account more than a year ago, finally setting up automatic bill payment, and breathing a deep sigh of relief over the minutes and maybe hours of my life saved from mindless check-writing. I have a terrible sense of direction, so I depend on Siri and Google to get me just about everywhere. I program dozens of reminders for myself into my iPhone.

But there are a lot of things that I still do analog. Write to-do lists. Keep a calendar. Edit drafts of things that I’m writing. I still mail lots of longhand letters, too. The truth is, I just think better on paper. And when I don’t automate, I also feel more aware of what’s happening in my life—of the passing of time—too. That’s something that I want to be aware of.

I think that electronic devices have introduced the idea of a false infinity and limitlessness into our lives: the cloud that exists everywhere and nowhere; the digital reader that’s a slightly warped manifestation of a Borgesian infinite library; and with automation (and the relentless schedules many of us now face), the idea of an endless day, which of course is a total fallacy, and one that can often make us feel not only depleted but depraved, pitching ever more quickly toward the grave without much awareness of its quickened arrival.

The phrase “wood and glue” pops up periodically in the book as an incantation for times of adversity. Was there a time in the six years of writing the book when you felt things were falling apart and had to MacGyver the story back into order?

The phrase “wood and glue” actually didn’t come in until near the end—final edits. But absolutely, there was much MacGyvering along the way. I did my very best to try to keep the writing process from taking as long as it eventually took. Before I began, I plotted heavily, did elaborate Excel spreadsheets, and tried to nail down the structure very carefully. And I only gave myself about 6 or 7 months to write the first draft, keeping to a strict schedule, finishing chapters every other week or so.

What that meant, of course, was that my first draft was absolutely terrible. I didn’t realize just how terrible it was right away. I gave it to a couple of trusted writer friends, and they were incredibly kind and gentle in their critiques. But in what they did and didn’t say, I realized that the book was a mess.

I took maybe six months off from working on it (that wasn’t hard, or even entirely intentional—I’d just started a new job), and then, when I looked at it again, I was a little shocked at its badness. I found myself gutting whole sections. Rewriting entire chapters. Tearing the spreadsheet into tiny pieces. The second draft of the book barely resembled the first. Although there were also things that I was afraid to let go, for better or worse: the beginning, some aspects of the structure (the 26 chapters, the alternating points of view, etc.). I was afraid that if I tore up all the stakes, the entire thing would just float away.

I got more feedback from different friends on that draft, and then I did one more big draft before sending it out to any agents to consider. Needless to say, I did more revising after signing with one of them, and yet far more after starting to work with an editor.

It’s funny. After laboring over the book (in obscurity) for years, really taking my time to get things the way I wanted them to be, having realized with the first draft that breakneck speed didn’t work well for me, I did the most frenetic revising in the final months, under tremendous pressure, changing the book drastically at the eleventh hour. One of the tiniest changes to come out of that period was the inclusion of the phrase “wood and glue.” It never occurred to me before you asked just now, but I don’t think it’s a coincidence that this splint metaphor came in during those whirlwind months of mercenary final rewriting. I sort of felt like I could have used a whole-body (and whole-psyche) splint at the time.

As Anana’s journey to find Doug becomes more perilous and labyrinthine, the importance of “safe places” becomes more evident, whether it’s the library or within the secretive confines of the Diachronic Society. Where do you feel most safe and at peace? And is that the same place where your best ideas and writing come?

There are certainly places where I feel more calm and at peace. One of them, in fact, is the Mercantile Library on East 47th Street, where the members of the Diachronic Society meet. Like Anana, I spent a couple of years living in Hell’s Kitchen, one of the most frenetic neighborhoods in this frenetic city. I was so grateful to be introduced to the Mercantile Library during that time by someone who knows and understands me well. We’d often walk there together on weekends, braving the diamond district and Fifth Avenue, sometimes weaving our way through the craziness of Times Square. And stepping through the doors into the quiet, cool calm of the library really felt like entering a holy place, whose sacred practices were reading, writing, and thinking. (It also felt a little like leaving the riotous bazaar of the internet to enter the relative stillness of a book.) I did a lot of the early research and thinking about The Word Exchange at the Merc.

I also draw a lot of solace from an acknowledged holy place, the Brooklyn Friends Meeting. To the degree that I had any sort of religious upbringing, my background is Quaker, which perhaps explains the complex treatment of silence in the book, as something that can either telegraph death or save you. (I think that as a kid attending Quaker meeting with my parents, I sometimes wondered if an hour of silence could actually bore me to death. As an adult, I feel very differently about that same quiet hour.)

And I’m unbelievably fortunate, too, to live in an apartment that I love, with incredible landlords who are also my downstairs neighbors, and where I get most of my thinking and writing done.

But the truth, New Age-y as it sounds, is that I actually don’t think of safe places as physical places, for the most part, but as habits of mind. As I mentioned earlier, I wrote and revised the book in lots of different places. The first draft largely in Asheville, NC, the second in Brooklyn, and the third sort of everywhere. During 2012, the longest period of time that I spent in any given place was four weeks, and I was often in places for much briefer periods than that—just a few days, sometimes. I was in Vermont, Mexico, Wyoming, New Hampshire, Virginia, North Carolina, Cape Cod, and then back to New Hampshire, with interstitial periods sleeping on couches in Brooklyn, before finally moving back to Brooklyn in the fall, where I stayed in a series of different apartments before finally finding the one that I live in now, which I moved into in early 2013.

The reason that I was able to work so steadily and with such focus during that peripatetic year wasn’t because of the places (though many of them were lovely), but because I was carrying the focus and the peace with me. There have been other times when I’ve had everything that I need, including quiet, calm places in which to work, but feel unable to get anything done, because I don’t feel quiet in my own mind.

I think, though, that part of this sense of mobile safety derives for me from the fact that I’m a runner, like lots of other writers. I’m not particularly fast, but like Murakami, I love to run long distances. Not as long as the distances that he routinely runs. But I am happy running 18, 19, 20, even 30 miles, which I’ll often do even when I’m not training for something. I like running in large part not because it helps me actively think about what I’m writing, but because it helps me not to think, or to think in a way that is very indirect and subconscious. When you run for several hours in a row, your muscles and other systems need so much energy that less of it is going to your brain than normal, which makes long-distance running a lot like moving meditation. When I’m running is when I feel most at peace. It also helps me feel safe. I go running in virtually every new place that I visit, and it’s a pretty remarkable way to get to know and feel comfortable in a new environment.

Silence is just as needed in literature as in real life, so I appreciate your approach to it. And the Quaker influence on the book, subconscious or otherwise, is very interesting: I wonder how many of us, or those in the world of The Word Exchange, could withstand an hour of silent contemplation before the “check your phone!” alert chimes in our heads. Practicing presence—that is, choosing to simply live in the present moment instead of trying to document it or escape it—is a challenge I’ve set for myself that I strive daily to meet more often than not. It’s not, as you say, some New Age-y self-help mantra, but a simple and practical challenge that can transform our everyday if we let it.

Long-distance running is something I’ve yet to enjoy myself, but I find your love of it interesting with respect to Anana’s journey throughout the book. On her quest for truth she seems to always be on the move, with periods of solitude and contemplation between legs of the journey. At this point in your own journey, what are you running toward? Are you on the way from A to B, or are you focusing more on what’s in between?

What an incredible question. And I think that you’ve pointed something out to me about my book that I’d never really noticed before. There’s definitely a lot of me in that process you describe, of running, running, running, interspersed with periods of quiet contemplation. I’ve been told I’m a bit of a paradox, and certainly in that way.

I don’t know that I’m running toward or away from anything at this point. I think that I felt a lot of urgency to finish this first novel for a lot of reasons. Most of them practical: for one, reality kept catching up with all of the “futuristic” elements. I had a feeling that if I didn’t hurry up, everyone would start walking around with word flu, and then I’d have to try to recast the thing as nonfiction.

There were also material concerns. Unlike Anana, I don’t have any wealthy relatives, and when I left my lucrative nonprofit job (ha) to try to finish the book with residencies at a few artist colonies, I took a really big risk. I got to the point where I had to either finish the book very quickly, praying that someone might want to publish it, or else it would have taken quite a few more years, because I was almost completely out of money, and I would have needed to find a new job, new apartment, start over, as I’ve done a few times, and I know how very difficult it is to get any writing done during start-over times.

I also really hoped that I might be able to make a go of a writing life, whatever that means, and I couldn’t really imagine abandoning this book after so much time and sweat and life, but I also knew that in order to move forward and do anything else, I needed to get a book out into the world. So by the end of the process, my life had narrowed down to this really fine point: this book, and more or less only this book.

If anything, that’s the way in which I want my life to change now. I don’t think of being at A or B, and I don’t feel like I’m running anymore, but my hope is that writing never really forces (or enables) the same kind of solipsistic existence that I inhabited for a couple of years. I’m ready for my life to be much less about me and my work. I’ve been striving toward more of a balance lately, but I have a ways to go still.

I hope that risk has paid off for you. If anything, I think the book has added to the conversation about how we live with technology, and will give readers (especially those with a penchant for the dystopian) a glimpse at the implications of an unexamined life. Thanks so much for taking the time to talk with me.

Thank you so much, Chad! It’s been a real pleasure exchanging words with you.

iLibrary: Resistance Is Futile

If a library doesn’t have books, does it cease to be a library? The coming of BiblioTech, a new bookless library in San Antonio (think Apple Store meets computer lab) and the first in the nation, raises the question. It has also brought with it rhetorical musings on whether the future of libraries is already here, and whether the end of those pesky paper books is finally nigh.

Disclaimer: I love technology (sometimes too much) and I’m a library school graduate hoping to work in a library/archive, so I’m far from being a fist-shaking, card-catalog-carrying Luddite librarian. But I have grown healthily skeptical of new technologies that come with fantastical declarations of What It All Means. If we’ve learned anything from the very company that BiblioTech models itself after, it’s that the newest available product is the greatest creation in the history of the world — until the slightly altered, updated version comes along a few months later. “New device desirable, old device undesirable.”

So when Mary Graham, vice president of South Carolina’s Charleston Metro Chamber of Commerce, looks at BiblioTech and says, “I told our people that you need to take a look at this. This is the future. … If you’re going to be building new library facilities, this is what you need to be doing,” I can’t help but wonder whether the Charleston Metro Chamber of Commerce and influential civic bodies like it around the country will be building the next truly revolutionary and innovative development in the library & information world, or the next Segway.

What makes me so hesitant to hop on every wave supposedly headed to the bright, beautiful future — waves like the all-digital library or Google Glass or flying cars (there’s still time for that one, DeLorean) — is the air of inevitability usually attached to them. This is the future, so there’s no sense in resisting it. Given historical precedent I understand the reasoning for that argument, but that doesn’t make it any more justified.

Michael Sacasas dubbed this inevitability argument the Borg Complex, a way of thinking “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.” Some symptoms of the Borg Complex, according to Sacasas:

  • Makes grandiose, but unsupported claims for technology
  • Pays lip service to, but ultimately dismisses genuine concerns
  • Equates resistance or caution to reactionary nostalgia
  • Announces the bleak future for those who refuse to assimilate

Nicholas Carr, one of my favorite tech writers, quotes Google chairman Eric Schmidt talking about the company’s Glass product: “Our goal is to make the world better. We’ll take the criticism along the way, but criticisms are inevitably from people who are afraid of change or who have not figured out that there will be an adaptation of society to it.” Don’t even try to resist Glass or any new technology, Earthling, for resistance is foolish.

Perhaps BiblioTech (and Google Glass for that matter) is the future. Perhaps physical books are indeed becoming glorified kindling. I highly doubt that, even setting aside my own predilection for them. But I don’t know the future. Our world is becoming more digitized, and libraries in the aggregate are reflecting that reality. Whether we become the wholly digital society BiblioTech is modeling (expecting?) remains to be seen. I’d love to check out the place in person one day, if only to back up my snap judgments with first-hand knowledge. Until then, I’ll be satisfied with libraries that are truly bibliotechy, achieving a healthy balance of physical and digital resources that honor the past, present, and future.

(Image via BiblioTech)

The Glass Cockpit

Is the Internet making us smarter or stupider? It’s a question Q the Podcast recently tackled in a lively and in-depth debate among lots of smart and interesting people. There is enough evidence to support both sides of the debate. But what I concluded after listening to the show was that for all of the doomsday talk about the technologies and processes that have become embedded in our digitized culture within the last decade or so, how we use the Internet is ultimately not up to the Internet.

No matter how enticing the apps and social networks we frequent, how addicting the silly games we enjoy, or how efficient the tools we use, there is still a human being making decisions in front of a screen. So while I certainly sympathize with those who profess addiction (willing or otherwise) to tweeting or checking Facebook, I remind everyone using technology of any kind of Uncle Ben’s famous maxim: “With great power comes great responsibility.”

We as autonomous, advanced-brain human beings have the power to do or not to do things. It’s a great power to have, but it also requires perseverance. The allure of instant gratification the usual Internet suspects provide won’t be defeated easily. It takes a willpower heretofore unknown to modern peoples. It takes resolve to fight temptation that is equal to or greater than the temptation itself.

Do you have what it takes? Do I? Eh, it’s day to day.

But flipping this entire argument on its head is Nicholas Carr’s recent article in The Atlantic called “All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines,” which delves into the burgeoning world of automation. He writes about how we’ve become increasingly reliant on computers to perform more elaborate and complicated tasks that had previously been done by humans. The benefit is that we’re able to get tasks done more quickly and efficiently. The downside is that some human services are no longer required, which means the skills needed to perform those services are eroding.

Carr uses the example of airplane pilots, who have been increasingly relegated to monitoring digital screens (the “glass cockpit”) as the computers do the heavy lifting and the pilots only sometimes take the plane’s reins. While the usefulness of autopilot is obvious, when computers take away control of the primary functions of flying they are also taking away the neurological and physiological skills pilots have honed over years of flying.

This is a problem, says Carr, because “knowing demands doing”:

One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are.

Computer automation, he says, disconnects the ends from the means and thereby makes getting what we want easier without having to do the work of knowing. This just about nails social media, doesn’t it? It’s so easy to get what we want these days that the work we used to have to do is no longer required of us. To research a paper in college, one had to go to the physical library and pull out a physical book and transcribe quotes by hand; now a quick Google search and copy-paste will get that done in a jiff (or is it GIF?).

This isn’t a bad thing. I’m thankful that many tasks take eons less time than they used to. (I mean, typewriters are cool, but they’re not very amenable to formatting or mistakes.) My point is it’s important to understand how and why we use technology the way we do, and to acknowledge that we have agency over that use. To disregard that agency is to refuse to accept responsibility for our own power. And we know what happens then.

Bad Tesseractors

Remember in The Avengers when it was revealed that Selvig, a scientist Loki brainwashed to do his bidding, had programmed a failsafe measure into the device he had created to harness the power of the Tesseract, and that failsafe was the villain Loki’s own scepter? Imagine that scenario with the good and evil dynamic reversed and you’ve got a pretty good idea of the new revelation, courtesy of ProPublica and The New York Times, that the NSA has been circumventing many of the encryption and security tools put in place to protect online communications from prying eyes.

NSA agent.

For this disclosure we can thank former NSA contractor and current Russian tourist Edward Snowden, whose data dump contained documents that uncovered the NSA’s secret efforts to undermine Internet security systems using code-breaking supercomputers and collaboration with tech companies to hack into user computers before their files could be encrypted.

The most nefarious aspect of this revelation, however, is the NSA’s attempt to “introduce weaknesses” into encryption standards used by developers, which would allow the agency to hack into computers more easily. So now, not only has the NSA flouted basic civil rights and U.S. law, it’s simply playing by its own rules. It couldn’t win the right to insert a “back door” into encryption standards in its 1990s court battles, so it gave the middle finger to the law and tried again anyway, but this time in secret. It’s a betrayal of the social contract the Internet was founded on, says engineer Bruce Schneier, and one that needs to be challenged by engineers who can design a more secure Internet and set up proper governance and strong oversight.

The worst part of all this is that there’s probably some twisted legal justification for this somewhere. Starting in Bush’s administration and continuing into Obama’s, the dark world of “homeland security” has received both tacit and explicit approval from the executive, legislative, and judicial branches for its increasingly Orwellian surveillance techniques — all in the name of “national security.” I’m sure there’s a lot of good being done behind the scenes at the NSA, CIA, and other clandestine organizations, but really, who are we kidding?

A Ship-Shape Ticker

John Harrison in a 1767 portrait.

If I could bring Google Maps back to early eighteenth-century Britain, I’d be a millionaire. See, figuring out a ship’s longitudinal coordinates was a huge problem back then. So much so that the British Parliament offered a prize of what amounts to $2.2 million in today’s dollars to anyone who could produce a practical method for pinpointing a ship’s location.

Latitude was pretty easy: all you needed was the sun and some charted data. But longitude had theretofore only been discernible by sheer instinct and guesswork, which often led to ships crashing into unforeseen hazards and hundreds of casualties. Even renowned navigators armed with a compass (still unreliable at the time) had to basically hope they weren’t going the opposite way or that the ship didn’t run aground.

That’s where John Harrison came in. Dava Sobel’s Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time tells the story of this lovably odd son of a carpenter with no formal scientific training who created a revolutionary maritime clock. Previous ship clocks couldn’t keep time in bad weather, but Harrison’s was self-stabilizing and self-lubricating so that it wouldn’t wear down and wouldn’t be affected by the briny sea air and turbulent waters.
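What makes a reliable clock the answer is simple arithmetic: the Earth rotates 360 degrees every 24 hours, so each hour of difference between local sun time and the time kept at a reference port works out to 15 degrees of longitude. Here’s a minimal sketch of that conversion (my illustration, not anything from Sobel’s book):

    # The Earth turns 360 degrees in 24 hours: 15 degrees per hour.
    DEGREES_PER_HOUR = 360 / 24

    def longitude_from_clocks(local_solar_hour, chronometer_hour):
        """Degrees west of the reference port, estimated by comparing
        local sun time with the home-port time kept by the chronometer."""
        return (chronometer_hour - local_solar_hour) * DEGREES_PER_HOUR

    # At local solar noon the chronometer, still on Greenwich time,
    # reads 15:00 -- the ship is roughly 45 degrees west of Greenwich.
    print(longitude_from_clocks(12, 15))  # 45.0

Keep a clock accurate at sea, in other words, and the longitude problem reduces to reading two dials. That was the whole bet Harrison made.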

Harrison responded to Parliament’s challenge for a longitudinal tool, but unlike other people with crackpot submissions, he wasn’t in it for the money. He was like the Nikola Tesla of maritime horology: eccentric, hermetic, obsessive, but in it only for the joy of the scientific challenge itself. And like Tesla with Thomas Edison, Harrison had a natural antagonist in Nevil Maskelyne, a royal astronomer appointed to Parliament’s “Board of Longitude,” which controlled the terms of the prize money. Maskelyne had his heart set on the lunar distance method, which involved gauging the moon’s distance from another star to calculate the local time, and gave Harrison all kinds of politically motivated headaches along the way in order to advance the lunar method. Harrison’s son even had to resort to writing King George III (the King George) to get some help moving the intransigent Board along. Turns out the young monarch was a science geek himself and gladly helped the Harrisons out (just as he was levying heavy taxes on an increasingly disgruntled colonial America).

Overall, Sobel’s book, though heavily biased toward Harrison, is an accessible, breezy account of his engineering process, the radical innovations he made in every version of his “chronometer,” and the obstacles he had to surmount to achieve recognition from a skeptical scientific community. Take some time to read it.

The Millennials Will Be All Right

I finally read Joel Stein’s Time magazine piece on the Millennial Generation, called “The Me Me Me Generation.” For the record, unlike some of my Millennial cohorts I hate “selfies” (the term and the thing it describes), I don’t feel entitled to a great job right out of school, and I don’t sleep next to my phone. But I don’t think the article deserved all of the antipathy it received from the blogosphere. I thought it was a fair if slightly fogeyish and surface-level assessment of overall generational characteristics. The problems my generation struggles with — like narcissism and a sense of entitlement — are so noticeable largely because of the times we live in, with everything more public and social technology more widespread. You don’t think the Baby Boomers would have peppered Instagram with pictures from Woodstock? Or that Gen-Xers would have had entire Spotify playlists dedicated to their collection of sad and angsty ballads? The manifestations of narcissism by young people today merely betray the human condition that plagues all humankind: We’re selfish creatures, no matter how old we are or how many Twitter followers we have.

The combination of the influence of technology and how we collectively were reared — being told how special we were by over-protective helicopter parents — also contributes to how we are currently growing into adulthood. Generally speaking, we’re able to postpone full emergence into adulthood and still live with our parents because (a) we can and our parents don’t seem to mind (or at least don’t say so), and (b) the economy sucks and has changed so much that traditional jobs and careers aren’t as feasible anymore. The Boomers were anxious to get out of the house and their parents were eager for them to leave, so naturally the way things are done now clashes with the way of the past. Welcome to The Present Reality.

Having said that, we can’t abdicate responsibility for making choices about our lives. We don’t have to live with our parents or check Facebook ten times a day or start a YouTube channel to get famous, but we do anyway (well, not me, but the collective We certainly do). And that doesn’t just go for Millennials: Facebook usage is declining among younger people because their parents (Boomers! shakes fist) have slowly taken over. Magazine columnists can try to pin the narcissism epidemic on young people all they want, but when I go to restaurants nowadays I see just as many parents on their phones as younger people, if not more. We can’t simply blame the times and the technology for our behavior, because we’re human beings with the capacity to choose whether to succumb to societal forces or to instead carve our own path, peer pressure be damned.

I think we’ll be all right. Like generations before us, we have a great opportunity to make things better. That will involve some pushing back against the political and cultural acrimony that has characterized the Boomers’ ascendency and reign, but every generation has had to clean up the messes of its predecessors. We Millennials will inevitably make mistakes, and our kids will have been formed by them in some way, for better or for worse. Let’s just hope it’s for the better.

Data Dumped: On The Freedom Of Forgetting

Do we have the right to forget the past, and to be forgotten?

That’s the key question in this article from The Guardian by Kate Connolly, which is part of a larger series on internet privacy. Connolly talks with Viktor Mayer-Schönberger, professor of internet governance at the Oxford Internet Institute, who describes himself as the “midwife” of the idea that people have the legal, moral, and technological right to be forgotten, especially as it relates to the internet’s memory.

In order to make decisions about the present and the future, Mayer-Schönberger claims, our brain necessarily forgets things, which allows us to think in the present:

Our brains reconstruct the past based on our present values. Take the diary you wrote 15 years ago, and you see how your values have changed. There is a cognitive dissonance between now and then. The brain reconstructs the memory and deletes certain things. It is how we construct ourselves as human beings, rather than flagellating ourselves about things we’ve done.

But digital memories will only remind us of the failures of our past, so that we have no ability to forget or reconstruct our past. Knowledge is based on forgetting. If we want to abstract things we need to forget the details to be able to see the forest and not the trees. If you have digital memories, you can only see the trees.

One of his ideas to combat the negative effects of the permanence of data is to implement an “expiration date” for all data — akin to the “Use By” date on perishable food items — so that it can be deleted once it has served its primary purpose. “Otherwise companies and governments will hold on to it for ever,” he claims.

A counter-argument to this right-to-be-forgotten strategy is that it could be impossible to implement due to the many back-ups that are made of the same data; if the data exists somewhere, then you’re technically not forgotten. But Mayer-Schönberger pushes back on this, saying even if Google has a back-up somewhere, if you search for the data and “99% of the population don’t have access to it you have effectively been deleted.”

What’s unclear about his “expiration date” idea is whether it would include a self-destructing mechanism embedded within the data, like how e-books rented from libraries disappear after a predetermined time period, or whether the data’s user could choose to ignore its “Delete By” date. If the data holders are not legally or technologically compelled or obligated in some way to delete the data permanently after an agreed upon time, then this “right to be forgotten” becomes a lot weaker.
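It’s easy to imagine what the stronger version might look like in code: data that enforces its own “Delete By” date at the moment of access rather than trusting its holder’s goodwill. Here’s a minimal sketch of that idea (every name and detail is hypothetical, not a description of any real system):

    import time

    class ExpiringRecord:
        """A toy record with a built-in expiration date (hypothetical)."""

        def __init__(self, payload, ttl_seconds):
            self.payload = payload
            self.delete_by = time.time() + ttl_seconds  # the "Use By" date

        def read(self):
            # Enforce expiry at access time instead of trusting the data
            # holder to delete the record voluntarily.
            if time.time() > self.delete_by:
                self.payload = None  # self-destruct
                raise ValueError("record expired and has been deleted")
            return self.payload

The legal question is whether data holders would be obligated to adopt something like this; the technological question is whether copies made before the expiry could ever be policed.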

As an aspiring archivist, tech enthusiast, and history buff, I can see how something like this could be detrimental to historians, information managers, and cultural heritage caretakers. One of the Internet’s strengths is its ability to hold a vast amount of easily transmittable information, much more than any era before ours could, so to effectively neuter this ability would hinder present and future historians and archivists in their quest to accurately document the past.

A historian studying late-1700s American history has only select diaries, newspaper clippings, and other ephemera of deteriorating quality from which to cull contextual information and interpret that time period for modern audiences. Researchers studying the present day, however, have millions of gigabytes of data available to them on the Internet – way too much information for even the Internet Archive or Library of Congress to adequately archive, let alone make sense of.

But as an individual, having the ability to regain a modicum of control over one’s own data is very appealing. Anyone who has ever posted a photo on Facebook they later regretted, or sent an email they wish they hadn’t, or wrote an inflammatory blog post years ago could see great value in data that can be, if not irreparably extirpated, then at least banished from digital civilization. This may lead to a less-complete record of our existence, but given how much more data we’re producing overall today than ever before we will not lack for records anytime soon.

We should all, I believe, have the right to the digital equivalent of burning a letter we don’t want living on in perpetuity, even though this idea runs counter to the impulses of our over-sharing and hyper-connected world. It is also anathema in archives: just think of all the information in that letter we’ve lost forever! I hear you, imaginary archivist, but, to return to Mayer-Schönberger’s analogy, even if a forest loses a tree — from natural death or manmade causes — it will still be a forest. And as Theodore Roosevelt, a great man of nature and of letters, said, “There are no words that can tell the hidden spirit of the wilderness, that can reveal its mysteries, its melancholy and its charms.”

The Internet, like a forest, should allow for mystery. Otherwise, where’s the fun in the searching?

Science Blows My Mind

Like many English majors, I struggled with science and mathematics throughout my primary, secondary, and college education. I think it was geometry class sophomore year of high school where I hit a wall, and everything after that was a blur. Ditto with chemistry that year (what in the name of Walter White is a mole anyway?). But that didn’t hinder me from being wholly fascinated with science and nature, and more particularly with the people who know way more about those things than I do.

I just finished reading Jennifer Ouellette’s Black Bodies and Quantum Cats: Tales from the Annals of Physics, a collection of short essays on various topics within the world of physics. Ouellette, also a former English major and self-professed “physics phobe,” adapted the essays from her column in APS News, a monthly publication for members of the American Physical Society. She tackles scientific topics from the earliest and most fundamental – like da Vinci and the golden ratio, Galileo and the telescope – to more recent discoveries like X-rays, wireless radio, and thermodynamics.

True to her writing roots, Ouellette manages to take what can be very esoteric and labyrinthine scientific concepts and make them fascinating by linking them to things we regular people can understand: how Back to the Future explains Einstein’s theory of special relativity; Dr. Jekyll and Mr. Hyde representing the dual nature of light; induced nuclear fission as seen in Terminator 2: Judgment Day. These connections are lifesavers for right-brained humanities majors like me, who, instead of seeing “SCIENCE” blaring on the cover and fleeing, get to experience an “A-ha!” moment in nearly every chapter.

But here’s the thing: I love science. I don’t love it like a scientist does, by learning theories and experimenting. I don’t love it because I understand it – Lord knows that’s not the case. Rather, I love it because of what it does. I am consistently flabbergasted by what have become quotidian occurrences in our 21st-century lives. Telephone technology is so quaint these days, but the fact that I can pick up a small device, speak into it, and instantaneously be heard by someone thousands of miles away blows my mind. The fact that I can get inside a large container that will propel itself through the air and arrive at a destination relatively quickly blows my mind. The fact that we can send a small, man-made vehicle into outer space and have it land on another planet blows my freaking mind.

Science has improved our lives and advanced our knowledge of creation in a million ways. I’m simply grateful for the multitudes of geeks who have labored in that noble cause of discovery. Because of you, we have cell phones and airplanes and cameras and Velcro (did you know that term is a portmanteau of the French words velours [velvet] and crochet [hook]?) and Mars Curiosity and lasers (an acronym for Light Amplification by Stimulated Emission of Radiation) and automobiles and Xerox machines and countless other inventions, many of them engineered by the men and women Ouellette spotlights in her book.

And that’s just physics. Think about what we know of biology, chemistry, geology, astronomy, and every other subcategory of science. If my mind hadn’t already been blown, it would explode just thinking about what we know of our Earth and the things it contains, and what we have yet to discover.

Though our country is in turmoil, the Curiosity roves a distant planet. Though we often disagree about basic scientific principles, we still seek to discover. As Carl Sagan said: “For all our failings, despite our limitations and fallibilities, we humans are capable of greatness.” As a sci-curious liberal arts nerd, I can’t wait to see what else we can achieve.

Pruning the Rosebushes: What Not to Share

There’s a scene in Saving Private Ryan when Matt Damon’s Pvt. Ryan and Tom Hanks’ Capt. Miller sit and chat, waiting for the impending German offensive to hit their French town. Ryan’s three brothers have recently died, and he can’t remember their faces. The Captain tells him to think of a specific context, something they’d shared together. When the Captain thinks of home, he says, “I think of my hammock in the backyard or my wife pruning the rosebushes in a pair of my old work gloves.”

Ryan then tells the story of the brothers’ last night together before the war took them away, his enthusiasm growing as his face brightens with the look of recognition. After he finishes the story, he asks Captain Miller to tell him about his wife and the rosebushes. “No,” the Captain says. “That one I save just for me.”

In this, the Age of Oversharing, that is a refreshing if soon-to-be anachronistic sentiment. I’ll admit to feeling the ongoing urge to inform The World via Twitter of funny or interesting things that happen to me during the day, or to display my pithy wit with a topical one-liner. But lately I’ve been compelled by a new urge, one akin to Tom Hanks’ laconic Captain Miller’s, which tells me to think twice before sharing whatever it is I want to share with the world.

Perhaps this is due to my being an inherently reserved person, reluctant to simply give away every little thought that enters my brain. Some people, I fully realize, aren’t built this way; they want to share themselves and their lives entirely and get fulfillment out of this. That’s perfectly fine. But I like the idea of keeping some moments – the rosebush prunings of our lives – special, not posted on Twitter or Instagram or even a WordPress blog.

This requires a lot of discipline. Being hyperconnected to social networks makes sharing temptingly easy, so overcoming the desire to post a picture of a sunset you’re watching with a loved one is tough, especially when the urge to share has been ingrained and even encouraged by our plugged-in culture. But I think a special moment like that becomes a little less special when every one of your Facebook friends and their mother sees it too.

This notion runs counter to many of my identities. As an amateur techie, I marvel at the capabilities the Web can give ordinary people to express themselves and enhance their lives. As a history buff and librarian/archivist in training, I understand the value of information as the record of history and the zeitgeist of an era. And as a user of Twitter, Instagram, and WordPress, I’ve come to enjoy having easily accessible and usable media to help me share cool photos, links, and thoughts short (on Twitter) and long (on here) whenever and wherever I want.

In spite of all these conflicts of interest, I’m OK with, once in a while, letting moments and images and quotes pass by undocumented and unshared, if only so I can feel in that moment that I caught a glimpse, however fleeting, of something beautiful or inspiring or funny or tragic or all of the above, and that it’s all mine. The memory of that moment may die with me, but hey, that’s life. No matter how high the camera’s resolution or how eloquent the prose, these secondhand records will never be quite as pure as the real thing, the moments they seek to honor.

So here’s to, once in a while, living in the moment and only in the moment.

Steve Jobs Lives

This 1987 concept video from Apple predicts not only the iPad and all of its capabilities, but also Siri, the speech-activated personal assistant that, if Apple products spread the way they usually do, will be ubiquitous technology in a few years. (H/T to Andrew Sullivan for the video)

Andy Baio finds this amazing:

Based on the dates mentioned in the Knowledge Navigator video, it takes place on September 16, 2011. The date on the professor’s calendar is September 16, and he’s looking for a 2006 paper written “about five years ago,” setting the year as 2011. And this morning, at the iPhone keynote, Apple announced Siri, a natural language-based voice assistant, would be built into iOS 5 and a core part of the new iPhone 4S.

So, 24 years ago, Apple predicted a complex natural-language voice assistant built into a touchscreen Apple device, and was less than a month off.
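Baio’s math checks out. The keynote he mentions was Apple’s iPhone 4S event on October 4, 2011, so the prediction landed within weeks. A quick sanity check in Python, using the dates from his post:

```python
from datetime import date

video_date = date(2011, 9, 16)   # date implied in the Knowledge Navigator video
siri_debut = date(2011, 10, 4)   # iPhone 4S keynote where Siri was announced
print((siri_debut - video_date).days, "days off")  # 18 days: under a month
```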

I never had the emotional attachment to Steve Jobs that many others around the web have been describing, but I do use his products. The iPhone, the MacBook, and the iPod seem so ordinary now, but 24 years ago, who could have predicted how they would change the world? I suppose that’s the best compliment you can give a technology geek like Jobs – that what he did changed the world for the better.

Facebook vs. Wikileaks

What’s the difference between Facebook founder Mark Zuckerberg and Wikileaks founder Julian Assange? A recent Saturday Night Live skit with Bill Hader as Assange answered that question: “I give you private information on corporations for free and I’m a villain,” he says. “Mark Zuckerberg gives your private information to corporations for money and he’s Man of the Year.”

It seems backwards, right? In a perfect world, the release of free information about corporate malfeasance would be celebrated and the selling of private information for profit would be illegal, or at least frowned upon. But we don’t live in a perfect world. Instead, Assange gets arrested and Zuckerberg makes billions and is named Time magazine’s Person of the Year.

The U.S. government insists on secrecy. Every politician seems to campaign on bringing transparency to Washington and making the government more for, by, and of the people. Yet it never seems to work. So when someone like Assange comes along and pulls back the curtain on important areas of public interest, like the wars in Iraq and Afghanistan, the government goes code red.

Facebook is the opposite. No one is forced to reveal personal information; we do it willingly. And the company takes that information and uses it to sell advertising and make billions of dollars in profit. Zuckerberg believes in total openness—on Facebook and in the world as a whole—yet somehow I think he’d have a problem if Wikileaks revealed how Facebook was using people and their information to make a huge profit.

I’m not wholly anti-Facebook. I think it’s a great way to communicate and stay in touch with friends and family. And the way things are going, it looks like the site will be the Internet one day. But there’s something very unsettling about how disclosure through Facebook is encouraged yet through Wikileaks it’s demonized. And as long as institutions like Time continue to honor this dangerous dichotomy, things won’t change.

iPad? I think not

Originally published in the North Central Chronicle on April 16, 2010.

With all the near-orgasmic praise Apple’s iPad has received lately, I feel like I should want to get one. But I don’t.

Let’s be honest: it’s a cool toy. It does most of the things an iPod Touch or iPhone can do, but on a bigger, more vibrant LCD screen. It also does some of the things a laptop does, but in a more simplified and mobile way. Still, I don’t see the point of shelling out $500+ for a product fresh out of the factory just because Steve Jobs says it’s the future.

The Cult of Apple is a little too much for me right now. Sarah Palin whined a lot about the news media fawning over Barack Obama during the presidential campaign, but that was nothing compared to the reception Jobs’ Apple products get every time they are released into the world.

I understand brand loyalty, but some Apple fans get so wrapped up in their products it becomes hard to take their constant adulation seriously. While Apple’s products are often worthy of the praise they receive—it’s a sleek and dependable brand with great marketing—let’s not get carried away.

Jobs may be right: the handheld touchscreen technology the iPad embodies will probably become the standard for computing and communication one day. Like the iPhone and iPod before it, it will get better with every generation Apple releases. And more people will probably buy one once Apple’s competitors, like Google and Microsoft, release their own versions of the tablet computer.

But the iPad as it is now is not there yet. As the first generation of its kind, it’s going to receive some major upgrades in the next few years. Remember the first-generation iPod? At the time it was revolutionary, but now it’s laughably archaic.

The iPad, I suspect, will be similar. It’s cool now, but I’m going to let it cook a little longer before I buy what Steve Jobs is selling. Once tablet computers become a legitimate and irreplaceable technology, offered by more companies than just Apple, then it will be worth investing in one.

Until then, it’s still just a toy. A very expensive toy.

Net Neutrality

This video explains Net Neutrality way better than I ever could…

Created by Aaron Shekey of Apparently Nothing fame.