Mashups from the January 28, 2018 issue of New York Times Magazine. (See more magazine mashups.)
Matt Thomas, via Submitted For Your Perusal, spotlights an interesting contrast between two New York Times stories in the same week.
Exhibit #1, from a brief feature on Danielle Steel:
After all these years, Steel continues to use the same 1946 Olympia typewriter she bought used when working on her first book. “I am utterly, totally and faithfully in love with my typewriter,” she says. “I think I paid $20 for it. Excellent investment! And by now, we’re old friends.”
Exhibit #2, from John Herrman’s essay What I Learned from Watching My iPad’s Slow Death:
Above all, my old iPad has revealed itself as a cursed object of a modern sort. It wears out without wearing. It breaks down without breaking. And it will be left for dead before it dies.
A machine that’s over 70 years old (!) is still performing exactly as it did the year after World War II ended, and another machine that’s not even 7 years old is now a digital dotard. An iPad of course can do far more things than a typewriter. But if it can only do those things for the length of two presidential terms, tops, is it truly worth the investment?
My 1970 Hermes 3000 originally sold for $129.50, according to the sticker still on its body. That’s about $845 in 2017 dollars, which would get you an iPad Pro or basic laptop today. I bought it last year for $30 at an antique store. It’s in seemingly mint condition all these years later, and I can’t wait to see what words it will produce—from me and any future owners. If the iPad’s “slow death” takes place after only a few years, the death of this Hermes—perish the thought—will be downright glacial.
Yet what Herrman concludes about a tablet is also true of a typewriter: “It will still be a wonder of industrial design and a technological marvel, right up until the moment it is destroyed for scrap.”
Which machine’s scraps, however, can actually be turned into something beautiful? Advantage typewriters.
This, from Andy Weir in his By the Book column at the New York Times, seems like an odd thing to say:
For the record, my stories are meant to be purely escapist. They have no subtext or message. If you think you see something like that, it’s in your head, not mine. I just want you to read and have fun.
#1: It’s not odd for an author to want his books to be purely escapist and fun. It is odd to insist that they have no subtext or message, and further, that if readers detect those things they are wrong.
#2: Not all subtext is intentional and not all intended “messages” are received by the reader.
#3: Authorial intent dies once the book hits the shelves.
Obit is an eloquent, observant, and superbly crafted documentary by Vanessa Gould on the New York Times obituary writers and the people they cover.
One of the writers says writing obits isn’t sad because they are writing mostly about a person’s life rather than their death. I can see why that would be the case, but in spotlighting their subjects from over the years—including well-known ones like Philip Seymour Hoffman and Robin Williams and ones unknown to me like William Wilson and Elinor Smith—the film made me as a viewer grieve all over again. It felt a lot like a memorial service: celebratory, but with an undercurrent of grief. I think of the Japanese concept of mono no aware: the awareness of the transience of things. Or as Wikipedia puts it, “a transient gentle sadness (or wistfulness) at their passing as well as a longer, deeper gentle sadness about this state being the reality of life.”
But it’s the writers themselves who are the subjects of the film, and they are as articulate, quirky, and wry as you’d expect NYT veterans to be. Kudos to them for their work, which I ought to seek out more. The literal deadlines they face seem like a case of “take your time, hurry up.” One minute they could be working on an advance obit for someone who could die at any time (Jimmy Carter and George H.W. Bush come to mind), and the next minute word of Michael Jackson’s death arrives and they are 4 hours from print deadline. What a job!
There’s also Jeff Roth, the lone caretaker of the “morgue”, the Times’s underground archive of historical news clippings, photographs, and other material, all stored in rows and rows of filing cabinets and banker’s boxes. It’s a historian’s dream: oodles of material to look through, organized enough to be navigable but loosely enough for serendipity to strike. He and the morgue are probably a documentary in themselves.
Gould’s cameras eavesdrop among the warren of cubicles in the Obit section, with longer than expected takes just watching the writers type at their computers and capturing their asides and narrated thoughts about where they are in the process. The slick editing certainly has something to do with it, but it’s the rare instance of the writing process being just as interesting as the writing itself.
Obit pairs well with Life Itself, the documentary about Roger Ebert, which is itself a kind of advance obituary on Ebert. Through his writing Ebert captured the lives of those on screen with a combination of strength and tenderness. The writers in Obit aren’t nearly as famous as he was, but their work is just as salutary to the soul.
This New York Times story about all-male book clubs was not as inflammatory as I knew it would be made out to be in certain spheres. It turns out (wait for it…) some men are in book clubs just for men.
The reaction from one of the groups to the NYT story is worth reading for important context that didn’t get into the piece: that they do in fact read books by women, and that the group was started out of a desire to get back into reading after kids and life had intervened. The group’s mission: “to leave our day jobs behind, to find meaning and enjoyment in literature, and to know each other better in the process.” What a bunch of misogynist pigs!
They also correctly point out that people start exclusive book clubs around every conceivable theme and parameter. To prohibit men from this privilege out of some anti-patriarchy crusade would be misguided, obtuse, and contrary to the spirit of reading.
As a librarian, I cheer anyone who joins a book club at all, or even just starts reading for fun again. And since men participate in book clubs and discussions much less than women, civic groups like this one ought to be applauded, not snickered at. (Although, yes, the International Ultra Manly Book Club has a silly name and some cheeky masculine posturing in the article.)
The morals of the story: Read! For fun! At whim! And do whatever it takes to do so. I didn’t start reading for fun until right after college, when I realized I didn’t have to take notes or bullshit my way through a critical essay on the material anymore. I could just read what looked interesting. And I’ve been doing that ever since.
I’ve been a fan of A.O. Scott since his too-short time co-hosting At the Movies with Michael Phillips, which was my favorite post-Ebert iteration of the show. Their tenure was a salve after the brief and forgettable stint of Ben Lyons and Ben Mankiewicz. Phillips and Scott brought a benevolent wonkiness to the show I greatly enjoyed and mourned when it was axed.
So I was quite pleased to read A.O. Scott’s new book Better Living Through Criticism: How to Think about Art, Pleasure, Beauty, and Truth, which is not as self-helpy as it sounds, mercifully. In fact, it’s nearly the opposite of self-help, a genre hell-bent on offering surefire prescriptions for every psychological impediment blocking our true greatness within. Scott is far less strident. He avoids making grand declarations about The Purpose of Criticism, much to the chagrin of grand declarers. All the better. To me, criticism is not about conquering artistic foes or achieving certainty, but about making sense of what goes on inside our heads and hearts when we encounter something beautiful, pleasurable, or truthful — or all (or none) of the above.
The book ambles towards answers to the pointed questions I’m sure Scott receives often: What are critics for? Are critics relevant anymore? One purpose for critics he lands on is to be people “whose interest can help to activate the interest of others.” This is absolutely true, as is its inverse of steering others away. Many movies that I expected to be worthwhile ended up being duds, and the critical consensus that bubbled up before their opening weekends helped convince me to wait for the Redbox or to avoid them altogether.
Conversely, without Bilge Ebiri’s incessant cheerleading for The Lego Movie before it came out in early 2014, I would have assumed it was another cheap kids movie and not a hilarious and surprisingly profound meditation on creativity and identity. Ditto Brooklyn, which I expected to be another overwrought, Oscar-baity period drama but in fact nearly brought this non-crier-at-movies to tears. Critics matter, even when I disagree with them (cough Carol cough).
Scott also feels duty-bound as a critic “to redirect enthusiasm, to call attention to what might otherwise be ignored or undervalued. In either instance, though, whether we’re cheerleading or calling bullshit, our assessment has to proceed from a sincere and serious commitment.” The calling attention to is big: a recent example is last year’s Tangerine, a tiny indie I wouldn’t have given a chance without wide and persistent acclaim from the bevy of critics I admire and follow just so I can get scoops like that.
“Redirecting enthusiasm” might also be considered a challenge to “swim upstream”: to seek out the earlier, influential works that laid the groundwork for whatever we’re watching, listening to, reading now. American culture’s on-demand, presentist bias deprives us of decades of good art, whose only crime is not being made right this live-tweetable second. The critic who compares a new film to an older one, favorably or otherwise, provides context for readers but also a tacit clue that checking out that older film might be worthwhile. The upside of our appified age is that finding those forgotten gems has never been easier: getting upstream is as easy as visiting your local library, Amazon, or streaming service.
But what I consider the most compelling reason for the critic’s job might be their most self-interested one. Scott quotes the ever-quotable critic H.L. Mencken, who wrote the motive of the critic who is really worth reading is “no more and no less than the simple desire to function freely and beautifully, to give outward and objective form to ideas that bubble inwardly and have a fascinating lure in them, to get rid of them dramatically and make an articulate noise in the world.”
The process of making an articulate noise about something is the point, I think. It’s where a writer lives most of the time, engaging in a back-and-forth with the work and with himself until he lands on something approximating the truth of his experience. To that end, Scott writes, the history of criticism is the history of struggle. This book embodies that struggle literally: Scott engages in four interstitial dialogues, wherein he banters with an unnamed interlocutor (or inner critic?) who could also stand in as the aggrieved audience, demanding that Scott justify his existence.
I know this combat comes with the job, but the hostility critics in general receive baffles me. There’s way too much out there to see, read, and hear for one person to sort through. “This state of wondering paralysis cries out for criticism,” he writes, “which promises to sort through the glut, to assist in the formation of choices, to act as gatekeeper to our besieged sensoria.” Having professional curators with unique, informed, and enthusiastic taste is a good thing, not something to scoff at or claim is irrelevant in the age of Rotten Tomatoes.
But if you think a critic is wrong and want to tell him why, congratulations! You’re now a critic and are obligated to say more.
Anyway, good on Scott for driving this conversation, and for holding his ground against Samuel L. Jackson.
It is right and good that the New York Times chose, for the first time since 1920, to publish an editorial on Page 1. “End the Gun Epidemic in America” captures the zeitgeist well, at least that of reasonable human beings without a vested, monied interest in seeing the NRA-sponsored carnage continue.
“It is not necessary to debate the peculiar wording of the Second Amendment,” the editorial reads. “No right is unlimited and immune from reasonable regulation.” Indeed, it seems the only right in the Constitution that has found itself immune from debate is that of the Second Amendment. The beneficiary of a modern-day gag rule, wherein even researching the causes and effects of gun violence is outlawed, our supposed right as American citizens to own unlimited military-grade weaponry is considered as self-evident and God-blessed as our country itself.
We need a John Quincy Adams. An incorrigible ramrod of righteousness with nothing to lose. Smart enough to use the system to the cause’s favor and intractably annoying to its enemies. We also need the truth to be spoken through the research—research!—that we’ve consistently denied because denial is bliss. When enough people finally open their eyes to this culture of death we’ve protected, the delusional, cowardly mania for guns will compare in the future’s unfavorable eyes to the same delusional, cowardly mania for slavery that gripped this country for far too long.
Rod Dreher recently wrote about Duck Dynasty star Phil Robertson’s comments about, essentially, how happy he believed Black Southerners were in the 1950s before the civil rights movement. To Dreher, Robertson’s comments demonstrate the power of narrative, of the stories we tell ourselves and how they affect how we see the “truth” of our own situations, even when we don’t see the whole truth:
You can tell a lot about who has the power in a particular culture by what you are not allowed to talk about without drawing harsh censure. And in turn, the thoughts you are not allowed to have become internalized, such that you train yourself not to see things that violate those taboos. In the 1950s rural South, a white man was not allowed to speak out against the injustices inflicted on blacks; is it any wonder that he wouldn’t “see” them?
This is a very insightful way of contextualizing Robertson’s ignorant and hurtful comments. Dreher spotlights Alan Ehrenhalt’s (excellent) book The Lost City to add further context to Robertson’s remarks, but I’m finding just as much relevant background and insight in my current read: The Race Beat: The Press, the Civil Rights Struggle, and the Awakening of a Nation by Gene Roberts and Hank Klibanoff.
This amazing book takes an angle I’d never considered before when thinking about and studying the civil rights movement of the 1950s and 1960s: that of the journalists, publishers, and other press figures who were instrumental in wrestling the civil rights struggle to the front page as the movement simmered after World War II to its boiling point in the ’60s.
In newsreels and history books we’ve seen a great deal of the figures directly involved in the decades-long civil rights fight: Martin Luther King, Malcolm X, Emmett Till, Medgar Evers, the Little Rock Nine, Bull Connor, George Wallace, and many others. But what of the people behind the cameras, the ones braving the fire hoses of Birmingham and angry mobs in Greensboro right along with activists to capture the moment for print, radio, or the nascent television news?
For a thesis statement of sorts, Roberts and Klibanoff go back to what they view as the foundational work from which all academic and journalistic interpretations of the postwar civil rights movement emerged: An American Dilemma, a comprehensive study of race in America underwritten by the Carnegie Foundation and spearheaded by Gunnar Myrdal, a Swedish economist and sociologist.
The study found the central problem to be an overwhelming ignorance among Whites (in the North and South alike) about the lives and living conditions of Black Americans. It was easy for Whites to ignore the discrimination Blacks faced every day because they didn’t see it. White newspapers completely ignored the Black community and the Black press along with it. Myrdal believed that to overcome “the opportunistic desire of the whites for ignorance,” the Black community needed one thing: publicity. “There is no doubt,” he wrote, “that a great majority of white people in America would be prepared to give the Negro a substantially better deal if they knew the facts.”
Facts, they say, are stubborn things. But so were the segregationists. And the thought of high-minded out-of-towners coming into the South to tell good Christian people what’s wrong with them and upend generations of tradition didn’t sit well with angry sheriffs and townspeople, who would have every judge and jury (all white, of course) on their side should they decide to teach someone a lesson, or worse.
As a Mississippi attorney put it to Freedom Summer volunteers venturing into the South: “a dark highway at midnight was no place to lecture a Mississippi deputy sheriff with a second-grade education on the niceties of constitutional law.”
Still, the whole point of the civil rights movement, and one that Martin Luther King understood deeply, was to shine a light into the dark places. To walk through the valley of the shadow of death, and bring reporters along for the walk. King knew, as did the other movement leaders in SNCC, CORE, and the NAACP, what Myrdal knew: publicity meant power. The more White America was exposed to the everyday injustices Black Americans faced, the more likely it would be to sympathize and act.
The Emmett Till trial was the catalyst. That gruesome murder and clear miscarriage of justice, coupled with the earth-shattering Brown v. Board of Education decision, started the movement snowballing toward the bus boycotts and Little Rock, through the Woolworth’s lunch counter sit-ins and Ole Miss, each encounter seeming to attract more attention than the last.
While the Freedom Riders and marchers were enduring fire hoses and batons and angry mobs, journalists were close by to report on it. They understood as much as their subjects the power of the pen and camera, and had to wield that power in unexpected ways.
Peter Kihss, a New York Times reporter who was reporting the Autherine Lucy saga at the University of Alabama, decided to abandon traditional journalistic remove and intervene when an elderly Black man became surrounded by an unruly mob. “If anybody wants to start something, let’s go,” he told the crowd. “I’m a reporter for The New York Times and I have gotten a wonderful impression of the University of Alabama. Now I’ll be glad to take on the whole student body, two at a time.”
A similar situation involved John Chancellor, newspaperman turned NBC broadcaster, in the infancy of television news. Chancellor was gathering reactions in Mississippi after the Till trial when “a flying wedge of white toughs” descended on him and a Black woman he was interviewing:
Chancellor squared off against them and held up the only object he could find to defend himself, an object whose power he had not, until that moment, truly fathomed. Thrusting his tiny microphone toward the men, Chancellor blurted out, “I don’t care what you’re going to do to me, but the whole world is going to know it.”
He later called his microphone “the technological equivalent of a crucifix.” The microphone and the newspaper and the camera collectively became a tool and a weapon. They performed the basic service of documenting reality, ugly and unvarnished as it was, while also fighting back against the South’s deeply entrenched culture of silence and racial hegemony.
Their power seemed to coalesce in the fall of 1963 when they broadcast Dr. King’s “I Have A Dream” speech and then the news of the Birmingham church bombing that killed four Black children. Having the nation witness events like those up close, according to Jack Gould of the New York Times, was a major hurdle overcome for the Negro race as a whole, because until then its biggest challenge had been “communicating and dramatizing” its struggle: “Not to the integrationists, not to the unyieldingly prejudiced, but to the indifferent white millions for whom integration or segregation was of scant personal concern.”
In other words, to the Phil Robertsons of the day. The story White Southerners like him had been telling themselves (and anyone else who had dared to disrupt the narrative) about race and their culture disagreed with the reality of being Black in America. It took over a decade of protests and violence and struggle and political hand-wringing, but finally, Myrdal’s prescription for publicity was working. It wasn’t a panacea, but it was progress.
However, when hit with the reality of someone else’s story, some, like Gov. George Wallace, ignored the cognitive dissonance and dug in their heels. While Phil Robertson is no George Wallace, their shared inability to see beyond the stories they told themselves left them blind to what the cameras were showing in bright lights.
It’s easy to judge from afar in situations like this without thinking about the blind spots we’ve self-imposed today. Racism isn’t over, nor discrimination writ large. The press is different today, as is its power. We’re not so enthralled by television or newspaper editorials anymore. Publicity itself seems an inadequate solution for dealing with the problems we face today when all people do in our selfie-obsessed world is publicize. Simply getting a hashtag trending on Twitter won’t solve homelessness or end abortion.
In that way, our problem is the same as that of generations before us: we need the courage to hear new stories, to not wait for tragedy to spur us to action, and to follow the Atticus Finch model of walking (or marching?) a mile in someone else’s shoes.
The Race Beat goes into great detail about the individuals and institutions involved in this decades-long story. Tales of courage, cowardice, and great copy abound on every side, and all together they paint a lush picture of how the movement and the press worked together to change the country forever.
(photo via NYT)
[Article republished from January 2010]
I can’t sit still when it’s down to the wire.
Four minutes to go in the fourth, the Packers are driving for the game-tying score and I’m on my feet, pacing around my room. It’s been a wild shootout at the NFC Wild Card game: Green Bay’s young gun Aaron Rodgers and Arizona’s grizzled gunslinger Kurt Warner were taking turns tearing up the turf with laser-precision touchdown throws, the defense on both teams nonexistent. In the third quarter, the Packers were down by 21 and gasping for air; now, they’re knocking on the door.
This is the second time in three years the Packers have been in the playoffs. In 2007, we—in Green Bay, Packers fans own the team—had quite the playoff run. We demolished the Seahawks at Lambeau Field in the divisional round on a snow-covered turf. The next week, with the field temperature at or around arctic, the Giants come to Lambeau for the NFC Championship game. In the fourth we tie it up 20-20. The Giants have a chance to win with a field goal, but Tynes sends it wide left. Overtime. I’m on my feet, pacing nervously around the room. Favre throws an interception, and the Giants win it with a field goal. It’s all over.
Today, the Packers are sweating in the Arizona dome. Rodgers connects with Havner, tying the game 45-45. Less than two minutes left, the Cardinals drive and set up for a field goal. Wide left. Overtime. I’m on my feet, pacing nervously around my room. Not again, I think. We win the coin toss. The lob to Jennings downfield – the game winner – is overthrown. Then Rodgers is hit, fumbles, a Cardinal picks it up and runs it in for the score. The game. It’s all over.
The heartbreak hangover. Every sports fan has gone through it: the empty feeling after a devastating loss. The aimlessness. The Packers were on such a roll coming into the playoffs—the loss doesn’t seem real. Its suddenness makes it harder to accept. We were playing, then suddenly the ball came loose, it was in the end zone, and we were done. A bad dream, really.
In the days after I joked with friends that I was going through the stages of grief. The denial came quickly: No, it’ll be called back. There was a penalty. Once it settled in, the anger showed up: What the hell? Why didn’t someone pick up that block? Then the bargaining took place: If we could just do the last play over again… The depression stuck for longer. Seeing the highlights from the game on TV the next few days made it worse. It wasn’t until about four days later when I was finally able to accept the loss and look forward to next year.
This is all very melodramatic, is it not? Applying such a serious paradigm to what is ultimately just a game seems belittling to those suffering the loss of something more than a game. But it is a process many a sports fan goes through—consciously or not—with the teams and games they invest so much of themselves in; surely these emotions cannot be entirely frivolous.
According to some research, avid fandom and a deep commitment to a sports team are anything but frivolous. A 2000 New York Times article explored the psychology of hardcore sports fans—what their investment means and why it is important. “Our sports heroes are our warriors,” Robert Cialdini, a professor of psychology at Arizona State University, said in the article. “This is not some light diversion to be enjoyed for its inherent grace and harmony. The self is centrally involved in the outcome of the event. Whoever you root for represents you.”
Often fanatics of any sport are looked down upon as obsessed, depressed loners in search of diversion and self-identity. But one theory the New York Times floats suggests fan psychology has its roots in “a primitive time when human beings lived in small tribes, and warriors fighting to protect tribes were true genetic representatives of their people.” Every team in its own way is a culture of people who share similar beliefs and customs. In sports those customs – unique chants, specialized uniforms, shared investment in the team’s history – allow spectators to form bonds with their “warriors.” Dr. James Dabbs, a psychologist at Georgia State University, said in an interview that “fans empathize with the competitors to such a degree that they mentally project themselves into the game and experience the same hormonal surges athletes do,” especially in important contests, like a playoff game. “We really are tribal creatures,” he said.
We wear jerseys and decorate our homes with the colors and faces of our favorite athletes – our warriors – and follow them onto the field of battle, though our battle happens in the living room or the stadium seats, and instead of fighting with our bodies as the athletes do, we fight with our voices and emotional support. So when our favorite team loses an important game, the effect is not just mental and emotional; it is common to feel physically run-down or even ill.
Which brings us back to the Wild Card weekend. I watch my team – my tribe – fall as the others smile victoriously on the field of battle. I don’t feel ill, but I’m not happy. I commiserate with my fellow Cheeseheads online. I call my dad to make sense of the game.
“That throw to Jennings,” I say. “That was the game.”
“I know,” he says. We were so close. We rehash everything that went wrong, but then turn to everything we did right. Everything that gives us hope for next year. And there is a lot of hope for next year.
I think my tribe will be just fine.