May 15, 2018
Maggie Jackson on ‘Distracted’ and the fragmentation of attention
Today we’re talking about technology and the fragmentation of attention with Maggie Jackson. After an early career as a foreign correspondent, Maggie returned to the US and began writing about workplace and work-life issues. She began noticing the impact of early technologies such as laptops and cellphones on people. At that time, the tone of the national conversation was quite utopian and, Maggie felt, naive. “I call it the gee-whiz factor,” she told me. “Many people truly thought that technologies were going to solve our problems, connect us, teach us, transport us, magically and painlessly.” Voicing any concerns or pointing out downsides easily had one labelled as a Luddite.
In 2008, many years before the current debates around technology and attention, Maggie wrote the book ‘Distracted’, which dived into the science of attention and the steep costs of its fragmentation. She is currently working on a book about uncertainty as the gateway to good thinking in an age of snap judgement. We chatted recently by Skype, and I started by asking her what it was that she spotted in 2008 that made her concerned enough to sit down and write ‘Distracted’. [Maggie made significant changes to the transcript of our discussion, so you will find the transcript below more accurate, but we know how you love podcasts, so we’ll share the original audio too].
“When I began researching the book, I was interested in how people in past centuries adapted to the inventions that changed their lives. I thought naively that people in the 18th, or 19th, or 20th centuries had “solved” their problematic initial relationship with technologies. That was something of a fallacy. Yes, we now take for granted many of the technologies that seemed shocking at the time, from the bicycle to electricity. We have “adapted” to these “devices.” But more intriguingly, these inventions are part of a continuous overarching stream of cultural, psychological and sociological shifts in our lives. The underlying themes that connect various technologies can teach us so much about the forces that confront us today.
For instance, the telegraph was in essence the first form of virtual connection; operators flirted, feuded, and even got married over the wire and in Morse code. Electricity in essence inspired the 24/7 sleepless culture that we inhabit today. These inventions radically began testing people’s ideas of what it means to be a human, how we connect from a distance or anonymously with one another, and what it means to find information and forge knowledge.
So by looking into the past, I began to see not pat “solutions” but the bigger picture, the larger landscape beyond what it means to get a cell phone or laptop. I began to see that the challenges and problems that we face today are part of larger social and cultural forces that have been building for hundreds of years. That realization gave me a better perspective on this topic.
As well, as I began to understand this, I suddenly realized – literally, it was sort of a strange moment of epiphany in the New York public library’s vast reading room – that the deeper more interesting issue facing us was the fate of our attention in a technological age. To understand the problems of our day, I realized that I needed to understand how our attention was being fragmented, pulled and pushed by technology’s forces. Attention became the lens through which I began to understand technology.
It was, by the way, an interesting moment in history. By the time I finished the book, people had begun to awaken to the issue of distraction, but this problem at first was viewed quite narrowly in our culture. It was the height of the era when doing something faster seemed better, when juggling and multitasking were emblems of success. Technology became just the booster rocket for the all-out efficiency that one needed in order to succeed, and racing from task to task was seen as a kind of badge of honor. Then, when we began to feel overwhelmed and splintered, we treated distraction as a kind of medical malady, a sickness, a “disorder” for those who couldn’t keep up. That was when we began to ask, who has ADHD? Is it “spreading” – are we going to catch it, so to speak?
We really didn’t yet know the effects of the vast social experiment that we were undertaking on ourselves and our children. We didn’t understand the costs of distraction and the downsides to inhabiting digital worlds, dismantling cultures built on face to face human connections, and abandoning the kinds of reverie, musing, and what-if questions that are foundations of imagination and hence building blocks of envisioning and creating a better world. And we still have much to learn.
One of the things that’s so interesting about that look backwards that you take is all of that stuff about how a lot of the kind of things that people say now about smart phones, people said about books when books first came out. “These young people, all they do is read books!” Now people would be like, “Oh my god. Please read a book”. When people say, “Oh, it’s just the same, and smart phones and Facebook, it’s just the same thing”, what is qualitatively different about the kind of impacts we’re seeing in terms of attention now?
Excellent question. I’m asked that all the time. How do we know what is different, what has truly changed in our lives? What’s better, what’s worse? Is this just the kind of handwringing that we have always seen when things change? I have two answers to that question. First, the totality of what we’re dealing with is so much greater. Teens are on average exposed to nearly six hours of non-print media a day, and a significant minority experience nearly eight hours of media a day. So while media and technology were just a slice of life in the past, now they are a constant. They are the reality. We inhabit the virtual world in disproportionate measure to the physical, and that shift has taken just a generation to unfold.
However, it’s important to note secondly that we are getting a better handle scientifically on the impact of these changes, especially cognitively. We have strong correlational evidence linking time spent on smartphones or online to lowered well-being in children and declining empathy among young adults. As well, steep declines in children’s and adults’ capacity to imagine, persist in problem-solving, and reason coincide with recent decades of rapid technological penetration. This is important. We don’t have the full picture, but we are beginning to understand the effect of technology on our lives and on our minds.
Certainly we should remain aware of the human tendency to yearn for the familiar and to look nostalgically back at the past. But the bottom line is that we have to solve the problems of our day, and there are just too many signs that digital living in its current forms raises red flags. For instance, the ability of Americans from kindergarten to adulthood to elaborate on a problem, to put flesh on an idea, has dropped by 40 percent since the 1980s, and the steepest drop has occurred in the years since technology came to play such a dominant role in our lives.
There are warning bells, and just as with climate change, we can wait until all the t’s are crossed and i’s are dotted on the evidence, or we can act to solve the problems of our day, using the best possible assessments available to us at this time.
I didn’t have email until maybe 13 years ago, Twitter until maybe 7 years ago, Facebook just a few years ago. This is an experiment that has happened in a very short period of time. If we were to say this has been a 20-year experiment on a massive, massive scale, how would you summarise the interim findings of that experiment?
Well, that’s a very big question! Currently, as I prepare for the September release of a new updated edition of my book Distracted, I’m thinking a lot about distraction and how it affects people in new ways. And I am also thinking about the impact of instantaneity and the role of uncertainty in thinking as I finish up a new book. These are both points of vulnerability in our culture today.
We often define distraction as being pulled to something secondary, but a lesser-known definition involves being pulled in pieces, being fragmented. That certainly describes life on- and off-line today.
And research shows that avid multi-taskers, whose attention is splintered and abbreviated, are actually less able to discern what’s trivial and what’s relevant in their environments.
Even more importantly, when we are jumping from task to task or person to person, we may be undermining our ability to be cognitively flexible, which is a core skill of creativity and problem-solving. In other words, studies show that when people are multi-tasking, they can absorb new information, they can learn, but they encode and store knowledge in more shallow ways, actually using different parts of their brain than if they were paying full attention. As a result, the new knowledge is less assimilated with other stored knowledge and so is less available for transfer to novel situations.
For example, if you multitask your way through your math homework, you can solve the kind of math problem that you studied, but you likely can’t tackle a related but different kind of math problem. Or the surgeon who multitasked her way through med school might be able to fix a routine problem that arises in the operating room, but may be flummoxed when a new, rare complication arises.
I’ve had professors tell me that because kids are multi-tasking their way through an introductory college class, they’ll get to the second-level psychology or history class, and it’s as if they hadn’t even taken the first course. They have learned the material in shallow ways.
Additional research shows that the presence of a cell phone, even if it’s silent and turned off, unconsciously siphons our attention away from the moment at hand, so we’re less focused. But as well, we become less able to think in flexible ways. The presence of the phone, beckoning to us even unconsciously, lowers fluid intelligence, which is described as the ability to interpret and solve unfamiliar problems. We simply don’t have the capacity to multitask and think nimbly!
So, we are beginning to discover that our habits of mind and our technologies may be making us less discerning and flexible cognitively – skills that are crucial to imagination and higher-order thinking. That’s alarming and could be linked to the kind of tribalism and risk aversion that we see so often today.
The second assessment I would make about technology as a social experiment is that the instantaneity of information is being shown to undermine our willingness to think in complex ways. And that’s very damaging to our capacity to imagine.
Studies at both Yale and Harvard show that a brief online search for information, just a bit of googling, makes people less willing later to wrestle with a complex problem. Their “need for cognition,” a measure of one’s willingness to struggle with a problem and see it through, drops dramatically after searching online. As well, a bit of searching leads to a kind of hubris; we begin to think that we know more than we actually do. People begin to over-estimate their ability to answer similar types of questions without the computer.
Why? Scientists believe that when information is so instant, we begin to think that answers are just there for the plucking, that “knowing” is easy. One researcher who’s been involved in this work says, “We never have to face our ignorance online.”
What are the implications of this? In our current culture, “knowing” is becoming something brief, perfunctory, neat, packaged, and easily accessible. Yet complex murky problems demand firstly the willingness not to know, to understand that the time for ease in thinking has ended and the real work of reflective cognition must begin.
And second, difficult problems demand tenacity, a willingness to struggle and connect and reflect on the problem and its possible solutions and move beyond the first answer that springs to mind. This is when we must extricate ourselves from automaticity in thinking and call consciously upon the side of ourselves that can decouple from tried-and-true answers, gather more information, test possibilities, and build new understanding. Much of this cognition demands both flexibility and a willingness to grapple with the unknown.
That’s why I’m writing a book now about uncertainty as the gateway to good thinking and as an underappreciated realm of curiosity and wonder. Especially in the West, we think of uncertainty as a negative, as something to be appraised and vanquished as quickly as possible, when in reality it is a space of possibility in thinking. Our uncertainty can inspire us to pause and begin thinking flexibly, imaginatively, and persistently. And those are the kinds of skills badly needed today.
I spoke to Dr Larry Rosen for one of the interviews, who does a lot of work on attention, and he said, “I would say that our imagination is on the decline exactly in the opposite trend of our time spent on smart phones”. I wonder how you see that link between attention and imagination in terms of if culturally our attention, our ability to focus, starts to decline and become more fractured. You said a little bit before but I wondered if you had any other thoughts about that link between distraction and attention, and imagination?
As I mentioned, when we’re hopping from one task to another – and research shows that people switch work tasks on average every three minutes throughout the day – and expecting answers at our fingertips, we are squandering opportunities to learn deeply and think flexibly. We are cutting short the business of deep understanding and problem-solving, a process that is inspired and guided by full focus and our willingness to tolerate uncertainty and ambiguity.
To further elaborate, consider the implications of this cognitive impatience on our abilities to be fully aware of the environment around us. In other words, digital living influences how we pay attention to the world. Is it possible to “know at a glance”?
The latest findings from the neuroscientist Stanislas Dehaene in Paris and others show that when we are only peripherally or semi-consciously aware – when we are trying to know at a glance – we are only grasping the situation or information approximately. But when you are fully aware – when you’re consciously aware of something, when it is truly in your focus – that’s the only state of mind that allows you to reason, to build thought, to problem-solve deeply. Conscious awareness creates what is called a “global workspace” in the brain, where reflection and considered thinking can occur. It’s a state of mind in which many parts of the brain are fully connected.
That’s another illustration of how splintering our attention leads us to begin thinking in shallow ways.
I wondered, having done all this research, and having been living with this stuff for several years longer than most people, and being really aware of the impacts of the technologies that you write about, what changes it’s led to in your own life in relationship to those technologies?
That’s a very good question. Well, first of all, I zealously guard opportunities for quiet, full focus, and thinking. And one reason I do so is that I know how easy it is to fall into the trap of “getting things done,” ticking off the boxes, jumping from task to task, while avoiding the hard problems, the messy difficult aspirations of our lives. We define productivity in a very narrow way. Hyper-busyness is something that our culture reveres, and yet sages from Aquinas to the Buddha warn that this kind of lifestyle inspires us to sidestep the most difficult problems of life.
But believe me I still struggle with the right balance – how to interact with this new world of social media and hyper-connectivity and avalanches of instant-access information yet protect times for deep human connection and for doing justice to the messy complex problems that face us.
Just last fall one of my daughters, who’s in college a thousand miles from our home, was very ill. She had suffered two head injuries which led to medical complications. I took time from work and stayed with her for some weeks, but when I returned home, I began checking in with her more often to make sure that she was getting the right care. And now she is strong and healthy and I’m still trying to battle the habit of checking my phone multiple times a day. The urge is so strong! Every time I take a small break when I’m in the library, or working at home, I just have the urge to pull it out of my pocket, as I had to do for many months. I’m battling this, and yet it’s difficult to pull back and begin to recover time for uninterrupted thinking and focus.
To cope with our “blooming buzzing world,” I also try to do different sorts of work in different physical locations. At home, I do research, searching for scientific papers or studies, interviewing people for my books and articles. It’s a busy kind of mindset. To think deeply, read carefully, and to write, I go to a library where I intentionally do not connect to the Internet, or I retreat to our house in the country, where I am alone for days and can inhabit the space of whatever problem I am working on.
The temptation to fracture our attention and stunt our thinking is also a social challenge. Paying attention fully to one another is a precious and fragile process, especially today.
For instance, I often talk with my husband about my book and whether we do so by phone or in person, there are tensions related to how each one of us interprets the act of paying attention. When I’m asking for his feedback on my writing or evolving ideas, he’ll often start puttering around the house, cooking or cleaning up. He insists that he’s still paying attention, and yet I think he’s not fully present at a crucial moment for me. My view is that these moments when we’re really talking about something that matters are rare and precious, so why do anything that might take away from the possibility for full connection? It’s a difficult call.
I once interviewed a UCLA anthropologist who is a MacArthur Fellow and expert on Americans’ hyper-busy family lives. I will never forget one of her comments to me: “Will we look back someday and say, we could have been having a conversation?” That really stuck with me. So often we could have been having a conversation rather than sitting side by side, hardly present to one another, splitting our focus.
So if you are somebody who really, really gets this stuff, and even you struggle with the addictive, impossible-to-put-down nature of those dopamine-firing technologies, what hope do the rest of us have? Collectively, do we have any chance that we can rein this stuff in? Our attention and where we choose to rest our eyes is now some of the most valuable real estate in the world, and everybody wants it, and reading the stuff that Larry Rosen and people talk about, they say we are evolutionarily incapable of resisting this stuff.
Well, that is the mantra at the moment. But I strongly disagree that we are helpless in the face of these enticing devices.
We’re finally having a lot of new and important conversations about the brain hacking that the tech systems “do” to us. There’s a lot of anger at the big four or five tech companies, and it’s very true that the inventors, the makers must become more accountable for their inventions. Yet this backlash reminds me of the kind of outcry that a toddler might make when a parent takes away their candy.
In other words, we as users have more choices than we sometimes care to admit, and we have to be accountable to ourselves, and ask not just what technologies can do for us, but what we should be doing for ourselves. We have believed for too long that we can’t do anything about the tsunamis of new technologies shaping our lives. And we have believed and hoped that these technologies will solve our problems if we give ourselves over to them. That’s wrong on both counts. We can be more deliberate about their use, teach our children to do so, and ask more sceptical questions about our relationship with technology and what kind of humans we want to be now and in the future.
It’s important to wake up to our ability to make choices regarding technology – but I’ll admit that this is difficult in a culture that until recently has been so skewed in favour of unthinking adoption and use. Once in an Apple store, I inquired about buying a phone that didn’t have any Internet service. And the employee chastised me for wanting this option, telling me that I should “get with the program” and be fully connected 24/7. It was a chilling experience.
Historically, the inventors often dominate the initial conversation regarding new devices; early makers of the telephone tried to dissuade customers from using the device for social purposes. They saw the phone as a business tool. Period. But just as we all made the telephone into a tool for social as well as business use, so too we can push back and shape our new devices as we wish.
It all starts very small. Consider the realm of work-life balance. Companies used to be afraid to grant flexible work hours, thinking that parents and others who wanted flex time would sow chaos in the system. But often workers really wanted a tweak or a small fix, such as leaving early once in a while to watch their child’s softball game. Gradually, our ideas about flexibility shifted and grew more expansive as the conversation on the issue matured.
Just as the Transition movement stands for small, powerful, bottom-up changes and for seeing opportunity where others see calamity, so too we need to shift our attitudes toward technology. Google just released changes to their Android systems that prompt users to take a break during long stretches on YouTube and allow users to “shush” notifications on a phone more easily simply by turning the phone over on a table. They are important small tweaks that have been made in response to a growing public backlash against 24/7 connectivity. And I’m told that Google was inspired in part by my book, which is heartening.
But we all need to do far more to combat tech-excess in our lives. And first we have to believe that we can create digital lives that are balanced, deliberate, focused and thoughtful.
If it had been you who had been elected as the President rather than the current incumbent, and you had run on a platform of ‘Make America Imaginative Again’ – that you felt that there was a slide in collective imagination, and that it needed to be the focus of education, public life, home life, across the board that there needed to be a real scaling up and care given to the country fostering its imagination as best it could – I wonder what might be some of the things you might do in your first 100 days in office?
We can’t just institute or regulate imagination. If I were President, I could only in my first 100 days plant the seeds of future change. But I would start by celebrating “slow” thinkers, and the stories of the dead ends, circuitous journeys, and cognitive patience that are crucial for creative problem-solving and discovery across many domains.
We as a society are entranced by epiphanies and by instant success, by quick wits and packaged answers and by downloadable solutions. We celebrate the outcome, and the instant. Yet we should celebrate the long, slow “processes” behind good thinking, productive imagination, and creativity. As a scholar in residence at a museum one summer, I once helped curate an exhibit of works by artists whose medium was at least partly wood. I decided for my part to create a corner devoted to the artistic process, with samples of test pieces and quotes from the artists on their emotional connections to the work as it progressed. I wanted to help visitors see the steps that are an essential part of the process of creation.
Secondly, as President, I would try to help change our vocabulary around human thinking. Cognitive science has now rejected the metaphor of the brain as a computer, yet in daily life we still cling to this analogy to the detriment of our understanding of the mind. For instance, we think of memory as a stack of file folders that we whip out and put back, when actually it’s an incredibly organic, growing, network-like entity. To remember and to recall takes time and struggle and effort. It’s not instant.
If we treat humans as if they are machines in terms of our language, we are doing ourselves a vast disservice.
Any last thoughts?
It’s important to remain alert to the unexpected consequences of our technological world. I’ve talked about the ways in which our split-focus lives impact our cognition, and how the instantaneity of online information shapes us. I’ve mentioned how the degree of our immersion into tech-living affects us and how we often approach technology unthinkingly, without fully being aware of our responsibility to shape our relationship to these forces.
And there is one last aspect of technology that we should be alert to: the template-nature of information online. So often information, conversations, data and social media are presented in box-like forms, such as lists and bullet points, in part due to the constraints of the software. This shapes our thinking!
In a sense, computational worlds are the extension of societies increasingly built upon standards and values related to science and money, societies that treat even the unquantifiable as something precise, neat, and packaged, historians say. This is the kind of value system that allows us to mistake the MRI for the patient, or the test score for the child, or the personality test for the human job candidate. It’s reductive.
This is what allows us to think that a tweet can do full justice to a complex moral dilemma, or that we can weave down a highway as if we were in a video game. Yet so much of life, including the wonderfully messy capacity to imagine new better worlds and what-if solutions, doesn’t fit into neat boxes.
Main image: Courtesy of calmlivingblueprint.com