Preventing Suicide
By Lydia Denworth | April 2018 | Scientific American
Social scientists have begun to close in on new ways to stop people from taking their own lives.
Thirty minutes and an index card. That’s what clinical psychologist Craig Bryan needs to conduct what he calls crisis response planning with a soldier who is suicidal. “Tell me the story about the day you tried to kill yourself,” Bryan asks. Then he listens and follows up with the type of question intended to build trust and uncover warning signs. “How would you know that you’re getting stressed out?” Planning mode comes next, identifying self-management strategies such as exercise. Bryan also asks about reasons for living. “What is good in your life even though things are bad?” Finally, on the card, a soldier handwrites a “safety net” checklist of emergency resources: a crisis hotline, a therapist, 911, an emergency room.
This simple approach differs in several respects from more traditional therapies. It focuses squarely on suicidal thoughts and behaviors rather than symptoms of depression, post-traumatic stress disorder or any other mental illness. It provides options for what people can do rather than telling them what they can’t, unlike the long-standing contract for safety dating back to the early 1970s that asks suicidal people to promise not to harm themselves. It is fast and may not necessarily require a professional. And of greatest importance, it works. Last year Bryan and his colleagues reported that in a group of 97 soldiers with suicidal thoughts and behaviors, those who underwent crisis response planning were 76 percent less likely to attempt suicide in the next six months than those treated in other ways.
The strength of that result surprised even Bryan, who is executive director of the National Center for Veterans Studies at the University of Utah. But it provided additional evidence of something he already knew. “We’re in the midst of a paradigm shift in suicide prevention,” he says. “There’s this new explosion of research that is calling into question a lot of the old assumptions that not only researchers but also health care providers and members of the public have had about suicide.”
For decades suicide has lurked in the shadows, weighed down by stigma. Once considered a crime, the act of killing oneself is still viewed as a sin in some religions. Even those who know that suicidal thoughts and behaviors stem from a brain disease or a psychological disorder have avoided or misunderstood the subject—hospitals and schools have been reluctant to screen for it, pharmaceutical trials have excluded suicidal patients and funding institutions have been unwilling to support research. The few clinicians and scientists working in the field made little headway.
Meanwhile suicide rates have gone up. Between 1999 and 2016 the overall rate rose by 28 percent in the U.S. The rise was steeper among certain groups: for middle-aged women and men, it jumped 64 and 40 percent, respectively. Among girls between the ages of 10 and 14, the suicide rate more than tripled, although it remains very low. Since 2001 the suicide risk among veterans has also climbed—they are now 20 percent more likely than civilians to take their own lives. Almost 45,000 Americans died by suicide in 2016, making it the 10th leading cause of death. For every person who dies by suicide, nearly 300 consider it.
Finally, suicide has become too urgent a problem to ignore, with the rising rate among military personnel an especially powerful call to action. The U.S. Department of Defense, the U.S. Department of Veterans Affairs and the National Institute of Mental Health are pushing for progress, and a new generation of suicidologists is working to pull suicide out of the shadows and give it the focus required to save lives. That has meant confronting suicide head-on as a condition and a research endeavor and recognizing that all along the tortured path that leads to death by suicide—from the earliest warning signs to final attempts—old diagnostic strategies and therapies weren’t working.
In their place, researchers have applied new ideas and pioneering technologies and begun to see promising results. From low-tech approaches like Bryan’s checklist to the application of machine-learning algorithms to analyze medical records and patient thought patterns, a growing body of work suggests we might finally be able to bend the curve on suicide rates. Considerable challenges remain—translating ideas to practice, scaling them up, getting clinicians to adopt them. But for the first time, says Joshua Gordon, who took over as director of the NIMH in 2016 and promptly declared suicide one of his top three priorities, “we now have an evidence base for identifying people at risk and intervening to reduce that risk. I have a lot of hope that we can change things.”
Identifying Risk
The human instinct for self-preservation is strong. What, then, drives people to contemplate hurting themselves? Theories of suicide have always postulated a mix of social isolation, overwhelming pain (primarily psychological) and hopelessness. There is still no consensus, but in 2005 Thomas Joiner, a clinical psychologist at Florida State University, added the concept of “acquired capability.” It acknowledges that acting on suicidal thoughts requires the ability to overcome the natural aversion to injury and death, an ability not every anguished person acquires. This insight has led to a new set of theories that separate ideation and action. The three-step theory of David Klonsky of the University of British Columbia and Alexis May of the University of Utah notes that disposition (such as personality), experience (combat exposure) and practicalities (availability of firearms) all contribute to suicide capability.
Better theories, however, have not yet translated into better estimates of who is most likely to attempt suicide. In 2016 an analysis that combined decades of research on risk factors found that predictive ability had not improved over the previous 50 years. “Clinicians are no better than a coin toss at predicting who’s at risk,” says senior author Matthew Nock, a clinical psychologist at Harvard University, who received a 2011 MacArthur Fellowship for his innovative research on suicide.
Depression, for example, has always ranked near the top of the list of warning signs, yet Nock’s analysis revealed just how ineffective it is by itself in making a prediction—and most risk factors are traditionally evaluated on their own. Whereas many people who attempt suicide do suffer from depression, far more with depression do not attempt suicide. “Suicide cuts across all diagnoses, therefore diagnoses don’t matter as much,” Bryan says. “We’ve had it flipped on its head for years. It’s not that suicide is a symptom of psychiatric illness. It’s that psychiatric illness is often a manifestation of the vulnerabilities that lead to suicidal behavior.”
To more accurately identify those vulnerabilities, several research teams, including Nock’s, have applied machine learning to electronic health records, one of a variety of promising avenues they are exploring. The algorithms search thousands of potential risk factors simultaneously, from age and race to medications, number of inpatient and outpatient visits, and diagnoses of schizophrenia or mood disorders, and they can be taught to make predictions far more efficiently than human beings. In a 2017 study, Colin Walsh, a data scientist at Vanderbilt University, and Jessica Ribeiro and Joseph Franklin, both at Florida State University (the latter was previously a postdoctoral fellow in Nock’s lab), used this technique to review large numbers of records. Their study included 3,250 patients who had attempted suicide and another 1,900 patients who had not (the control group).
Their strategy achieved 80 to 90 percent accuracy at predicting retrospectively who would make an attempt within two years, and it was 92 percent correct at forecasting whether someone would do so within a week. So far the algorithms also generate a lot of false alarms, erroneously flagging people as at risk of a suicide attempt. But researchers are working to improve their accuracy and to test them widely. “The idea would be to have a software program that would run on medical records generating risk scores,” Nock says.
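To make the mechanics concrete, here is a minimal sketch of what record-based risk scoring can look like. The data file, feature names and choice of a random forest model are illustrative assumptions, not the published pipeline:

```python
# Minimal sketch: train a classifier on tabular health-record features to
# produce suicide-attempt risk scores. The CSV file, column names and model
# choice are hypothetical stand-ins for an actual EHR extract and pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

records = pd.read_csv("ehr_extract.csv")  # hypothetical de-identified extract
features = ["age", "num_inpatient_visits", "num_outpatient_visits",
            "mood_disorder_dx", "schizophrenia_dx", "on_antidepressant"]
X, y = records[features], records["attempted_suicide_within_2y"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Risk scores in [0, 1]; performance is usually summarized with AUC because
# attempts are rare and false alarms matter.
risk_scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk_scores))
```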
Nock has also investigated new biological or behavioral signals that communicate risk reliably when patients can’t or won’t admit to suicidal thoughts. A four-minute implicit association test his lab developed has proved remarkably good at measuring how people think about suicide, no matter what they say. In the test, during several initial trials, the words “death” and “me” appear on one side of the screen, and the words “life” and “not me” appear on the other side. Words related to each of these categories then appear at the center of the screen—“dead,” “they,” “survive,” “I”—one at a time. Participants press one key if the new word belongs with the paired words on the left, another if it belongs on the right. Then the pairing switches, and now “life” gets coupled with “me” and “death” with “not me.” People who respond faster when “death” and “me” flash together are approximately three times more likely to attempt suicide in the next six months. These findings, first published in 2010, have been replicated several times with thousands of participants.
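At bottom, the test reduces to a reaction-time comparison, as the toy scoring function below illustrates. Real scoring (Greenwald’s D measure) also corrects for errors and outlier trials; this version is deliberately simplified:

```python
# Simplified implicit-association scoring: compare reaction times when
# "death" is paired with "me" versus with "not me". A positive score means
# the participant was faster at linking death with the self.
import statistics

def iat_score(death_me_rts_ms, death_notme_rts_ms):
    """Mean reaction-time difference divided by the pooled spread."""
    pooled_sd = statistics.stdev(death_me_rts_ms + death_notme_rts_ms)
    return (statistics.mean(death_notme_rts_ms)
            - statistics.mean(death_me_rts_ms)) / pooled_sd

# This participant responds roughly 100 ms faster in the death/me block.
print(iat_score([620, 650, 600, 640], [730, 760, 700, 745]))
```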
Recently Nock launched a larger study in the emergency department at Massachusetts General Hospital that combines the implicit association test administered on iPads, machine-learning reviews of health records like those demonstrated by Walsh and his colleagues, self-report questions about known suicide risk factors and blood work to look for genetic markers. “We’ve had some encouraging results here and there over the past few years,” Nock says, and then asks: “What if we put all these together in one calculator like they do for heart disease? You go to the doctor, and based on your height, weight, age and cholesterol, they say this is your probability of heart attack in the next year. Can we do the same thing for suicide attempts?”
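A calculator of the kind Nock describes could, in principle, combine standardized signals into a single probability, much as cardiovascular risk calculators do. The sketch below uses a logistic model with invented weights purely for illustration; in practice the weights would be fit to outcome data:

```python
# Hypothetical combined risk calculator: a weighted logistic combination of
# an EHR-based score, an implicit-association score and a self-report score.
# All weights are invented for illustration, not fit to any real data.
import math

WEIGHTS = {"intercept": -3.0, "ehr": 1.4, "iat": 0.9, "self_report": 0.7}

def combined_risk(ehr_score, iat_score, self_report_score):
    z = (WEIGHTS["intercept"] + WEIGHTS["ehr"] * ehr_score
         + WEIGHTS["iat"] * iat_score
         + WEIGHTS["self_report"] * self_report_score)
    return 1 / (1 + math.exp(-z))  # probability of an attempt in some window

print(f"estimated risk: {combined_risk(0.8, 1.2, 0.5):.0%}")
```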
Other experiments remain much further from practical use but are still intriguing. Nock’s implicit association test caught the attention of cognitive neuroscientist Marcel Just of Carnegie Mellon University, who uses functional magnetic resonance imaging (fMRI) and machine learning to identify patterns of brain activity that correspond to thought patterns. For example, if a person in a scanner is given the word “jury,” Just’s method can detect that the subject is thinking about a group of people, authority and rules, but not that those people are sitting in a courtroom trying to assess evidence. As Nock, Just, psychiatrist David Brent of the University of Pittsburgh and their colleagues reported in 2017 in Nature Human Behaviour, Just’s neurosemantics method revealed that the brains of some suicidal people responded differently to positive and negative words related to life and death, correctly distinguishing 91 percent of the time between the 17 subjects who had thought about suicide versus the 17 who had not. Just now wants to replicate the work and see if it might be administered using electroencephalography, a less costly technique that monitors electrical activity in the brain, but that work is in the early stages.
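At its core, the published result is a cross-validated classification over one response pattern per subject. The sketch below shows only the leave-one-out evaluation step, on random stand-in data rather than semantic features derived from fMRI:

```python
# Leave-one-out evaluation of a classifier separating two groups of 17
# subjects each. The feature matrix here is random noise standing in for
# per-subject semantic-response vectors, so accuracy hovers near chance.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 20))      # 34 subjects x 20 stand-in features
y = np.array([1] * 17 + [0] * 17)  # 17 ideators, 17 controls

acc = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.0%}")
```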
Luckily, technologies such as smartphones are more accessible than fMRI and have begun to make it possible to monitor suicidal thoughts during high-risk periods. “People who are at risk for suicide may respond to stressful situations more intensely than people who aren’t at risk, but we can’t always induce that kind of stress in the lab,” says Evan Kleiman, a research associate in Nock’s lab. The researchers are experimenting with tools for monitoring emotional and physiological changes in patients: smartphone apps that check in with a patient and wrist-worn biosensors that track skin conductance, skin temperature and heart rate. If a patient has a fight with a spouse at home, clinicians will know right away that their patient is under added stress. These technologies have contributed to the recognition that clinicians need to assess risk over hours, days or weeks rather than months or years. The Fitbit-like bracelets are being tested by adolescent and adult patients on inpatient units, and results aren’t in yet, but “we think there’s great promise here,” Nock says.
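One plausible way such monitoring could work, sketched below with arbitrary thresholds and window sizes, is to flag moments when a wearer’s skin conductance and heart rate both jump well above a rolling personal baseline:

```python
# Illustrative wearable monitoring: flag time points where skin conductance
# (EDA) and heart rate both sit more than z_thresh standard deviations above
# the wearer's rolling baseline. Window and threshold are arbitrary choices.
import pandas as pd

def flag_stress(stream: pd.DataFrame, window="30min", z_thresh=2.0):
    """stream: time-indexed columns 'eda' (microsiemens) and 'hr' (bpm)."""
    baseline = stream.rolling(window).mean()
    spread = stream.rolling(window).std()
    z = (stream - baseline) / spread
    return stream[(z["eda"] > z_thresh) & (z["hr"] > z_thresh)]

# flagged = flag_stress(sensor_stream)  # could trigger a check-in prompt
```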
Asking the Question
Preventive technology requires that a person with suicidal thoughts or behaviors be receiving some form of treatment. Unfortunately, most people are not. Even those who see a health care provider are not always helped. About half of all suicide victims visited a medical setting within the 30 days before death (not necessarily because of suicidal risk). But less than half of mental health professionals receive adequate training in suicide risk assessment or intervention during graduate or medical school, and most U.S. emergency departments offer little more than basic mental health resources.
This situation is changing. Increasingly, a mantra of suicide prevention is: “Ask the question.” To tackle suicide directly, we must ask about it directly. (Doing so will not put ideas into someone’s head.) In February 2016 the administrative body that accredits hospitals recommended that they screen all medical patients for suicide risk. A good step, but not every institution knew how to respond. “People started scrambling and making up their own questions,” says Lisa Horowitz, a pediatric psychologist at NIMH. “Either they underdetect or they overdetect and overburden already strapped resources.” Thus, an effort to ensure appropriate screenings and adequate follow-up is under way. In 2017 investigators from across the U.S. reported on an NIMH-funded study to test a screening tool in eight emergency departments. Screening alone did not affect subsequent suicide attempts compared with treatment as usual, but adding intervention to screening achieved a moderate but significant 5 percent reduction in suicide-attempt risk.
Horowitz now leads an effort to put a brief screening tool directed at young people between ages 10 and 24 in as many hospitals as possible. Called Ask Suicide-Screening Questions (ASQ), it begins: “In the past few weeks, have you wished you were dead?” Horowitz is still going through data on the tens of thousands of kids who have taken it, but she is encouraged. “People were worried this was opening Pandora’s box, but what we are finding is that you can detect the risk and that it’s manageable.” Another striking finding: when queried about whether they wanted to be asked about suicide risk, 95 percent of kids surveyed said yes.
Among those who do not make it into medical settings, social media may offer other kinds of warning signs. Bryan and his colleagues reviewed the social media networks of 315 military personnel who died by suicide or other causes. In the 12 months before each person’s death, they looked for differences in content—for example, mention of relationship or financial problems, suicidal thoughts or behaviors, or health or anger issues. By detecting patterns in such content, they could clearly differentiate between those who had died by suicide and those who had not. A follow-up study revealed just how variable the emotional lives of the soldiers were over that period. “When people kill themselves, in the time leading up to their death, they have good days and bad days,” Bryan says. “It’s tumultuous.” Data analysis used to regard these ups and downs as noise, but by realizing that they contained critical information—the variability was the signal, not the noise—Bryan’s team could estimate when individuals were most likely to kill themselves.
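The analytic idea is simple to state: score each day’s posts for affect, then track volatility rather than level. A sketch with a placeholder mood scale and an arbitrary threshold:

```python
# Sketch of "variability is the signal": compute a rolling standard
# deviation of daily affect scores and flag windows of high volatility.
# The scoring scale (-3 very negative to +3 very positive) and the
# threshold are placeholders, not values from Bryan's study.
import pandas as pd

def risk_windows(daily_mood: pd.Series, window=14, vol_thresh=1.5):
    volatility = daily_mood.rolling(window).std()
    return volatility[volatility > vol_thresh].index  # dates of tumult

# Synthetic example: a calm month followed by a tumultuous one.
days = pd.date_range("2017-01-01", periods=60)
mood = pd.Series([0.2] * 30 + [2.5, -2.8] * 15, index=days)
print(risk_windows(mood))
```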
Now they are working out how to do this prospectively. Analysis of social media may become more relevant beyond a physician’s office if friends and family know what to look for, Bryan says. But some clinicians can and do access posts and tweets that have no privacy restrictions. In addition, Bryan is already using the same analytic approach on session-by-session assessment data with patients and has found that it may improve tracking and monitoring of a patient’s status.
A comparatively obvious way to lower suicide rates is simply to take away the opportunity. Those thwarted do not all eventually find another way; nine out of 10 people who attempt suicide and survive go on to live out their lives. After years of debate over cost and efficacy, the push for such “means restriction” is finally seeing results. A 2015 report in The Lancet Psychiatry found that placing safety nets under known suicide locations reduced death rates by 58 percent (the average moved from 5.8 to 2.4 per year). In December temporary 11-foot-high mesh safety fences went up above the existing waist-high railings on the George Washington Bridge, which spans the Hudson River between New York City and New Jersey and where 15 people killed themselves in 2017 and another 68 tried. Permanent fencing will be part of a larger restoration. At the Golden Gate Bridge in San Francisco, where more than 1,700 people have jumped to their deaths since it opened in 1937, a stainless-steel net is being built that will extend 20 feet beyond the walkways. In 2012 at New York University’s Elmer Holmes Bobst Library, perforated metal walls were installed in the 12-story atrium after three students jumped to their deaths there.
Far more deaths might be prevented by making guns less easily accessible. Nearly half of suicide deaths involve a firearm, and more than 80 percent of attempts made with one result in death. According to a 2004 study in the American Journal of Epidemiology, having a firearm in the home is associated with an increased risk of suicide. But removing firearms as a method of suicide prevention gets tangled up with the politically explosive issue of gun control, so this is one area where change is not likely soon.
Treatment That Works
Better prediction and questions—even methods of thwarting attempts—only help if clinicians can turn to treatments that reduce suicidal thoughts and behaviors and restore quality of life. One notable approach, dialectical behavior therapy (DBT), has already proved itself—consistently reducing suicide attempts by about half in certain patient populations.
Developed in the 1980s by Marsha Linehan, a clinical psychologist at the University of Washington, to treat suicidal patients with borderline personality disorder, the therapy consists of an intensive regimen, requiring multiple meetings every week for a year and extensive training for therapists. Perhaps the difficulty of fielding enough trained professionals and the commitment required for treatment explain why DBT alone has been unable to make a dent in suicide rates.
To have a broader impact, treatments for suicide will have to scale up and be supplemented with options accessible outside of emergency rooms or even by a download from the iTunes Store. A study published in 2017 in the American Journal of Psychiatry showed that low doses of ketamine, an anesthetic drug, brought about a significant reduction in suicidal thoughts within 24 hours, considerably faster than other antidepressants. A gamelike app developed by Franklin and Nock also shows promise. It matches suicide-related images—blood, wounds and knives—with aversive pictures of snakes, spiders and the like. It then applies classical conditioning methods—training someone to change his or her natural response to a stimulus—to make people dislike the idea of suicide. In three randomized trials with participants who had recent suicidal thoughts, a few minutes of daily play for a month consistently decreased risk of suicidal behavior, although the effects disappeared when playing stopped. The game, called Tec-Tec for therapeutic evaluative conditioning, is now available on the App Store.
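The conditioning schedule itself is easy to sketch: on each trial, a suicide-related cue appears together with an aversive image so that the two become associated. The file names and trial structure below are invented for illustration:

```python
# Conceptual sketch of an evaluative-conditioning session like the one the
# app uses: repeatedly pair suicide-related cues with aversive images.
# Stimulus names and the 40-trial session length are invented.
import random

SUICIDE_CUES = ["wound.png", "knife.png", "blood.png"]
AVERSIVE_IMAGES = ["snake.png", "spider.png", "sludge.png"]

def build_session(n_trials=40, seed=None):
    rng = random.Random(seed)
    return [(rng.choice(SUICIDE_CUES), rng.choice(AVERSIVE_IMAGES))
            for _ in range(n_trials)]

for cue, pairing in build_session(n_trials=3, seed=1):
    print(f"show {cue} alongside {pairing}")
```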
Bryan has focused on identifying the ingredients that work in DBT and other cognitive therapies and on developing treatments that can be taught to non–health care providers. “The essential ingredients boil down to two underlying factors,” he says. “The first is emotional dysregulation, the ability to identify what we’re feeling and then the capacity to change it. The second key element is cognitive flexibility, the ability to generate options or not get stuck in certain thought processes, beliefs or assumptions.” These two elements can be tackled through such tactics as mindfulness or relaxation training, restructuring negative thinking and encouraging social connections. Bryan’s initial 12-session therapy, which he calls brief cognitive-behavioral therapy, incorporated these elements and reduced suicide attempts by 60 percent.
The crisis response plan, his 30-minute intervention, originally came about as an emergency piece of the longer therapy. “The idea was, let’s do this while someone is in crisis and reduce the person’s risk in the short term, then he or she will get connected with ongoing mental health treatment, and that will provide the long-term solution,” Bryan says. But in a six-month follow-up of patients who had undergone only crisis response planning, the effect not only held, it strengthened. “That’s gotten us to think very differently about treatments,” he notes. “How could something so simple be so potent? That’s where we are now. What do we need to do next to figure this out to make it work better?”
Asking that question more broadly may move the field toward the ambitious goal set by the National Action Alliance for Suicide Prevention, a public-private partnership, of reducing suicide rates by 20 percent by 2025. Reaching that objective would provide tangible proof that the pain and hopelessness that lead a person to want to die can be anticipated, addressed and ameliorated.