November 2, 2024

When a child or teenager attempts or dies by suicide, it sets off a desperate search to understand why. While that’s true of many suicide attempts and deaths regardless of the person’s age, a child’s vulnerability and relative innocence create a particularly heartbreaking contrast with their feelings of hopelessness.
A new study aims to better understand one set of risk factors for youth: their online experiences. Published Monday in JAMA Network Open, the study analyzes data collected by Bark, a parental and school monitoring app that tracks a child’s online activity and uses artificial intelligence to detect signs of recent or imminent self-harm or suicidal thinking or behavior. Once the system flags concerning activity, like a Google search for suicide methods or harassing instant messages from a peer, it alerts the child’s caregiver, or a school official if the monitored device is used for classwork.
The researchers, a team of scientists at the Centers for Disease Control and Prevention and Bark, drew data from a sample of more than 2,600 school districts and retrospectively identified 227 child and teen users whose online activity had triggered a suicide attempt or self-harm alert sent to school administrators, meaning their activity indicated they were contemplating, or had engaged in, either act.
After matching those cases with 1,135 controls, or students whose online history hadn’t led to an alert, the researchers found that suicide attempts and self-harm were associated with exposure to the following types of content: cyberbullying, violence, drugs, hate speech, profanity, depression, low-severity self-harm, and sexually suggestive language or media, which could depict graphic acts or abuse. When a student experienced five or more of those risk factors, they were 70 times more likely to trigger a suicide attempt or self-harm alert.
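To make a figure like “70 times more likely” concrete: in a case-control design such as this one, that kind of estimate is typically expressed as an odds ratio comparing how often cases and controls were exposed to the risk factors. The sketch below shows only the basic arithmetic, using made-up placeholder counts; it is not the study’s actual analysis, which would likely rely on matched or adjusted methods such as conditional logistic regression.

```python
# Illustrative sketch of the odds-ratio arithmetic behind a case-control
# comparison. All counts below are hypothetical placeholders, not figures
# reported in the JAMA Network Open study.

def odds_ratio(exposed_cases: int, unexposed_cases: int,
               exposed_controls: int, unexposed_controls: int) -> float:
    """Unadjusted odds ratio for an exposure (e.g., five or more of the
    flagged content types) among cases (students who triggered an alert)
    versus controls (students who didn't)."""
    case_odds = exposed_cases / unexposed_cases
    control_odds = exposed_controls / unexposed_controls
    return case_odds / control_odds

# Hypothetical example: if 60 of the 227 cases and 8 of the 1,135 controls
# had been exposed to five or more risk-factor content types, the odds ratio
# would be (60/167) / (8/1127) ≈ 50.6 -- cases would have roughly 50 times
# the odds of that exposure.
print(odds_ratio(60, 167, 8, 1127))
```

Because the study matched each case to controls, its published estimate would come from a matched analysis rather than a raw two-by-two calculation like this, so the reported number can’t be reverse-engineered from counts of this kind.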
“The use of technology to help prevent youth suicide is an evolving area,” Dr. Steven Sumner, the study’s lead author and senior advisor for data science and innovation at the CDC’s National Center for Injury Prevention and Control, said in an interview. “As a first step, this research shows that red flags related to youth mental health can be spotted earlier and this may open up new opportunities to connect with and help children.”
The study is thought to be the first of its kind because it tracked specific types of internet activity over time, rather than just screen time and self-reported behavior, and evaluated subsequent evidence of suicidal thinking or behavior or self-harm. It didn’t, however, establish a causal link between those factors and outcomes, nor were the researchers able to analyze data from the students’ own devices or determine whether students were ultimately hospitalized for an attempt or self-harm, because the Bark program used in schools doesn’t have access to private data or medical records. (Bark also offers monitoring services for parents.)
The study hints at the promise of AI monitoring as a suicide-prevention tool for youth, some of whom may not share their feelings or struggles with an adult who can help them. It also raises important questions about relying on the technology to predict suicide risk, both because human connection is profoundly important to prevention and because algorithms can unintentionally incorporate biases into their analysis, missing or misinterpreting the significance of various online interactions and activities.
Parents and youth might find the study compelling because it pinpoints the negative online experiences that precede a suicide attempt or severe self-harm alert. The researchers discovered that signs of ongoing depression, like expressions of hopelessness and negative self-esteem, had the strongest association among all the risk factors. Cyberbullying was the most prevalent experience and often showed up in the form of name-calling, mean-spirited comments, and threatening messages. Exposure to profanity may surprise some, but the researchers speculated that cursing could reflect difficulty managing emotions as a result of poor mental health, or that it might be a proxy for life stressors.
Nance Roy, Ed.D., chief clinical officer of the youth suicide-prevention nonprofit The Jed Foundation and an assistant clinical professor in the department of psychiatry at Yale School of Medicine, called the new study well done but characterized monitoring of a student’s online activity as one tool of many.
The Jed Foundation, which works with schools and colleges to develop suicide prevention plans, uses a multi-pronged approach to reaching youth. That includes helping them develop life skills, promoting social connectedness, and ensuring student access to effective mental health treatment, among other strategies.
“The more tools we have the better, but I think it doesn’t take the place of or substitute every teacher, every staff member, every coach, every student, everyone in a school system being trained and educated to know the signs of struggle, to know what to look for,” said Roy.
Similarly, parents shouldn’t deceive their child by monitoring online activity without their knowledge, or think of it as a “catchall” for identifying risk. Instead, Roy said parents should offer ongoing support and familiarize themselves with signs of suicidal thinking or behavior, which include sustained withdrawal from hobbies and friendships, changes in eating and sleeping, and alcohol and drug use.
“There’s nothing to replace the parents’ eyes or the caregivers’ eyes on the child,” she said.
Munmun De Choudhury, Ph.D., an associate professor in the School of Interactive Computing at Georgia Tech who was not involved with the study, said it was clever and well-designed. De Choudhury directs the Social Dynamics and Wellbeing Lab at Georgia Tech, where her team analyzes social media to glean data-driven insights about how to improve well-being and mental health. (She collaborates with some of the study’s co-authors but wasn’t aware of the paper prior to publication.)
De Choudhury said the findings prompted her to consider the role social media platforms should play in reducing children’s negative experiences. Typically, parents respond to such challenges by reducing screen time, but De Choudhury said that platforms need to develop meaningful solutions that target cyberbullying, violent content, hate speech, and self-harm, among other risk factors. While many platforms offer related resources, it’s not clear how much of a difference they currently make. Meanwhile, there are business incentives to keep users engaged, so platforms’ products may not sufficiently alert children to harm, or protect them from it in the first place.
“These are bad things and this paper shows that they’re having an adverse effect on the mental health of youth,” she said. “We need to do something about these bad uses if we still want to reap the positives of these platforms.”
De Choudhury said that ethically using monitoring programs to predict suicide risk for youth hinges on obtaining their active consent. For the data collected by Bark and used in the study, parents provided their permission. It’s unclear to what extent students knew their activity was being tracked.
She also noted that it’s crucial for monitoring programs like Bark, which does provide some information about how its algorithm works, to be transparent about the AI that powers the analysis of online activity. In general, critics of AI in public health and medicine say that algorithms aren’t neutral and can easily reproduce racial and ethnic disparities. As experts identify concerning trends, like the recent increase in suicide rates among Black children and teens, the work of building prediction algorithms to save lives must address the pitfalls of using AI.
Profanity as a risk factor, for example, might reflect unintentional bias if Bark’s model is trained on large troves of data that “represent the majority voice,” said De Choudhury.
“What is the sensitivity of these algorithms to the conversational styles of different demographic groups?” she said, suggesting that risk factors might vary as a result.
Bark said that its algorithm is updated with the latest in teen slang, and that profanity often correlates with other concerning signals like violence, bullying, and depression. The company uses various techniques to minimize bias, including ongoing training for how to label data accurately.
If you want to talk to someone or are experiencing suicidal thoughts, Crisis Text Line provides free, confidential support 24/7. Text CRISIS to 741741 to be connected to a crisis counselor. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 8:00 p.m. ET, or email [email protected]. You can also call the National Suicide Prevention Lifeline at 1-800-273-8255. Here is a list of international resources.