Mar 31, 2025
AI mental health apps, ranging from mood trackers to chatbots that mimic human therapists, are proliferating on the market. While they may offer a cheap and accessible way to fill gaps in the mental healthcare system, there are ethical concerns about overreliance on AI for mental healthcare, especially for children. Most AI mental health apps are unregulated and designed for adults, but there is a growing conversation about using them with children.
Bryanna Moore, PhD, assistant professor of Health Humanities and Bioethics at the University of Rochester Medical Center (URMC), wants to make sure these conversations include ethical considerations. "No one is talking about what is different about kids—how their minds work, how they're embedded within their family unit, how their decision-making is different," says Moore, who shared these concerns in a recent commentary in the Journal of Pediatrics. "Children are particularly vulnerable. Their social, emotional, and cognitive development is just at a different stage than adults."
In fact, AI mental health chatbots could impair children's social development. Evidence shows that children believe robots have moral standing and mental life, which raises concerns that children, especially young ones, could become attached to chatbots at the expense of building healthy relationships with people. A child's social context, their relationships with family and peers, is integral to their mental health. That's why pediatric therapists don't treat children in isolation: they observe a child's family and social relationships to ensure the child's safety and to include family members in the therapeutic process. AI chatbots don't have access to this important contextual information and can miss opportunities to intervene when a child is in danger.