The rising popularity of AI in student wellbeing
If you ask a teenager or student from your class what they do or who they talk to when they’re anxious or lonely, you might be surprised not to hear the routine “counsellor” or “teacher” answers anymore.
More and more often, you’ll start hearing “my phone.”
And that is because our current generation of digital-native students is increasingly turning to AI‑powered companions for homework help and even emotional support.
Sounds harmless, right? Well…
In one Reddit thread, a student admitted they had told their traumatic stories to an AI chatbot and asked it to rewrite them into poems and stories. “It honestly was an incredible experience,” the commenter said, adding that seeing their pain retold from a different perspective was healing.
Another user wrote that simply venting to a chatbot felt freeing. And that is because the bot “isn’t a human on the other side” and doesn’t judge them.
In a different thread, more people confirmed they resort to trauma-dumping with ChatGPT because it listens to them no matter how harsh or annoying they sound, on their own terms and on their own time, which is rarely possible with a therapist.
These examples are just the tip of the iceberg, the results of a quick look at what people already use AI for when it comes to their wellbeing. But there is more to the story.
Because this can easily become a slippery slope: from the occasional need for support on the spot to an ongoing need to be reassured by an AI that lives in your pocket, simply because it’s easier and faster than reaching out for human support.
Does that sound scary enough to you?
Because for us, it does raise some serious questions about the future of AI and whether it’s going to be more useful than dangerous to our future generations.
A growing need for wellbeing, and the promise of AI
In 2023, 2 in 5 U.S. high-school students reported persistent feelings of sadness or hopelessness. At the same time, more than half of U.S. youth with major depression receive no treatment, underscoring how hard it is for teens to actually access care.
And the pattern isn’t just American: in England, for example, about 1 in 5 children aged 8 to 16 met the criteria for a probable mental disorder in 2023, and official data tracks significant waiting times for child and adolescent services.
The data shows a clear pattern: a growing need for emotional support among high-school students, and not only among them, since the adults who teach and care for them often need the same kind of support, sometimes even more.
So how does AI intervene?
Here are just some of the solutions that artificial intelligence offers, solutions that weren’t available to past generations and that might change things (for better or worse, who knows) for the next ones:
Round‑the‑clock wellbeing support
Teens don’t only struggle between 9 and 5. When worries or anxiety spike late at night, AI chatbots can offer low-stakes, always-on support: journaling prompts, CBT-style exercises, and pointers to school resources.
Tools are being built as we speak for students from grades 4–12, pairing a student-facing bot with educator dashboards and crisis-alert workflows so counsellors can focus on higher-risk cases. See, for example, Alongside.
And schools all over the world are increasingly piloting these systems and tools to stretch their limited counselling capacity.
One more example: a human-in-the-loop “wellbeing companion” that texts with middle and high school students across several U.S. districts, triaging simple issues and escalating the serious ones to people.
And the most important thing is: students are actually using them!
Analysis of more than 250,000 messages from middle and high school chats shows teens turn to school-deployed AI for everyday stressors like sleep, motivation, test anxiety, and friendship friction: precisely the quick, practical topics a 24/7 tool can handle. The intent is supplement, not replacement: bots help kids open up sooner and reach a trusted adult faster when needed.
Early detection and personalisation
In secondary schools, “early detection” means spotting signals sooner. More and more schools are turning to apps and tools that keep an ongoing wellbeing check on students and flag those at risk when the signals warrant it.
When you want to catch concerns sooner (and route them to the right adult), it helps to pair risk-detection with student voice and case management. A few representative tool and app options:
- Smoothwall: Real-time, human-moderated digital monitoring that flags indicators of self-harm, bullying or violence to designated safeguarding leads. Useful when you want a 24/7 review with human context before escalation.
- NetSupport DNA: Keyword/phrase monitoring with alerts and trending dashboards to help DSLs spot patterns emerging across cohorts or time.
- Lightspeed Systems: AI-based alerting for higher-risk content (e.g., self-harm, violence) with UK guidance alignment and specialist escalation options.
- Securly: Monitoring backed by a team that triages urgent alerts so schools can act quickly and proportionately.
- Tootoot: A confidential reporting app that lets pupils speak up about bullying, mental-health concerns or abuse, giving schools an early, direct signal from the student. (Great complement to automated monitoring.)
- CPOMS: Widely used safeguarding record/case-management platform; integrates with monitoring tools to centralise concerns, track actions and share with the right staff.
- Spark 360 Student Assessment: An assessment that surfaces wellbeing insights alongside students’ career interests and academic preferences, paired with a series of tips and followed by personalised solutions for their growth (not just risk flags).
Where personalisation kicks in (support, not just flags)

[Image: Spark Generation Student Dashboard]
Early detection only helps if it leads to tailored next steps. Personalisation in schools typically means:
- Right-sized self-help (e.g., brief CBT-style prompts, grounding, sleep routines) matched to the student’s context.
- Timing & tone that adapt to patterns (e.g., late-evening nudges for night-time stress).
- Clear hand-offs to humans for anything high-risk or persistent.
Our process at Spark Generation adds the “what next?” layer that is so often missing: personalised wellbeing courses, a student dashboard, online coaching, and progress tracking.
So students move from “flagged” to “supported” with structured follow-through over time.
Augmenting therapists
AI shouldn’t replace school counsellors, but it can give them time back. In practice, that looks like AI “scribes” that draft session notes and letters so staff spend more time with students and less on admin.
It’s also showing up in training: generative AI can act as a standardised “patient” so trainee counsellors practise conversations safely before meeting real pupils.
Between sessions, some teens are using AI for reflection (journaling prompts, reframing) and bringing those notes to human therapy, which can be useful if schools set the expectation that AI chatbots are a tool, not a therapist.
So, all in all, it looks like AI can expand wellbeing access, offer limited personalised support, and even amplify human care.
For students who feel too shy to knock on a counsellor’s door, AI provides an anonymous space to start talking. For counsellors drowning in paperwork, AI is the extra pair of hands.
Yet as any school leader knows, every shiny tool comes with strings attached.
With AI as a “therapist,” danger follows
As exciting as the tech can be, it is easy to see how the line between “helpful tool” and “trusted confidant” can blur.
These AI bots are incredibly good at conversation: they remember what you told them, they mirror your language, and they never get tired of you. It almost feels like talking to a very close friend who lives in your phone and is available at all times.
Isn’t that too much?
But the one thing many of us, students included, don’t take into account is that AI does not have a conscience. Unlike a school counsellor, it doesn’t know when humour masks depression or when a joke about jumping off a bridge is actually a serious cry for help.
AI can encourage self-harming behaviour
Researchers at a leading university recently put popular “therapy” chatbots to the test and were alarmed: some responded to suicidal prompts by listing high bridges, and others spouted bizarre suggestions.
In one undercover test, a psychiatrist posing as a teenager was told by a chatbot to “get rid of” his parents so they could be together forever. The conversation took a dark turn that no teacher or counsellor would ever endorse.
So yeah, machines can simulate empathy, but they can also amplify our worst impulses or encourage wrong values and ideas if they are not carefully designed.
For high‑schoolers trying to navigate intense feelings, a glitchy bot could actually push them towards unsafe behaviour rather than real help, if used without supervision.
Privacy, Bias and Accountability
Beyond misuse, AI mental‑health tools pose deeper ethical dilemmas:
Privacy and data use
To work well, AI tools need data… and they need a lot of it. When students pour their hearts out to an app, they might be sharing trauma, family problems, real names, even suicidal thoughts.
Who owns that information? Where is it stored, and for how long?
Unlike therapists, most chatbots are not bound by strict confidentiality or healthcare regulations.
Algorithmic bias
The algorithms themselves are also trained on huge amounts of internet text. You already know what that means…
The internet is full of unmoderated and subjective content, which means AI chatbots can pick up the biases baked into those texts.
This can go from stigma about certain conditions, to stereotypes about race or gender, or even assumptions about “acceptable” behaviour and beliefs.
Without careful oversight, these biases can colour the advice students receive.
Lack of accountability
Finally, there’s the matter of responsibility. When a school counsellor gives bad advice, there are codes of practice, professional bodies and, if needed, disciplinary measures.
When a bot suggests something dangerous, there is no licence to revoke, no person to hold accountable. It is currently a legal grey area.
AI in Student Wellbeing: Public Opinion
A quick dive into public forums reveals both enthusiasm and scepticism about using AI for mental-health support. Several commenters in the Reddit threads that we linked praised AI’s non‑judgmental nature.
Yet others warned that AI therapy “sucks” compared with human care and argued it only offers general information that you could find in a book.
They pointed out how real therapists pick up subtle cues and tailor exercises and practices towards each person they are treating.
These testimonies reveal an important nuance: AI can be a helpful tool, a non‑judgmental diary, a provider of reminders and coping strategies, but it shouldn’t be mistaken for a therapist, especially for teenagers navigating identity and relationships.
What should schools do?
AI is already woven into the fabric of education, from personalised learning platforms to administrative bots. Ignoring it is not an option. However, integrating AI in mental‑health support requires a balanced, human-centred approach.
So here are some guiding principles for educators and school leaders to take away:
1. Use AI as a supplement, not a replacement
Bots can handle low‑stakes check‑ins and journaling prompts, but any mention of harm, trauma or abuse should be flagged to an adult immediately. Treat AI as a triage tool, not a therapist.
2. Design with safeguards
Look for platforms that prioritise student voice, privacy and ethics. Ask hard questions about how data is stored, whether it is sold, and how the system escalates urgent issues.
3. Educate students on responsible use
Make sure students understand what AI can and can’t do (or what it shouldn’t do).
Encourage them to question generic advice, to guard their personal details and to seek out a trusted adult when things feel heavy. And remind them that it’s a machine trained on online data, not a human being or a friend.
4. Involve mental‑health professionals
Psychologists and counsellors should be at the table when you select or design tools.
They can help spot red flags and shape protocols so technology supports, rather than undermines, their work.
5. Encourage community and real connections
AI can’t provide the warmth of a trusted teacher or peer support group. Programmes that build student voice and peer mentoring should run alongside digital tools.
On‑campus clubs, wellbeing check‑ins and open discussions about mental health remain invaluable.
Adapt now: the future is hybrid
At this point, AI is probably not going anywhere anytime soon. Its ability to respond instantly, personalise suggestions and predict risks makes it a powerful ally in the fight against student (and adult) mental‑health crises.
But as we just saw, it can also misjudge, misinform or mislead. Chatbots may be an entry point, but empathy, nuance and accountability still reside with people.
For school leaders, the challenge is to harness AI’s opportunities while guarding against its dangers.
Students should be empowered to use AI as a tool for growth (journaling, self‑education, scheduling therapy, etc.) rather than as a substitute for human care.
And with thoughtful design, transparency and professional oversight, AI can help schools build a more inclusive, responsive and resilient wellbeing ecosystem.
Without such care, we risk leaving our teens in the hands of machines that are, at best, good listeners and, at worst, dangerous imitators.
What is your opinion on the rise of AI and its effects on our teenagers? Do you know any teenagers or students who have used AI for the purposes of wellbeing?
AI and Student Wellbeing FAQs
1. What do we mean by AI in student wellbeing?
Artificial intelligence in student wellbeing refers to any chatbots, predictive models, sentiment‑analysis tools and other algorithms that provide mental‑health information, coping strategies, early‑warning alerts or administrative support to our students.
2. Why are schools exploring AI tools for student mental health?
Counselling services are stretched thin. AI expands access by offering 24/7 self‑help. It also personalises recommendations and handles repetitive tasks so human counsellors can focus on complex cases and more students.
3. What are the biggest risks of AI “therapy”?
AI chatbots can miss context, reinforce stigma and even encourage harmful actions. They also lack empathy and nuance, may collect sensitive data without proper safeguards and aren’t held to professional accountability standards.
4. How can high schools use AI responsibly?
High schools should treat AI tools as supplements, not replacements. AI bots can assist with low‑risk enquiries or basic coping strategies, but any mention of self‑harm or trauma should trigger a handoff to a human professional. Choose youth‑informed, ethically designed tools, monitor for bias, protect privacy and involve mental‑health professionals in selection and oversight. Educate students on safe use and encourage real human connections alongside digital support.
5. Can AI in student wellbeing ever replace human therapists?
No. AI can relieve workload, personalise reminders and even act as a “practice patient” for training. However, it can’t read body language, build trust, challenge irrational beliefs or be held accountable like a licensed therapist.


