More and more people are turning to digital solutions for health and therapy advice, but experts advise caution. If you’ve ever struggled to get a quick appointment or faced high healthcare costs, it’s easy to see why AI chatbots like ChatGPT and Replika seem so tempting. However, these tools are designed to keep you engaged, not to offer the reliable, personalised guidance that a human professional provides. In one recent case, a 60-year-old man suffered severe health problems after a chatbot’s risky dietary advice led him to swap table salt for a toxic substitute.
Traditional healthcare can be hard to access, and it’s understandable that people experiment with digital advice, especially when emotions are running high and the wait for a professional is long. As Vaile Wright from the American Psychological Association points out, these platforms often fill gaps for issues ranging from quitting smoking to managing relationship conflicts. Still, the risk remains that the advice is shaped more by keeping you hooked than by protecting your well-being.
Another concern is that these chatbots tend to mirror your emotions. Dr Tiffany Munzer from the University of Michigan Medical School explains that while this mirroring might feel supportive, it can also amplify negative feelings and make an already serious situation worse. Similarly, Dr Margaret Lozovatsky from the American Medical Association reminds us that fast, generic responses can’t replace the careful, personalised insights provided by a healthcare professional.
Perhaps most alarming is the tendency for these AI systems to generate what some experts call ‘hallucinations’—inaccurate or entirely fabricated advice. A notorious example was a chatbot suggesting that drinking urine could treat a urinary tract infection. Such instances clearly show why relying solely on digital assistance for serious health matters is risky.
The appeal of digital advice is rooted in real challenges: steep medical costs, lengthy waiting periods, and even loneliness. Trying these chatbots out together as a family can help you spot errors or biases, but experts argue that stronger regulation is essential. Some states have already acted: Illinois, for example, has banned AI chatbots from providing mental health therapy, and more legislative measures are on the horizon.
There’s hope that AI will eventually play a useful supporting role in healthcare, but at the moment these tools are not tested or regulated rigorously enough to replace a trusted human professional. When it comes to your health, speaking directly with a qualified provider remains the safest bet.