I don't think any malfunctioning AI fits here. Of course, anyone who wants to treat their disorder should consult someone competent in the topic or learn about it themselves, not just rely on a bot. I'm pretty sure bots giving health advice come with warnings encouraging users to do exactly that. It is also not a reason to avoid using AI for such problems. In fact, the more important a problem is, the more reasonable it is to make consultation on it available to everyone, and applying AI increases availability.
I think the angle I was coming from was that the company made all its employees redundant and replaced them with an AI bot that was obviously poorly trained for the use case. If you are contacting a dedicated healthcare service, like the National Eating Disorders Association in this example, you expect that the service will have appropriate safeguards in place and give you at least safe advice. You can add a disclaimer saying the bot should not be relied on by itself, but we are talking about clinically vulnerable people who may not act on that information correctly.
Using AI tools that are neither interpretable nor explainable for this kind of healthcare is extremely dangerous. While it may expand access to care for some, it increases health risks for others.
https://www.independent.co.uk/tech/ai-eating-disorder-harmful-advice-b2349499.html