By now, it should be obvious that AI is capable of giving really, really bad advice. Sometimes that advice is merely stupid; other times, it’s actively dangerous.
404 Media reports on an incident from the latter category, in which a popular Facebook group dedicated to mushroom foraging was invaded by an AI agent that offered advice on how to cook a dangerous mushroom. The agent in question, dubbed “FungiFriend,” entered the chat of the Northeast Mushroom Identification & Discussion group, which includes some 13,000 members, and proceeded to dole out some truly terrible advice.
In what seems like it must have been a test of the AI agent’s knowledge, one member of the group asked it “how do you cook Sarcosphaera coronaria”—a mushroom that hyperaccumulates arsenic and has led to at least one death, 404 writes. When queried about the dangerous mushroom, FungiFriend informed members that it is “edible but rare,” then added that “cooking methods mentioned by some enthusiasts include sautéing in butter, adding to soups or stews, and pickling.”
404’s writer, Jason Koebler, says he was alerted to the incident by Rick Claypool, the research director for the consumer safety group Public Citizen. Claypool, who is a dedicated mushroom forager, has previously written about the dangerous intersection between AI agents and his hobby, noting that the use of automation to differentiate between edible and poisonous mushrooms is “a high-risk activity that requires real-world skills that current AI systems cannot reliably emulate.” Claypool claims that Facebook encouraged mobile users to add the AI agent to the group chat.
This incident is reminiscent of one from last year, in which an AI-powered meal prep app suggested a sandwich recipe that included mosquito repellent, as well as another recipe that would have produced chlorine gas. In another well-documented incident, an AI agent encouraged users to eat rocks. Suffice it to say, cooking may be one domain that doesn’t really need an AI integration.
Our own experimentation with AI platforms—such as Google’s recently launched AI Overviews—has shown that these algorithm-driven agents often have no idea what they’re talking about (Google’s tool, for instance, once tried to convince me that dogs play sports and suggested adding glue to pizza to keep the cheese from sliding off). For whatever reason, corporate America continues to rush AI integration into consumer-facing applications across the web, despite the obvious risk of pushing out a whole lot of bad advice to the public. The attitude seems to be: it doesn’t matter if the information is incorrect, so long as we don’t have to hire a real human to do the job.