


Eating wild mushrooms is a famously dangerous hobby. If you are an expert forager, you’ll know what grows in your area, where to find it, and how to be absolutely sure that you’ve found an edible species rather than a poisonous one. If you aren’t, you could end up chowing down on mushrooms with names like “death cap” and “destroying angel.” 

It takes years of experience and a keen eye for detail to become an expert in identifying mushrooms. There are no easy rules for telling good from bad; the poisonous ones often look very similar to popular, tasty edible mushrooms. But you have to know that this confusion is possible, and that you, as a beginner at this, are liable to fuck it up. Join your local mycological (mushroom-studying) society, and you can start learning from the experts. 

You may think there’s a shortcut: can’t you just download an app? If iNaturalist (for example) can tell you that the white-flowered tree in your neighbor’s yard is a dogwood, it should be able to tell you what mushroom you found in the woods, right? It cannot.

AI mushroom apps could literally kill you

In an in-depth report for the consumer advocacy group Public Citizen, wild mushroom enthusiast Rick Claypool shows all the ways that AI-powered identification apps and AI-generated field guides can kill you or make you sick if you trust them. 

He cites an example of Google Lens identifying a mushroom nicknamed “the vomiter” as a different mushroom it described as “choice edible.” (The person who posted the photo got very sick, but survived.) In an even scarier 2022 incident, an Ohio man used an app to confirm that some mushrooms he found were edible—the app said yes—and ended up in the hospital fighting for his life. (An experimental treatment may be what helped him pull through; 40% of people who eat toxic Amanita mushrooms, the type he’s thought to have eaten, end up either dying or needing a liver transplant.) 

As Claypool points out, real live mushroom experts do not look at a picture and say “yep, that’s edible.” They’ll ask to see details of the underside of the cap and the base of the stem, they’ll want to know exactly where and when it was found, and they may recommend further identification steps, like making a spore print. They’ll also be able to say how sure they are of their conclusion. Claypool notes: “An app that responds to an identification attempt with a vague or non-committal answer may be perceived as malfunctional instead of cautious.”

He also points out that identifying the species is not the only step in knowing whether mushrooms are safe to eat: “The first mushrooms novice foragers find are often mushrooms that are beyond the state of freshness required for safe consumption. Foragers are in a race against mold, insects, slugs, and everything else in the wild that eats mushrooms. Unless you know the signs, whether a mushroom is infested with maggots or grubs might not be obvious until it’s cut.”

AI is not “intelligent” and never has been

The term “artificial intelligence” is a buzzword, a nickname, a fantasy. It is not a description of what these apps are or do. It was coined by scientists who dreamed about what might one day be possible, then popularized by science fiction. The creators of tools like ChatGPT chose it because it sounds exciting and futuristic. 

Never forget that AI hype is mostly just marketing from big tech companies that hope to get money from other tech companies before the bubble bursts. This will all die down once people realize AI is not actually doing anything useful for anybody who cares about the output, but it will take a while for the tech bros to figure that out.

Claypool’s article lays out several things that AI can ostensibly do for mushroom identification, and the deadly flaws in each: 

  • Photo identification, through mushroom apps: Even a human expert cannot identify every mushroom with certainty from photos alone. 

  • AI-created guidebooks: These have been found to contain incorrect information. (It hasn’t been conclusively proven that the guidebooks in question were written by AI, but they sure look like it.)

  • AI-generated pictures: When Claypool tested image generation tools, they routinely drew the features of edible and toxic mushrooms incorrectly, and mislabeled them to boot. 

  • AI descriptions of pictures: Mushroom experts use specific terminology to describe the features of mushrooms in guidebooks. When Claypool asked an AI tool to describe a photo of a toxic mushroom, it said the mushroom had “free gills” when it actually had attached gills, and got other identifying features wrong.

  • AI-summarized search results: Google happily provided a recipe for cooking toxic mushrooms, claiming that boiling can remove toxins. (It cannot; the deadliest mushroom toxins, such as the amatoxins in death caps, are heat-stable.)

The AI tools Claypool tested also dropped bits of misinformation here and there in the process, implying that toxic mushrooms are brightly colored and that brightly colored mushrooms are toxic (neither is true as a rule).

The bottom line for you and me? AI doesn’t actually “know” anything. These algorithms are better thought of as predictive: ask one a question, and it writes a prediction of what an expert’s answer to that question might look like. Sometimes it’s good at this kind of prediction, and sometimes it’s absolutely terrible. 

Just as Futurama’s robot Bender made terrible food for his human friends because he didn’t understand the concept of taste, AI produces text or images that superficially look like what it thinks you asked for, without understanding the concepts involved. AI does not know toxic mushrooms from edible ones. It doesn’t even know what a mushroom is. It just spits out words and images that it thinks will make you happy, and it does not know how to care whether you live or die.




