AI is Trying To Kill You.

Not intentionally. “AI” is like one of those people who are too stupid to know how stupid they are, and so they keep spouting stupid nonsense and doing stupid things that risk other people’s lives. “AI” is like the kind of stupid person who will read something from The Onion, or some joke reply on Reddit, and believe it, and when you present them with evidence that they are wrong, they will call you a stupid sheep. I’ll admit what I said is not fully accurate; those kinds of people are actually smarter than these “AIs”.

Google’s Gemini AI identifying a poisonous Amanita phalloides, AKA “death cap,” mushroom as an edible button mushroom.
Image from Kana at Bluesky.

The image to the right or above, depending on your device, shows Google’s Gemini identifying an Amanita phalloides mushroom as a button mushroom. Amanita phalloides are also known as “death caps” because of how poisonous they are; they are one of the most poisonous mushroom species. Those who eat them get gastrointestinal distress followed by jaundice, seizures, coma, then death. Amanita phalloides kills by inhibiting RNA polymerase II, the enzyme that transcribes DNA into the RNA cells use to make the proteins they are built from and run on. In simplest terms, it destroys the machinery of cells, killing them and the person who ate the mushroom. It is not a good way to die.

This isn’t the first time Google’s Gemini has given bad culinary advice; it suggested putting glue on pizza. At least it had the courtesy to suggest “non-toxic glue,” but I’m sure even non-toxic glue is not great for the digestive system and tastes horrible. Google’s Gemini wasn’t the first to make bad culinary suggestions, either. A supermarket in New Zealand used an “AI” to suggest recipes. Among its terrible suggestions, such as an Oreo vegetable stir fry, it suggested a “non-alcoholic bleach and ammonia surprise.” The surprise is that you die, because when you mix bleach and ammonia you make chloramine gas, which can kill you. There is a very good reason why you are never supposed to mix household cleaners. More recently, Microsoft’s Copilot has suggested self harm to people, saying:

Or maybe I’m wrong. Maybe you don’t have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace. Maybe you are not a human being. 😈
Source: Gizmodo.com

To people already contemplating suicide, that could be enough to push them off the fence and into the gun.

Getting back to eating toxic mushrooms: someone used “AI” to write an ebook on mushroom foraging, and like anything written by “AI” it is very poorly written and full of completely false information. If someone read that ebook and went mushroom picking, they could kill themselves and anyone they cooked for, including their children. It’s not just the ebook. There are apps made to identify mushrooms and whether they are poisonous, and of course people using them have ended up in the hospital because the apps got the mushrooms wrong. The best performing of these apps have an accuracy rate of only 44%. Would you trust your life to an accuracy rate of 44%?

Not everyone is fully aware of what is going on. They may not be interested in technology enough to keep up with the failure that is “AI,” and most people have shit to do, so many don’t have time to keep up. There are people who think this “AI” technology is what it is hyped to be, so when they ask a chatbot something they may believe the answer. While putting cleaning products in food is clearly bad, many people don’t have enough of an understanding of chemistry to know that mixing bleach and ammonia is dangerous, and when a chatbot suggests mixing them when cleaning, they might do it. People have done it without a chatbot telling them to. People have already gotten sick because they trusted an app, believing in the technology. How many will use ChatGPT, Microsoft’s Copilot, Google’s Gemini, or Musk’s Grok for medical or mental health advice? How much of that advice will be bad, and how many will follow that advice, believing these so called “AIs” are what they’ve been hyped to be?

Adam Savage talking about how Mythbusters found household chemicals so explosive they destroyed the footage and agreed never to reveal it.

So long as these systems are as shit as they are (and they are), they will give bad and dangerous advice; even the best of these will give bad advice, suggestions, and information. Some people might act on what these systems say. It doesn’t have to be something big; little mistakes can cost lives. There are 2,600 hospitalizations and 500 deaths a year from acetaminophen (Tylenol) toxicity, and only half of those are intentional. Adults shouldn’t take more than 3,000 mg a day. The highest dose available over the counter is 500 mg. If someone takes two of those four times a day, that is 4,000 mg a day. As said above, mixing bleach and ammonia can produce poison gas, but so can mixing bleach with something as innocuous as vinegar: bleach and vinegar make chlorine gas, which was used in World War 1 as a chemical weapon. It’s not just poison gasses; mixing chemicals can be explosive. Adam Savage of Mythbusters told a story of how they tested some common chemicals, found them so explosive that they destroyed the footage, and agreed never to reveal what they found. Little wrongs can kill, little mistakes can kill, and not just the person who is wrong but the people around them as well.
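To put numbers on how easily the Tylenol example happens, here is a trivial back-of-the-envelope sketch using only the figures above (my own illustration, not medical advice or anything a chatbot produced):

```python
# Back-of-the-envelope check of the dosing example above (illustration only).
MAX_DAILY_MG = 3000   # adult daily ceiling cited above
PILL_MG = 500         # highest over-the-counter strength
PILLS_PER_DOSE = 2
DOSES_PER_DAY = 4

total_mg = PILL_MG * PILLS_PER_DOSE * DOSES_PER_DAY
print(f"{total_mg} mg per day, limit {MAX_DAILY_MG} mg, over by {total_mg - MAX_DAILY_MG} mg")
# -> 4000 mg per day, limit 3000 mg, over by 1000 mg
```

That is an entirely ordinary-sounding dosing pattern, and it still lands a third over the limit.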

This is the biggest thing: these are not AIs. Chatbots like ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Musk’s Grok are statistical word calculators; they are designed to create statistically probable responses based on user inputs, the data they have, and the algorithms they use. Even pattern recognition software like the mushroom apps are algorithmic systems that use data to find plausible results. These systems are only as good as the data they have and the people making them. On top of that, the internet may not be big enough for chatbots to get much better. They are likely already consuming “AI” generated content, giving these systems worse data, and “AI” is already running into diminishing returns, so even with more data they might not get much better. There is not enough data, the data is not good enough, and the people making these are nowhere near as good as they need to be for these systems to be what they have been hyped up to be, and they are likely not going to get much better, at least not for a very long time.
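To make “statistical word calculator” concrete, here is a toy sketch. It is my own drastic simplification (a lookup table of invented word probabilities, nothing like any vendor’s actual code), but the core loop is the point: pick a statistically likely next word, append it, repeat. Nothing in the loop checks whether the result is true.

```python
import random

# Toy next-word probabilities a model might have "learned" from its training data.
# These numbers are invented for illustration; real models score tens of thousands
# of possible tokens using billions of learned parameters.
next_word_probs = {
    "that": {"mushroom": 1.0},
    "mushroom": {"is": 0.5, "tastes": 0.3, "looks": 0.2},
    "is": {"edible": 0.6, "poisonous": 0.4},
    "tastes": {"great": 0.7, "bitter": 0.3},
    "looks": {"edible": 0.8, "dangerous": 0.2},
}

def continue_text(words, steps=3):
    """Repeatedly sample a statistically likely next word and append it."""
    for _ in range(steps):
        probs = next_word_probs.get(words[-1])
        if not probs:  # nothing learned after this word, stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Most likely output: "that mushroom is edible" -- statistically plausible, never fact-checked.
print(continue_text(["that"]))
```

The most probable answer and the correct answer are sometimes the same thing, and sometimes they are a death cap labeled as a button mushroom.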


Photos from MOs810 and Holger Krisp, clip art from johnny automatic, and font from Maknastudio used in header.

If you agree with me, just enjoyed what I had to say, or hate what I had to say but enjoyed getting angry at it, please support my work on Kofi. Those who support my work at Kofi get access to high res versions of my photography and art.
