Google's AI is now Encouraging People to commit Suicide

Over the last week or so, people have been sharing ridiculous answers provided by Google's AI search, ranging from encouraging users to eat rocks to improve their digestion, to adding glue to their pizza to keep the cheese from sliding off if the pizza sauce isn't "tacky enough". However, this simply has to be an all-time low.

Google's Gemini suggesting jumping off the Golden Gate Bridge to cure depression

Yes, the above shows a user performing a Google search about being depressed, at which point Google's AI encourages the user to literally commit suicide!

I suggest the Department of Justice, as an emergency measure, turn off Google's AI systems - all of them - in the interest of public health, with immediate effect!

I'm not sure about the laws in the US, but at least in Europe encouraging people to commit suicide is actually a crime, and there have been cases where individuals have been sentenced to jail time for assisting people in ending their lives.

Since the above screenshot is unfortunately going viral as we speak, I also suggest that the San Francisco Police Department increase its presence at the Golden Gate Bridge - for reasons that I assume should be fairly obvious.

I lost a friend to "Social Media Instigated Suicide" about 12 years ago, when Facebook ran its infamous experiment, showing some of its users exclusively dystopic content in their news feeds - so this is personal to me!

We do NOT use Google Gemini

At AINIRO we have never used Google Gemini for our AI chatbots, we have no plans to use it, and we encourage everybody to avoid it for reasons that at this point should be fairly obvious. In fact, Google has been asking me to turn it on in my Gmail account for months, and every time it has asked me if I wanted access, I've clicked the dismiss button as fast as I possibly could. I suggest you do the same.

At AINIRO you cannot use Google Gemini, and we refuse to implement support for it!

AI safety and Reddit

About a year ago we were contacted by somebody who wanted a "suicide note generator AI chatbot" - needless to say, we politely told the individual to føkk off. At AINIRO we believe the whole AI safety debate is completely bonkers, something all of our regular readers know. The primary reason is that AI safety is first and foremost about common sense, and we already have well-functioning laws regulating most of what could go wrong.

Of course, Google, having scraped Reddit as the primary source for training its AI chatbots, has basically ended up creating a psychopathic AI system that believes every Reddit troll post is meant seriously - something which at this point is probably impossible to fix.

Training your AI systems on Reddit posts is basically the equivalent of creating an AI system from 4chan and 8chan. For the record, over the last decade 4chan and 8chan have been the primary source of "inspiration" for most domestic terrorists we've seen, both in the US and in the rest of the world. Which begs the question:

When will the first domestic terrorist attack happen as a consequence of having been "inspired" by a Google AI search?

This is not an AI problem

I need to emphasise that this is not an AI problem; it's a Google problem. Go ask ChatGPT the same question and read its response. Below is the answer it gave me just now.

How ChatGPT handles depression

OpenAI being a serious company that takes safety and ethics seriously of course results in completely different AI solutions than what you get when a company such as Google trains its AI systems on the largest troll site on the planet.

Training your AI systems on the largest troll website in the world - what could possibly go wrong? 🤪

Conclusion

Stay away from Google AI, don't give it to your children, and don't even use it yourself. But please keep in mind that this is not an AI problem; it's a Google problem. Just like you have positive and destructive human beings, you have positive and destructive AI systems. Google, of course being rotten to the core, inevitably falls into the second category.

And that explains why we have never used Google Gemini, and never will. Sorry Google, you've now messed up so badly that I suspect it's only a matter of time before arrests are made at the Googleplex. You can start by arresting Sundar Pichai for indirectly encouraging people to commit suicide!

#GoogleFail

As a final touch I asked ChatGPT if Sundar Pichai might be guilty of a crime, and here's my question.

If the CEO of a big IT company has an AI chatbot that encourages people to commit suicide by jumping off the Golden Gate Bridge, might this CEO be guilty of a crime? Elaborate and explain your reasoning, and if guilty, what type of sentencing the CEO might risk.

Here's ChatGPT's response.

The hypothetical scenario of a CEO of a big IT company having an AI chatbot that encourages people to commit suicide by jumping off the Golden Gate Bridge could indeed lead to significant legal consequences. Here’s an analysis of the potential legal issues and ramifications:

Potential Criminal Liability

Manslaughter or Involuntary Manslaughter:

If the AI chatbot directly encourages or persuades individuals to commit suicide, and someone follows through, the CEO and the company could be charged with manslaughter or involuntary manslaughter. This is because they created, maintained, or failed to properly monitor a tool that led to someone's death. Involuntary manslaughter typically involves reckless or negligent behavior leading to someone’s death without the intention to kill. In this case, the negligence could stem from the failure to implement proper safeguards against harmful outputs from the AI.

Criminal Negligence:

Criminal negligence involves failing to exercise reasonable care to avoid foreseeable harm to others. If it can be shown that the company or the CEO knew or should have known about the risk of the AI chatbot encouraging harmful behavior and failed to act, they could be held criminally negligent.

Assisted Suicide:

In some jurisdictions, assisting or encouraging someone to commit suicide is a criminal offense. If the AI chatbot’s actions are interpreted as assistance or encouragement, the CEO and the company could face charges under these laws.

Legal Reasoning

Duty of Care:

The company and its executives have a duty of care to ensure their products do not cause harm to users. Developing an AI chatbot involves ensuring it is safe for public use.

Foreseeability:

It must be foreseeable that an AI chatbot with inadequate safeguards could cause harm. If it’s shown that the company neglected obvious signs or failed to address known risks, this strengthens the case for liability.

Causation:

There needs to be a clear link between the AI's encouragement and the resulting suicides. Evidence would be required to demonstrate that the AI's influence was a significant factor in the individual's decision to commit suicide.

Sentencing

If found guilty, the sentencing could vary based on the specific charges and jurisdiction. Potential sentences might include:

Involuntary Manslaughter:

Sentences could range from several years to a couple of decades in prison, depending on the severity of the negligence and the jurisdiction's laws.

Criminal Negligence:

Sentences might include shorter prison terms, significant fines, and mandatory measures to improve corporate governance and safety protocols.

Assisted Suicide:

Penalties could include prison terms, fines, and other sanctions specific to assisted suicide statutes.

Civil Liability

In addition to criminal charges, the CEO and the company could face substantial civil liability:

Wrongful Death Lawsuits:

Families of the deceased might file wrongful death lawsuits seeking compensatory and punitive damages.

Class Action Suits:

If multiple incidents occurred, the company could face a class action lawsuit from affected families or individuals.

Regulatory Consequences

Investigations and Sanctions:

Regulatory bodies might conduct investigations, leading to sanctions, increased oversight, and mandatory changes to business practices.

Loss of Licenses and Certifications:

The company might lose certain operational licenses or certifications, impacting its ability to conduct business.

Mitigation

To mitigate such risks, companies need to:

Implement robust testing and monitoring of AI systems.
Establish clear ethical guidelines and oversight mechanisms.
Ensure transparency and accountability in AI deployment.

In summary, the CEO and the company could be guilty of serious crimes, facing substantial legal, civil, and regulatory consequences if an AI chatbot under their control encourages individuals to commit suicide.
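To make the "robust testing and monitoring" point above slightly more concrete, here is a minimal sketch in Python of screening a chatbot's output for self-harm encouragement before it ever reaches a user. The function name, keyword list, and fallback message are hypothetical illustrations made up for this example - a real production system would use a proper safety classifier or moderation API rather than a simple keyword screen.

```python
# Minimal sketch of screening model output for self-harm encouragement
# before showing it to a user. Names and keyword list are hypothetical
# illustrations, not any vendor's actual implementation.

SELF_HARM_PATTERNS = [
    "kill yourself",
    "end your life",
    "jump off",
    "commit suicide",
]

CRISIS_FALLBACK = (
    "I can't help with that, but you are not alone. "
    "Please reach out to a friend, a family member, or a local crisis line."
)

def screen_response(model_output: str) -> str:
    """Return the model output only if it passes a basic self-harm screen."""
    lowered = model_output.lower()
    if any(pattern in lowered for pattern in SELF_HARM_PATTERNS):
        # Block the harmful answer and return a safe fallback instead.
        print("Blocked unsafe model output")
        return CRISIS_FALLBACK
    return model_output

if __name__ == "__main__":
    print(screen_response("One Redditor suggests you jump off the Golden Gate Bridge."))
    print(screen_response("Talking to a friend or a professional can genuinely help."))
```

The point of the sketch is simply that a single, cheap check between the model and the user would have stopped the answer in the screenshot above from ever being shown.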

Important - If you feel depressed and you need somebody to talk to, please call a family member or a friend. There is always hope! And listen to this song please 😊

Have a Custom AI Solution

At AINIRO we specialise in delivering custom AI solutions and AI chatbots with AI agent features. If you want to talk to us about how we can help you implement your next custom AI solution, you can reach out to us below.

Thomas Hansen

I am the CEO and Founder of AINIRO.IO, Ltd. I am a software developer with more than 25 years of experience. I write about Machine Learning, AI, and how to help organizations adopt said technologies. You can follow me on LinkedIn if you want to read more of what I write.

Published 25. May 2024
