Apple, OpenAI, AI Assistants and Security

Yesterday Apple announced they would be collaborating with OpenAI to provide a ChatGPT-like experience in Siri and on the iPhone. Immediately, Elon Musk said he would ban iPhones from his offices if Apple integrated OpenAI at the "operating system level."

It is therefore with great pleasure that we welcome a security debate around OpenAI wrappers and LLMs, in particular AI assistants, one of our core products released over the last few weeks. Although malicious comments on the internet claim Elon Musk is doing this out of jealousy, he does have some valid points. When building AI assistants and AI chatbots, there are "a bajillion" things that can go wrong. I'm happy to announce we've guarded your cloudlet against all of these out of the box.

How secure is AINIRO?

We've done absolutely everything we can to mitigate the security risks associated with delivering our AI assistant technology to our partners and clients. Below I will walk you through some of the guard rails we've implemented to ease your mind about AINIRO technology.

Prompt Injection Guard Rails

First of all, it's impossible to execute functionality on your cloudlet that hasn't been explicitly declared by the cloudlet's owner to be a part of the AI model. This makes it impossible for a malicious adversary to use prompt injection against your AI chatbot to have it execute code the owner never intended to expose.

This implies that only functionality you have explicitly chosen to expose can actually be executed by others.
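Below is a minimal sketch of what such an allowlist-based function dispatcher could look like. All names here (`ALLOWED_FUNCTIONS`, `dispatch`, `get_weather`) are illustrative assumptions, not AINIRO's actual API.

```python
# Minimal sketch of an allowlist-based function dispatcher.
# All names are hypothetical, not AINIRO's actual API.
from typing import Any, Callable

def get_weather(city: str) -> str:
    """An example function the cloudlet owner has explicitly chosen to expose."""
    return f"Sunny in {city}"

# Only functions explicitly declared by the cloudlet owner end up in this map.
ALLOWED_FUNCTIONS: dict[str, Callable[..., Any]] = {
    "get_weather": get_weather,
}

def dispatch(name: str, arguments: dict[str, Any]) -> Any:
    """Execute a function requested by the LLM, but only if it was declared."""
    function = ALLOWED_FUNCTIONS.get(name)
    if function is None:
        # A prompt injection can make the LLM *ask* for any function name,
        # but undeclared functions are never executed.
        raise PermissionError(f"Function '{name}' is not declared on this cloudlet")
    return function(**arguments)
```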

No Sensitive Data Sent to ChatGPT

In addition, we never send passwords or security tokens to OpenAI, something you can see in for instance this video, where I demonstrate that we send only configuration references to ChatGPT, never the security tokens themselves. This makes it impossible for a malicious user to extract passwords or security tokens by, for instance, asking the chatbot "what was the last message."
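The sketch below illustrates the configuration-reference idea, assuming a hypothetical `query_crm` function and a server-side secrets store; it demonstrates the technique, not AINIRO's actual implementation.

```python
# Sketch: the LLM only ever sees a configuration *reference*; the secret itself
# is resolved server side when the function executes. All names are hypothetical.
import os

# Secrets live only in server-side configuration (environment variables here).
SECRETS = {"crm_api_token": os.environ.get("CRM_API_TOKEN", "")}

def payload_for_llm() -> dict:
    """What gets sent to OpenAI: a reference, never the token itself."""
    return {"function": "query_crm", "auth": "config:crm_api_token"}

def resolve_reference(reference: str) -> str:
    """Runs on the cloudlet during execution; the secret never leaves it."""
    prefix = "config:"
    if not reference.startswith(prefix):
        raise ValueError("Expected a configuration reference")
    return SECRETS[reference[len(prefix):]]
```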

Private File System

Every single cloudlet we deliver has its own unique file system, delivered at the operating system level as persistent cloud volumes. In addition, the Magic Cloudlet process runs under a separate user account with fewer privileges than the root account, and is delivered as a Kubernetes pod. This implies it's not even possible in theory to access another client's data from your cloudlet, or vice versa.
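As a rough illustration, a pod running as an unprivileged user with its own persistent volume could be declared like this using the official Kubernetes Python client; the image and claim names are hypothetical, and this is not AINIRO's actual deployment code.

```python
# Illustrative sketch of a non-root pod with its own persistent volume,
# built with the official Kubernetes Python client. Names are hypothetical.
from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="magic-cloudlet"),
    spec=client.V1PodSpec(
        # Run under an unprivileged user account, never as root.
        security_context=client.V1PodSecurityContext(
            run_as_non_root=True,
            run_as_user=1000,
        ),
        containers=[
            client.V1Container(
                name="magic",
                image="example/magic-cloudlet:latest",  # hypothetical image
                volume_mounts=[
                    client.V1VolumeMount(name="files", mount_path="/magic/files"),
                ],
            )
        ],
        volumes=[
            # Each cloudlet mounts its own persistent volume claim.
            client.V1Volume(
                name="files",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="cloudlet-files",  # hypothetical claim name
                ),
            )
        ],
    ),
)
```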

In addition, the database is not exposed to the internet. At AINIRO we use SQLite, a file-based database, to store passwords, and since it is simply a file on disk, it is never directly reachable from the internet. This implies only the root user can freely access your database, unless you create functionality yourself that somehow exposes it.
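The sketch below shows why a file-based database has no network attack surface: SQLite is just a file, with no server process listening on any port. The path, permissions, and table layout are illustrative assumptions.

```python
# Sketch: a file-based database has no listening socket to attack.
# Path and schema are illustrative assumptions.
import os
import sqlite3

# Hypothetical location inside the cloudlet's private persistent volume.
DB_PATH = "magic.db"

# SQLite is just a file: no server process, no network port.
connection = sqlite3.connect(DB_PATH)

# Restrict the file so only the cloudlet's own user account can read it.
os.chmod(DB_PATH, 0o600)

connection.execute(
    "CREATE TABLE IF NOT EXISTS users (username TEXT PRIMARY KEY, password_hash TEXT)"
)
connection.commit()
connection.close()
```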

Your data is YOUR data!

General Security Info

On top of this, your cloudlet is secured by established IT security best practices. For instance, even if an adversary gains physical access to your database, passwords are hashed using BlowFish-based (bcrypt) hashing with a work factor of 10 and an individual salt per record. This makes it computationally infeasible to brute force your passwords, even with the kind of super computers used by intelligence services.
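For illustration, here is what BlowFish-based (bcrypt) hashing with a work factor of 10 and a per-record salt looks like using the common `bcrypt` Python package; this demonstrates the technique, not AINIRO's exact code.

```python
# Sketch of bcrypt password hashing with work factor 10 and per-record salts.
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt(rounds=10) generates a fresh random salt for every record;
    # a work factor of 10 means 2^10 key-expansion rounds per hash.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=10))

def verify_password(password: str, stored_hash: bytes) -> bool:
    # The salt is embedded in the stored hash, so verification just re-hashes.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("wrong password", stored)
```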

I could go on and list "a bajillion" additional security features we've implemented, but rest assured: we've done everything we can to secure your AI chatbot and your cloudlet, in every way we can.

Wrapping Up

Elon Musk has a point, especially when it comes to letting the AI execute functions. Once the AI can execute logic, security becomes an order of magnitude more important than if all you've got is a simple RAG database. However, we have taken every precautionary measure available to ensure your cloudlet is as safe and secure as possible, to the extent that we can even deliver AI chatbots to the legal industry, which is understandably extremely paranoid about security.

Notice: even with all of the above in place, you still have to be careful. Create strong passwords, and never expose potentially dangerous AI functions through publicly available chatbots. In addition, never, ever, ever supply anyone with your password. At AINIRO we will never ask you for the password to your cloudlet; if somebody does, it's a malicious adversary, and you should ignore that person entirely.

There is no such thing as 100% secure software, but at AINIRO we've taken every precautionary step we can to secure your cloudlet and your AI chatbots.

Want a Custom AI Solution?

At AINIRO we specialise in delivering custom AI solutions and AI chatbots with AI agent features. If you want to talk to us about how we can help you implement your next custom AI solution, you can reach out to us below.

Thomas Hansen

I am the CEO and Founder of AINIRO.IO, Ltd. I am a software developer with more than 25 years of experience. I write about Machine Learning, AI, and how to help organizations adopt said technologies. You can follow me on LinkedIn if you want to read more of what I write.

Published 11. Jun 2024
