The 5 Elements of a Custom AI Chatbot

There are 5 basic elements to a custom AI chatbot. Most people will simply choose the first good-looking chatbot that's not too expensive - but there are good reasons for paying a little bit more to get a little bit more effect. Let me explain why, but first let's walk through the 5 basic elements that create an AI chatbot.

  • Frontend user interface (UI)
  • RAG database (VSS database)
  • Instruction or system message
  • The LLM itself
  • API integrations for creating AI actions or AI workflows

The frontend UI

This is technically the least interesting part of an AI chatbot, but it's still crucial, especially if you want to embed the chatbot on your website. It's the first thing people notice, and it just so happens to be the "wrapping" of the chatbot - so unless it's done right, it will not invite users to engage with it.

If you look at our chatbot, you'll rapidly see that we've spent a lot of energy on its UI. You can try it for yourself here, or look at the screenshot below.

Screenshot: AI chatbot recommending best-selling products and displaying images for one of our partners

In the screenshot above you can see the UI for our chatbot, and how it recommends best-selling products, displays images, and integrates with Shopify to provide live data from a Shopify inventory.

Things to consider

Once you've found a UI you're happy with, there are several technical parts you need to take into consideration. These are as follows.

  • How "heavy" the UI is - does it download several megabytes of resources, or is it small and fast to load? Our chatbot downloads only 50 to 100 kilobytes. Anything substantially larger than this is a no-no.
  • How easy the UI is to interact with. Our chatbot shuts down scrolling on the page and adds a backdrop, which significantly reduces "cognitive noise" and makes it more pleasant to interact with. If you're using an AI chatbot for customer service, users will use your chatbot over and over again. If it's painful to use, this will reflect badly on your business.

UI is actually two things: UI and UX. UI is about how beautiful it looks, while UX is about how it feels to use. The latter is actually more important, and can only be tested by literally hammering your chatbot with dozens of questions, to see if it's still comfortable to use after you've asked it 20+ questions.

An AI chatbot with a bad UX will not be used, and therefore won't help you sell or substantially reduce customer service queries. This is true regardless of how "beautiful" your chatbot is. So UX is actually much more important than UI.

The RAG database

This part is much more important than the UI. A RAG database is typically created by uploading PDF files and/or scraping websites, and the quality difference between vendors here is substantial. When we launched our product we spent almost all of our initial energy here, which meant that for a long time we had a sub-optimal UI, since we wanted to get the foundation right before we focused on the UI and the UX.

In the short term this made us suffer, since we'd lose clients to other chatbot vendors with a more "beautiful" UI - but in the long run I suspect our strategy was the better one, since we now have a super high quality foundation we can build on top of very rapidly.

When you build a skyscraper, you create its foundation first. The same is true for software!
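To make that foundation a bit more concrete, below is a minimal sketch in Python of what a RAG database does when a question arrives: every snippet of scraped content is turned into an embedding vector, the question is turned into a vector too, and the most similar snippets are handed to the LLM as context. The snippet texts, the model name and the library calls are generic illustrations, not our actual implementation.

```python
# Minimal RAG retrieval sketch: embed snippets, embed the question,
# and pick the most similar snippets as context for the LLM.
# Illustration only - snippet texts and model choice are assumptions.
from openai import OpenAI
import numpy as np

client = OpenAI()  # expects OPENAI_API_KEY in the environment

snippets = [
    "We ship worldwide, and orders above $100 ship for free.",
    "Our best selling product is the Model X running shoe.",
    "Returns are accepted within 30 days of purchase.",
]

def embed(texts):
    """Turn a list of texts into embedding vectors."""
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=texts,
    )
    return np.array([item.embedding for item in response.data])

snippet_vectors = embed(snippets)

def retrieve(question, top_k=2):
    """Return the snippets most similar to the question."""
    question_vector = embed([question])[0]
    # Cosine similarity between the question and every snippet.
    scores = snippet_vectors @ question_vector / (
        np.linalg.norm(snippet_vectors, axis=1) * np.linalg.norm(question_vector)
    )
    best = np.argsort(scores)[::-1][:top_k]
    return [snippets[i] for i in best]

print(retrieve("What is your most popular product?"))
```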

How to QA your RAG database

One method to determine the quality of an AI chatbot is to simply crawl and scrape a website and look through your training data. Most AI chatbot providers will allow you to manually view or edit the training data, and if the data looks like a mess - or your chatbot provider doesn't show you the training data at all - it probably implies it's so bad they don't want you to see it, which significantly reduces the quality of the chatbot's responses.
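As a hypothetical example of such a quality check, the sketch below fetches a single page, strips it down to plain text, and prints what a scraper would actually see. If the output is cookie banners, menus and fragments instead of readable prose, that's the "mess" you don't want in your training data. The URL and the libraries used are placeholders, not our crawler.

```python
# Rough quality check of what a scraper would actually extract from a page.
# requests and BeautifulSoup are common choices; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-product-page"
html = requests.get(url, timeout=10).text

soup = BeautifulSoup(html, "html.parser")

# Drop elements that usually pollute training data.
for tag in soup(["script", "style", "nav", "footer", "header"]):
    tag.decompose()

text = soup.get_text(separator="\n", strip=True)
lines = [line for line in text.splitlines() if line.strip()]

print(f"Extracted {len(lines)} lines, {sum(len(l.split()) for l in lines)} words")
print("\n".join(lines[:20]))  # eyeball the first lines - is this readable prose?
```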

Garbage in, garbage out

Regardless of how smart the LLM you're using is, if you feed it with garbage, it will return garbage.

We've spent insane amounts of energy getting these parts right, such as creating our own libraries to turn HTML into Markdown, which is one of the reasons why our AI chatbot can display images - something you can see in the screenshot above.
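We obviously can't share our own libraries here, but the idea is simple enough to sketch. The example below converts HTML to Markdown with the open source html2text package, keeping image references intact so they can later be rendered in answers. This is a generic illustration of the technique, not our actual code.

```python
# Convert a chunk of HTML into Markdown, keeping images and links,
# so the chatbot can later render them in its answers.
# Uses the open source html2text package purely as an illustration.
import html2text

converter = html2text.HTML2Text()
converter.ignore_images = False   # keep ![alt](url) image references
converter.ignore_links = False    # keep [text](url) links
converter.body_width = 0          # don't hard-wrap lines

html = """
<h2>Model X running shoe</h2>
<p>Our best selling shoe, now in stock.</p>
<img src="https://example.com/model-x.jpg" alt="Model X">
"""

markdown = converter.handle(html)
print(markdown)
```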

Instruction or system message

This is an art form in itself, and it could be argued that every single great custom GPT ever created is basically just a very, very, very good instruction or system message. Lately we have improved a lot on our chatbot's ability to generate high quality system instructions - something you can see in the following video.

The point of our technology is that we've got template system instructions, which we braid together with the client's own data, taken from their website - which creates a super high quality system instruction, specifically crafted to sell the client's products and services.

We're basically using ChatGPT to create ChatGPT instructions based upon a pre-defined template instruction, which allows ChatGPT to become a "superman sales executive", with highly relevant instructions created from the client's website itself.
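As a simplified illustration of that "braiding", the sketch below fills a template instruction with data taken from a client's website, and then asks the model to polish it into a final system message. The template wording, the example data, and the model name are all placeholders, not our production prompts.

```python
# Sketch of combining a template instruction with client-specific data,
# then letting the model polish it into a final system message.
# The template, the fields and the model choice are all placeholders.
from openai import OpenAI

client = OpenAI()

TEMPLATE = """You are a sales assistant for {company}.
Only answer questions about {company} and its products.
Recommend relevant products from the list below when appropriate.

Products:
{products}
"""

draft_instruction = TEMPLATE.format(
    company="Acme Shoes",
    products="- Model X running shoe\n- Trail Y hiking boot",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You improve system instructions for sales chatbots."},
        {"role": "user", "content": f"Rewrite this instruction to be clear and persuasive:\n\n{draft_instruction}"},
    ],
)

print(response.choices[0].message.content)
```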

We have seen quality improvements with this technique that are almost impossible to imagine, something you can verify by trying out our chatbot in the bottom right corner of this page.

The LLM

This is the only place where any actual AI is occurring. 99% of AI chatbot vendors are using some sort of third-party service here, including us. We are using OpenAI, whose models have consistently been the best GPT models money can buy according to tests, over and over again.

In addition, we allow clients to choose between any model OpenAI supports, implying we'll automatically support models OpenAI haven't even released yet. This might come as a surprise, but there are legitimate reasons not to choose the best LLM here. Some arguments for choosing a downscaled model are cost and speed. If your AI chatbot gets one question every 20 seconds, for instance, using GPT-4 rapidly becomes very expensive. GPT-3.5-turbo is also 3 times faster than GPT-4-turbo.

We allow our clients to choose their own model, and we don't really care which model they choose, since the relationship between OpenAI and our clients is their own business, and in general we don't allow our clients to use our tokens. The exception is our basic plan, where we're using GPT-3.5 and we cap each chatbot at a maximum of 500 requests per month.
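In practice, the model is often nothing more than a parameter in the API call, which is why supporting new OpenAI models the day they're released is mostly free. A minimal sketch, assuming the standard OpenAI chat completions API:

```python
# The model is just a parameter, so switching between a cheaper/faster model
# and a more capable one is a one-line configuration change.
from openai import OpenAI

client = OpenAI()

def ask(question, model="gpt-3.5-turbo"):
    """Ask the chatbot's LLM a question, with the model as plain configuration."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful customer service assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Cheap and fast for high-volume traffic ...
print(ask("Do you ship to Norway?"))
# ... or more capable when quality matters more than cost and speed.
print(ask("Do you ship to Norway?", model="gpt-4-turbo"))
```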

However, since most AI chatbot providers are using OpenAI's APIs, there are few real differences here. The differentiating parts between AI chatbot providers are rarely, if ever, found in which LLM they're using - unless somebody only gives you GPT-3.5, at which point you should consider something else for obvious reasons.

API integrations

This is where you get an AI chatbot that actually does something. Without these parts, it's really just a fancy conversation partner. We've got a Low-Code software development platform called Magic that allows us to build literally almost any integration you can think of. Some examples of integrations we've already created are shown below.

Below you can see how we're tracking Shopify orders from within the AI chatbot itself.

Screenshots: Tracking Shopify orders in your AI chatbot

Such small details significantly increase the usefulness of an AI chatbot, but very few chatbot providers can deliver such things.
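Exactly how our chatbot talks to Shopify is our own plumbing, but the common pattern behind this kind of "AI action" is function calling: you describe a function such as an order lookup to the model, the model decides when to call it, and your backend executes the call and hands the result back. The sketch below shows that pattern with a made-up get_order_status function; it is not our actual Shopify integration.

```python
# Sketch of the function-calling pattern behind "AI actions", such as
# looking up a Shopify order. get_order_status is a made-up placeholder,
# not our actual Shopify integration.
import json
from openai import OpenAI

client = OpenAI()

def get_order_status(order_number: str) -> dict:
    """Placeholder - a real implementation would call the Shopify API here."""
    return {"order_number": order_number, "status": "shipped", "carrier": "DHL"}

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of a customer's order.",
        "parameters": {
            "type": "object",
            "properties": {"order_number": {"type": "string"}},
            "required": ["order_number"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is my order #1001?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# Assuming the model decided to call the function, execute it and
# hand the result back so the model can phrase the final answer.
call = response.choices[0].message.tool_calls[0]
result = get_order_status(**json.loads(call.function.arguments))
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})

final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```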

UX, the bonus point

There is one more thing that's difficult to quantify and measure neutrally, and that is UX. We've already touched on UX, which stands for "User eXperience". Neutrally quantifying UX is almost impossible, but once you start using an AI chatbot, you will rapidly be able to assess its UX yourself. It basically boils down to the following ...

Does it "feel nice" to use?

UX is a science in itself, and there are entire libraries written about the subject. However, if you play with our chatbot, you will rapidly realise we've spent a lot of time on these parts. Some examples are listed below.

  • Conversation starters suggesting questions the user can ask.
  • Follow-up questions, allowing for a fluent conversation without having to write complex queries on phones and other devices, where typing can sometimes be difficult.
  • A backdrop to remove focus from things in the background while interacting with the chatbot.
  • Stopping page scrolling while the chatbot is active, to eliminate "UI noise" as you're using the chatbot.
  • Etc, etc, etc ...

Basically, measuring UX implies trying the AI chatbot, asking it 25 questions, and then asking yourself:

Does it feel nice to interact with, or am I tired of using it?

If you're no longer interested in using the chatbot after 20 questions, chances are your clients won't be either after just a few questions. This is a major warning sign, and a symptom of "bad UX".

At the end of the day, there is one single reason you need an AI chatbot, which is to have it answer users' questions. If users don't want to use it - because its font is too small, it acts weird, or it's difficult to navigate - then users won't use it. And if users won't use it, it doesn't matter what its price is. You can probably buy a broken car for $100. But unless you're a car mechanic, why would you want a broken car? The same analogy holds for an AI chatbot.

In Norway where I'm from we've got an expression, which is as follows.

Sometimes it is very expensive to save $1

And that is also true for AI chatbots.

Unless you only want an AI chatbot so you can brag about having an AI chatbot, the chatbot's quality becomes crucial - and quality will cost you. If you want to see our prices, you can find them below. We start out at $49 per month, but to be honest with you, I would highly recommend upgrading to at least the $298 plan, since that's where things become interesting ...

... and to be honest with you, we're not ashamed of sharing our prices - Since we know for a fact that people still think it's a bargain!

Thomas Hansen

I am the CEO and Founder of AINIRO.IO, Ltd. I am a software developer with more than 25 years of experience. I write about Machine Learning, AI, and how to help organizations adopt said technologies. You can follow me on LinkedIn if you want to read more of what I write.

Published 26. Apr 2024