OpenAI Now Allows You to Fine-Tune GPT-3.5
Every time somebody asked us what the "train" button in our dashboard did, we would tell them to simply ignore it and use vectorizing (embeddings) instead. Fine-tuning your own machine learning model to create an AI chatbot was simply never good enough.
I want to emphasize that 99% of our users still shouldn't bother with fine-tuning. First of all, it's ridiculously expensive compared to embeddings. Secondly, it requires 100x as much training data, and for a Q&A chatbot, embeddings are superior 99 times out of 100. However, as of today, users can actually fine-tune GPT-3.5.
For extreme use cases, this is a very, very, very big deal. Previously the most powerful model you could fine-tune was davinci. GPT-3.5 is easily 10x as powerful, probably 100x.
Use cases
This allows you to create your own machine learning model, hosted at OpenAI, that can answer questions without context data. First of all, this allows you to generate much longer content, in addition to asking much longer questions. However, the most important part is that GPT-3.5, if trained correctly, actually outperforms GPT-4 on certain tasks. This opens up a whole new range of use cases, such as:
- AI expert systems based upon mountains of training data, such as every single lawsuit in your country, allowing them to answer legal questions much more accurately.
- Medical AI systems, allowing you to diagnose patients much more accurately based upon what they tell you about their symptoms.
- Classification of data into categories.
- Etc, etc, etc
These are example use cases where the embeddings API simply won't be accurate enough. For the record, training your own model on data like that illustrated above will probably cost you thousands of dollars - but at least now it is possible.
Basically, when the correct answers involve "fuzzy logic", fine-tuning might be a better alternative for you than embeddings.
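To make this more concrete, here is a minimal sketch of what fine-tuning training data looks like. OpenAI's fine-tuning endpoint for gpt-3.5-turbo expects a JSONL file with one chat-formatted example per line. The legal questions and answers below are made up purely for illustration; a real dataset would need hundreds or thousands of such examples.

```python
import json

# Hypothetical training examples - a real legal dataset would contain
# hundreds or thousands of question/answer pairs.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a legal assistant."},
        {"role": "user", "content": "Can a verbal agreement be binding?"},
        {"role": "assistant", "content": "Often yes, although some types of contracts must be in writing."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a legal assistant."},
        {"role": "user", "content": "What is the statute of limitations for breach of contract?"},
        {"role": "assistant", "content": "It varies by jurisdiction, commonly between 3 and 6 years."},
    ]},
]

# Fine-tuning expects JSONL: one complete JSON object per line.
with open("training.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

print(len(examples), "examples written")
```

Each line is a complete conversation, teaching the model how to respond in your domain without any context data in the prompt.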
The difference between embeddings and fine-tuning
The key difference between using OpenAI's embeddings API and their fine-tuning feature is that with embeddings, you effectively already know the answer to whatever question is being asked: you look up context data in your own database, pass it to OpenAI together with the question, and instruct ChatGPT to answer the question using the specified context.
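The embeddings lookup described above can be sketched in a few lines of pure Python. The three-dimensional vectors and snippet texts below are toy stand-ins; real embeddings come from an embeddings API and have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "database" of context snippets with made-up 3-dimensional embeddings.
snippets = {
    "Our refund policy allows returns within 30 days.": [0.9, 0.1, 0.0],
    "We ship worldwide within 5 business days.":        [0.1, 0.9, 0.1],
}

# Pretend embedding of the question "Can I get a refund?".
question_vector = [0.8, 0.2, 0.1]

# Find the most similar snippet and build the prompt ChatGPT would receive.
best = max(snippets, key=lambda text: cosine_similarity(snippets[text], question_vector))
prompt = f"Answer using only this context: {best}\n\nQuestion: Can I get a refund?"
print(best)
```

Because the answer is constrained to whatever context the lookup returns, the model never has to invent facts on its own.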
When you use fine-tuning, you start out with an existing machine learning model and run hundreds or thousands of backpropagation passes over your training data, modifying the model's weights. Fine-tuning gives you your own private machine learning model, with your own weights, functioning as "your own private AI brain".
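For reference, kicking off such a fine-tuning job is a small API call. The sketch below only builds the request parameters (so it runs without a network connection); the file ID is a placeholder you would get back from uploading your JSONL training file first, and the commented-out call reflects OpenAI's v1 Python SDK as we understand it.

```python
def job_params(training_file_id: str, model: str = "gpt-3.5-turbo") -> dict:
    """Parameters for OpenAI's /v1/fine_tuning/jobs endpoint."""
    return {"training_file": training_file_id, "model": model}

# "file-abc123" is a placeholder - a real ID is returned when you
# upload training.jsonl through the files endpoint.
params = job_params("file-abc123")
print(params)

# With a real API key and an uploaded file, the call would look like:
# from openai import OpenAI
# client = OpenAI(api_key="sk-...")
# job = client.fine_tuning.jobs.create(**params)
# print(job.id, job.status)
```

Once the job finishes, OpenAI hosts the resulting model for you, and you query it by its model name just like the base model.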
A fine-tuned GPT-3.5-based model is like "your own private AI brain"
Another key difference is that when using embeddings, AI hallucinations are practically impossible - if you use OpenAI's embeddings API correctly - because answers are grounded in your own context data. When using fine-tuning, the machine learning model will "fill in the gaps", sometimes resulting in "AI hallucinations". This is also why fine-tuning is better for "fuzzy logic" cases where you don't know the answers in advance, because most of the time the fine-tuned model will fill in the gaps with correct information.
For 99% of our users we will still recommend embeddings, but I need to emphasize that we're the only ChatGPT chatbot vendor we're aware of that actually supports fine-tuning. If you know for a fact that you need fine-tuning, we would love to hear from you. It will be expensive - but starting from today, we can deliver fine-tuned machine learning models based upon GPT-3.5 😎
- Contact us if you're interested in fine-tuning your own machine learning model based upon GPT-3.5
Need a Custom AI Solution?
At AINIRO we specialise in delivering custom AI solutions and AI chatbots. If you want to talk to us about how we can help you implement your next custom AI solution, you can reach out to us below.