GPT4-turbo, 5 times as fast

Today we woke up to a happy email informing us that OpenAI just released GPT4-turbo. It has a context window of 128K tokens, 16 times as large as GPT4's 8K. Input tokens cost a third and output tokens half of GPT4's prices, and it's roughly 4 to 5 times as fast as its predecessor.

Basically, it's yet another jaw-dropping release from OpenAI, illustrating that they're still the champions when it comes to AI and LLMs. It makes you wonder what memes Socra AI will play on Facebook in the future ... 😉

One hour after reading the email, we had of course implemented official support for GPT4-turbo in our AI products, allowing our clients to, for instance, deploy AI Expert Systems capable of analysing entire books in one go thanks to the increased context window. 128K tokens of context is the equivalent of roughly 300 pages from a book. This allows users to literally upload an entire book and have OpenAI summarize it in its entirety.
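As a rough sanity check on the "entire book" claim, here is a back-of-envelope sketch. The words-per-page and tokens-per-word figures are our own assumptions, not OpenAI numbers:

```python
# Back-of-envelope check: does a ~300-page book fit in a 128K-token window?
# Assumptions (ours): ~300 words per printed page, ~1.33 tokens per English word.
WORDS_PER_PAGE = 300
TOKENS_PER_WORD = 1.33

def pages_that_fit(context_tokens: int) -> int:
    """Rough number of book pages that fit in a context window of the given size."""
    tokens_per_page = WORDS_PER_PAGE * TOKENS_PER_WORD
    return int(context_tokens // tokens_per_page)

print(pages_that_fit(128_000))  # ~320 pages for GPT4-turbo
print(pages_that_fit(8_000))    # ~20 pages for the original GPT4
```

Under these assumptions a 128K window holds roughly 320 pages, which is where the "300 pages" figure comes from; the old 8K window held only about 20.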

If you're interested in additional speed comparisons between OpenAI models, you might want to read our comparison between GPT3.5-turbo and GPT4. In general, this release puts GPT4 on par with GPT3.5-turbo, at least in regards to speed, so unless you're looking to save money on API tokens or fine-tune GPT3.5, there is no reason not to use GPT4 anymore.

Speed was the only real advantage GPT3.5 had. Now that advantage is gone, and we're only left with price as an argument. Previously we used to advise clients to use GPT3.5 if they had extreme speed requirements. That argument is gone too, and you're better off in all regards choosing GPT4.

GPT4 is simply an amazing LLM!

Updating our default models

We support GPT4-turbo in our products, but we'll need some extra time to make it the default model when you create a demo chatbot. Hopefully, by the end of the day you'll be able to create a free demo chatbot using GPT4-turbo as its completion model. We pride ourselves on being "fast movers", and we want to ship this ASAP as an "out of the box" feature.

For now, you can play around with GPT4-turbo in our own chatbot. Click the chatbot button in the bottom-right corner of this screen and play with it as much as you wish. As you can see, it is clearly a lot faster than before, probably around 5 times as fast as it used to be.

Consequences for our AI Expert System

Apart from the speed differences, there are probably not that many interesting features in GPT4-turbo for an AI-based website chatbot. For our AI Expert System, however, having access to 128K tokens instead of 8K is clearly a huge advantage. GPT4-turbo has 16 times as much room for context data. This allows you to literally analyse entire books in one go, making it arguably 16 times "smarter".
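To illustrate what the larger window changes in practice, here is a minimal sketch of a whole-book summarization request. It assumes the official `openai` Python client; the model name, prompts, and function name are placeholders of ours, not part of any product:

```python
def build_summary_request(book_text: str, model: str = "gpt-4-turbo") -> dict:
    """Build a chat-completion payload. With a 128K window the whole book can
    travel in a single user message instead of being chunked and summarized
    piece by piece."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a careful book summarizer."},
            {"role": "user", "content": f"Summarize this book:\n\n{book_text}"},
        ],
    }

# With the openai package installed and OPENAI_API_KEY set, one might then call:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_summary_request(book_text))
```

The point is architectural rather than API-specific: with an 8K window you need a map-reduce style pipeline of partial summaries, whereas 128K lets a single request see the full text.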

Our AI Expert Systems of course support GPT4-turbo out of the box as of today.

Conclusion

OpenAI never stops impressing us. Since they went viral in December of 2022, they've created half a dozen amazing releases, with jaw-dropping features every single time. Supporting OpenAI out of the box in our products was obviously a smart choice. Not only do they seem to have the superior product, but they are able to improve upon it at a speed almost impossible to fathom.

Since December 2022, OpenAI has increased the quality of their services by several orders of magnitude, and they just continue delivering, over and over again. If you want to talk to us about how we can help you leverage AI and ChatGPT in your own business, you can contact us here - or create a demo chatbot here. If you're reading this 10 hours after it's published, you'll probably get to leverage GPT4-turbo in your demo.

Thomas Hansen

I am the CEO and Founder of AINIRO.IO, Ltd. I am a software developer with more than 25 years of experience. I write about Machine Learning, AI, and how to help organizations adopt said technologies. You can follow me on LinkedIn if you want to read more of what I write.

Published 7. Nov 2023