Extend your GPTs with C#
A couple of weeks ago OpenAI released GPTs. A GPT is basically a custom ChatGPT chatbot hosted by OpenAI. Its most crucial feature is the ability to extend your GPTs with an API. Magic's most crucial feature is the ability to easily create an API.
Magic and GPTs therefore become a perfect match, allowing you to create your own custom GPT - something we already illustrated in an article we published a couple of days ago, demonstrating how to create a Low-Code CRUD-based GPT in a few seconds.
CRUD of course is kick ass, and important when dealing with your database - However, every now and then you need some custom functionality, allowing you to extend your GPT with a completely unique piece of code, entirely created from scratch using C#. This article demonstrates how to accomplish this.
The code
First you need some C# code. Create a new folder named "cs-demo" inside your "modules" folder, and create a file in it called "slot.cs". Put the following code in your file.
using System;
using magic.node;
using magic.node.extensions;
using magic.signals.contracts;

// Declares a slot named [get-employee-details] that can be invoked from Hyperlambda.
[Slot(Name = "get-employee-details")]
public class Foo : ISlot
{
    public void Signal(ISignaler signaler, Node input)
    {
        // Evaluates the argument and returns the matching employee's details as children of the invocation.
        switch (input.GetEx<string>().ToLower())
        {
            case "thomas":
                input.Add(new Node("title", "cto"));
                input.Add(new Node("phone", "90909090"));
                input.Add(new Node("email", "thomas@ainiro.io"));
                break;
            case "aria":
                input.Add(new Node("title", "coo"));
                input.Add(new Node("phone", "91919191"));
                input.Add(new Node("email", "aria@ainiro.io"));
                break;
            case "tage":
                input.Add(new Node("title", "ceo"));
                input.Add(new Node("phone", "92929292"));
                input.Add(new Node("email", "tage@ainiro.io"));
                break;
        }
    }
}
When you're done, your Hyper IDE should resemble the following, minus the "get-employee-details.get.hl" file, which we will come back to further down in this article.
Explanation of C# code
The above code declares a slot called [get-employee-details]. It takes a single argument, being the name of an employee, and returns data for Tage, Thomas, or Aria - Depending upon the value of your argument. Once compiled, the above code can be invoked from Hyperlambda using the following in your Hyperlambda Playground.
get-employee-details:Thomas
One of Hyperlambda's crucial features is the ability to pass in and return graph objects from your slots. The above C# code assumes the caller passed in a name as the value of the invocation, being the sole parameter to your C# code, and then returns 3 new nodes as children of your invocation. After executing the above code in your Hyperlambda Playground you will end up with the following result.
get-employee-details:Thomas
   title:cto
   phone:90909090
   email:thomas@ainiro.io
Notice that the GetEx<string>() invocation in our C# code ensures that lambda expressions passed in as the argument are evaluated before we use the value.
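To illustrate the point - and this is just a minimal sketch of the concept - you could declare the name in a separate node and pass in an expression instead of a constant value, and GetEx<string>() would evaluate the expression before the switch statement runs.
// The name is declared in a separate node and referenced with an expression.
.name:Thomas
get-employee-details:x:@.name
If the C# code had used Get<string>() instead, it would have been handed the raw expression rather than the string "Thomas".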
Creating our API endpoint
Before we can use the code though, we need to compile it and wrap it into an HTTP endpoint. One of Magic's features is the ability to use C# almost as if it was a scripting language - Implying we can compile the code on the fly, and execute it almost the same way we'd normally execute a scripting language such as JavaScript or Python.
This feature allows us to simply wrap the compilation process inside our Hyperlambda endpoint, compile the code on the fly, execute it, and return the result of our execution. Create a new file called "get-employee-details.get.hl" inside your "cs-demo" folder and put the following content in your file.
.arguments
   name:string
.description:Returns the title, phone and email for the specified employee

// Loads file, compiles it, and loads the resulting assembly.
io.file.load:/modules/cs-demo/slot.cs
system.compile
   references
      .:netstandard
      .:System.Runtime
      .:System.ComponentModel
      .:System.Private.CoreLib
      .:magic.node
      .:magic.node.extensions
      .:magic.signals.contracts
   code:x:@io.file.load
   assembly-name:employees.dll
system.plugin.load:x:@system.compile

// Executes our C# slot.
get-employee-details:x:@.arguments/*/name

// Returns the result of our C# code.
return:x:@get-employee-details/*
In the video below we're applying some intelligent caching to avoid recompiling the code on every single request to our endpoint, but to keep the code easily understood, I removed these parts from the above code. However, with the above code we're now fundamentally done, and we can already connect our GPT to our API endpoint. Use the Manager/Endpoints component in Magic to execute your endpoint, as illustrated below.
Make sure you filter on "employee" to find the endpoint, and then add "thomas" as your name parameter.
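Assuming the C# code above, invoking the endpoint with "thomas" as the name argument should return a JSON payload resembling the following.
{
  "title": "cto",
  "phone": "90909090",
  "email": "thomas@ainiro.io"
}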
Explanation of Hyperlambda code
The above Hyperlambda code loads your C# file, then compiles the code, producing a raw byte[] that we dynamically load as an assembly, injecting the assembly into our AppDomain. This implies you can change your C# code without manually triggering any recompilation, and the new code will be dynamically compiled "on the fly", reflecting your changes immediately.
In a real world application you will want to avoid recompiling the C# code on every single invocation, since compilation is an expensive process. I walk you through some tricks related to this in the video further down the page - But to keep the code clear and easily understood, I avoided this in the above example. One possible approach is sketched below.
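The sketch below - which is an assumption on my part, and not necessarily the exact caching applied in the video - uses the [vocabulary] slot to check whether [get-employee-details] has already been loaded, and only compiles and loads the plugin if it hasn't. It would replace the compilation part of the endpoint file above.
// Sketch: only compile and load the plugin if the slot doesn't already exist.
vocabulary:get-employee-details
if
   not-exists:x:@vocabulary/*
   .lambda

      // Slot is not loaded yet, so we compile the C# file and load the assembly.
      io.file.load:/modules/cs-demo/slot.cs
      system.compile
         references
            .:netstandard
            .:System.Runtime
            .:System.ComponentModel
            .:System.Private.CoreLib
            .:magic.node
            .:magic.node.extensions
            .:magic.signals.contracts
         code:x:@io.file.load
         assembly-name:employees.dll
      system.plugin.load:x:@system.compile

// The slot now exists either way, so we can invoke it directly.
get-employee-details:x:@.arguments/*/name
return:x:@get-employee-details/*
The trade-off is that changes to "slot.cs" will no longer be picked up automatically, since the already loaded slot short-circuits the compilation.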
Connect your endpoint to a GPT
At this point all we need to do is to connect our API endpoint to a custom GPT. Create a new GPT and make sure you filter your endpoints on "openapi" in Magic's endpoints component. This should give you one endpoint resembling the following.
When you have done the above you can either copy the resulting JSON or copy the URL to your endpoint with the copy button in your endpoints component. At this point all we need to do is to give this OpenAPI specification to OpenAI as an "Action" for our GPT, and we're done.
Testing your GPT
At this point you're fundamentally done, and you can already start testing your GPT. You can ask it questions such as:
- What's Tage's phone number?
- Thomas is an employee, look up his information using my get employee action.
- What is Aria's email address?
- Etc ...
Below is an example of consuming our GPT.
As you can see above, our GPT is now correctly passing in "Tage" to our API endpoint, which in turn returns Tage's email address, and the GPT uses the email address in its conversation with you. At this point you could for instance create a "Send email" endpoint where ChatGPT helps you compose an email and sends it to Tage - But that's an exercise for another day. For now you can play with the custom GPT I created below.
Watch me walk through the entire process in the following video.
Have a Custom AI Solution
At AINIRO we specialise in delivering custom AI solutions and AI chatbots. If you want to talk to us about how we can help you implement your next custom AI solution, you can reach out to us below.