How to create a NetSuite AI Agent

NetSuite is a highly complex and mature ERP system. Its purpose is to help organisations manage their financial records, and optimise their accounting processes and workflows. NetSuite does not ship with an AI agent, which means that if you want to leverage AI to automate your NetSuite business process workflows, you're basically on your own.
We've got several clients who want to connect their AI agents to their NetSuite accounts, so I wanted to write up how we approach this. Hopefully it will be beneficial to others out there looking to do similar things.
The core NetSuite Plugin
We've got a core NetSuite plugin for Magic. This takes care of authentication, creates wrapper methods to access NetSuite, allows us to use SuiteQL to extract data from NetSuite, etc. It resembles the following when installed.
I'm not going to spend too much time explaining how to configure it, since that's adequately covered in the README file for the plugin. However, you'll have to use an "RsaOaepKeyWrap" type of key pair when creating the key pair used to create access tokens. Below is a shell script that will create this for you.
# Create a 4096-bit RSA private key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out private_key.pem
# Create a certificate signing request from the private key.
openssl req -new -key private_key.pem -out request.csr
# Self-sign the request to create a public certificate valid for 10 years.
openssl x509 -req -days 3650 -in request.csr -signkey private_key.pem -out public_cert.pem
The plugin wraps all relevant APIs from NetSuite to interact with the system, and once configured the plugin exposes a whole range of helper functions as listed below.
- netsuite-create-record - Create a new object of the specified [type] with the specified [object] values.
- netsuite-get-record - Returns the specified [id] record of the specified [type].
- netsuite-list-records - Lists objects of the specified [type], optionally applying filtering and paging using [q], [limit] and [offset]. Notice, this workflow only returns the IDs of records, not the actual records themselves.
- netsuite-get-records - Returns the actual records for the specified [type] to caller, including every field on the records. This endpoint allows for paging using the [limit] and [from] arguments.
- netsuite-update-record - Updates the specified [id] record with the specified [object] values.
- netsuite-delete-record - Deletes the specified [id] object of the specified [type].
- netsuite-query - Executes the specified [q] SuiteQL query and returns the result to caller.
- netsuite-openapi-spec - Returns the OpenAPI specification for the specified [type] to caller.
- netsuite-schema-spec - Returns the JSON schema specification for the specified [type] to caller.
The most important of the above are probably the [netsuite-get-records] and [netsuite-query] functions. With these functions you can retrieve objects from NetSuite.
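To make this more concrete, below is a minimal sketch of invoking the [netsuite-query] workflow from Hyperlambda. Notice, the file path mirrors the one used for [netsuite-get-records] later in this article, and the SuiteQL column names are assumptions; verify both against your own installation.

```
// Hypothetical invocation of the [netsuite-query] workflow with a SuiteQL query.
execute-file:/modules/netsuite/workflows/workflows/netsuite-query.hl
   q:select id, fullname, price from item where fullname like 'Widget%'
```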
Custom AI functions
When working with an LLM to have it execute functions on your behalf, there are a lot of considerations you'll need to tackle. First of all, you don't want unauthorised people to become capable of accessing objects they should not be able to access. In addition, you want functions that are as concise as possible, and that don't confuse the LLM in any way.
The way we do this is by "working backwards" from the prompt. Typically, when creating software for clients, we use user stories. A user story might be something such as the following;
"As a user of the system I want to be able to find the price difference between product A and product B"
However, user stories are, for all practical purposes, prompts. The above user story, for instance, can be translated into the following prompt;
"What is the price difference between product A and product B"
Since it's easier to explain to the average client what a prompt is, we tend to circumvent the whole idea of "user stories", and simply ask them for 10 prompts they want the AI agent to be able to handle. Examples of prompts could be;
- "Show me all open invoices from x"
- "How much revenue have we made on y"
- "What's the price difference between product A and product B"
If we're to solve the third prompt from above, our thought process would be as follows;
- What type of objects do we need to solve the above? For this prompt it would be "item" entities, since these typically contain products.
- What fields from these objects do we need? For the above price comparison prompt we'll obviously need the price field.
- How do we filter objects to get to the correct items as requested by the end user? Here it would typically be the "fullname" column, since the user provided two names for us to compare.
Once we know which types and fields we'll be needing, we might want to create a single specific Hyperlambda function that extracts these products, allowing the LLM to work with them. If we look at the API wrappers, we can see there's one function, or Hyperlambda workflow, that fits: [netsuite-get-records]. This can be used as follows.
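A minimal sketch of such an invocation might resemble the following, passing in the same [type], [column] and [value] arguments the custom function below forwards. The product name is of course just a placeholder.

```
// Hypothetical invocation of the [netsuite-get-records] workflow.
execute-file:/modules/netsuite/workflows/workflows/netsuite-get-records.hl
   type:item
   column:fullname
   value:Product A
```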
Executing something such as the above will result in the following.
Now that we know which Hyperlambda base workflow to use ([netsuite-get-records]), which type of records to extract ("item" entities), what fields we'll need from this function (price), and how to filter (fullname), we can go ahead and create a specific function that returns products by name.
/*
 * Returns the item / product from NetSuite with the specified name.
 */
.arguments

   // Mandatory name of product you want to search for.
   name:string

.type:public

// Forwarding heavy lifting to NetSuite workflow integration.
execute-file:/modules/netsuite/workflows/workflows/netsuite-get-records.hl
   type:item
   column:fullname
   value:x:@.arguments/*/name

// Returning result of above invocation.
add:x:../*/return-nodes
   get-nodes:x:@execute-file/*/0/*/fullname
   get-nodes:x:@execute-file/*/0/*/description
   get-nodes:x:@execute-file/*/0/*/price
return-nodes
We want to add the above code to the customer-specific code, and not extend the plugin itself. The reason for this is that we might need to update the NetSuite plugin in the future, and if we've changed the plugin's code, our changes will be lost once we update it. The way we do this is by creating a client-specific module, such as illustrated below.
If you look carefully at the above code, you will realise it's just a thin wrapper around the original [netsuite-get-records] workflow. By placing it inside a custom module, and specifically inside a folder named "workflows" within that module, we simplify the process of adding the AI function to the machine learning type later.
In addition, we reduce the cognitive complexity required for the LLM to consume our function, since the function is very specific and does only one thing (high degree of "cohesion").
And finally, we can keep our functions simple doing only one thing, allowing us to easily add authorisation requirements according to entity types, making sure users only have access to the object types they should have access to.
Once executed the function will ask for the following parameters.
Its single parameter again is just a product name, and the function will return a single object only. At this point it might be useful to remember our original "prompt" or "user story", which was as follows;
"What is the price difference between product A and product B"
We now have the ability to find products according to names, and it will return the prices of our products - And we've got a Hyperlambda workflow that we can associate with our Machine Learning type. At this point we can go to the machine learning component and add a reference to the function as follows.
Notice - The AI function can be added to the machine learning type either in its system instruction, or as RAG training data. Which method you choose depends upon how accessible you want the function to be. For a rarely used function adding it as RAG training data is preferred, but for a frequently used function such as the above, typically we want the LLM to always have the capability of executing the function, so for this example we installed the AI function directly into the system message.
To understand the difference, realise that the system message is always sent to the LLM. So even though it might be tempting to simply add all functions to the system message, this will use a lot of your available context window for functions that are rarely used, and possibly overflow the context window if you add hundreds of functions to the system message itself. Some kind of conscious process must therefore be followed when deciding which functions to add to the system message and which functions to use as RAG training snippets.
Since the platform takes care of automatically prompt engineering the system message based upon the file level comment, and parameter comments you supply, you rarely need to further prompt engineer the function reference after you've added it to the machine learning type. Below is an example of how adding the above AI function to the system message will change it.
Notice - At this point it's important to have already chosen "AI Agent" as your flavor. If you can't find this flavor, install the "openai" plugin, since the flavor is referenced as a template system message in this plugin.
Since the system message is now changed to instruct the LLM about how to execute our function, we're done and we can now start using our AI agent having solved one of our prompts / user stories.
Reusing AI functions
Since we now have the ability to have our LLM execute functions returning products, we have a function the LLM can use for anything related to products, allowing us to ask questions such as;
- "When did we add product xyz to our portfolio?" Assuming you return the create date of products.
- "How many items do we have left of product xyz?" Assuming the stock quantity is returned for each product.
- Etc ...
And these prompts are just "given for free" since the LLM knows how to retrieve product information, and can thereby extrapolate how to answer the above questions using our existing function.
Notice - For functions like this, it might help to add to the description of the AI function what fields it will return, and what they mean. By default, only input parameters are documented in the AI function declaration. By adding return fields to its system instruction, we tell the LLM specifically what type of information each of our functions will return. This makes the LLM much more capable of determining which function to use for whatever prompt the user supplies.
However, as you move onwards to the next prompt on your list, it might be beneficial to look at what AI functions you already created. If you can solve all prompts with 5 AI functions, that's "better" than if you need 20 functions. So if some other prompt requires another field from your products, instead of creating one AI function for each prompt, it might be better to simply add that field to your existing AI function.
However, be careful to avoid returning irrelevant fields, since when creating an AI agent, the recurring problem is always your context size. If one function returns 100+ fields from some entity for instance, one single function invocation towards that AI function might end up consuming the whole context, preventing the AI from "remembering" the result of previous function invocations. So sometimes it might be better to have 5 AI functions wrapping a single entity type returning different fields, than 1 AI function returning all fields from that same type - Even though obviously most of the time 1 function would be preferred.
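To illustrate the point, a second function wrapping the same "item" type but returning stock-related fields instead might look like the sketch below. Notice, the [quantityavailable] field name is an assumption for illustration purposes; check the schema for your own account before relying on it.

```
/*
 * Returns stock information for the NetSuite item with the specified name.
 */
.arguments

   // Mandatory name of product you want to search for.
   name:string

.type:public

// Forwarding heavy lifting to NetSuite workflow integration.
execute-file:/modules/netsuite/workflows/workflows/netsuite-get-records.hl
   type:item
   column:fullname
   value:x:@.arguments/*/name

// Returning only stock-related fields to caller.
add:x:../*/return-nodes
   get-nodes:x:@execute-file/*/0/*/fullname
   get-nodes:x:@execute-file/*/0/*/quantityavailable
return-nodes
```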
Incrementally Building AI Agents
When you've got a collection of AI functions, you can instruct the LLM how to combine these into more complex workflows. This is done by further prompt engineering the LLM, either in its system message or as RAG training data - and it can be as simple as adding a RAG training data snippet to your machine learning type, such as the following, with a handful of rules for how to solve the problem.
## Send marketing email
If the user wants you to send a marketing email, then you will need to know
what contact to send the email to. Unless you already know the email address,
search for the contact using the _"get contact function"_. Once you know the
contact, find the customer the contact belongs to, which will contain a URL.
Scrape this URL and create a personalised marketing email based upon the
customer's website to convince the client about why he needs product 'xyz'.
When you've constructed a marketing email, display it to the user, and if
the user accepts the email, you can send the email to the contact.
The above will allow you to prompt the LLM such as follows;
Send a marketing email to John Doe
This will result in a search for a contact named "John Doe", after which the LLM will find the customer ID and look up the customer, which returns a URL for the customer's website. The LLM will then scrape this website, and construct a personalised marketing email trying to sell the customer product 'xyz'.
Once the LLM has constructed a marketing email, it will show the email and ask the user if it should send it. If the user answers yes, the LLM will send the email.
The above probably saves the users of the system 10 to 20 minutes of work per customer they're sending a marketing email to, and it was incrementally built under the assumption that the following AI functions exist.
- Get contact (returning customer ID)
- Get customer (returning URL)
- Scrape website
- Send email
The last two functions are integrated into the platform and easily consumed by simply adding the existing AI functions to your type - implying that with some simple prompt engineering, we were able to instruct the LLM how to perform a fairly complex task, saving possibly hours of manual labour each day.
Studying NetSuite
To understand which types of objects you need to retrieve, the Magic NetSuite plugin contains a couple of helper functions, specifically the following;
- [netsuite-openapi-spec]
- [netsuite-schema-spec]
These functions take a type and return a description of all fields on that type. Below is an example of invoking the schema function.
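A hypothetical invocation might resemble the following, again assuming the same folder structure as the [netsuite-get-records] workflow used earlier in this article.

```
// Hypothetical invocation of the [netsuite-schema-spec] workflow.
execute-file:/modules/netsuite/workflows/workflows/netsuite-schema-spec.hl
   type:item
```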
This will help you understand the different fields on individual entities, and will even cover 100% custom fields added to NetSuite that extend its core. In addition, to understand how NetSuite works and what types of records you can retrieve, it's also beneficial to read NetSuite's documentation.
The documentation in particular will give you information about which record types are relevant for whatever information you're trying to retrieve. However, NetSuite is a complex system, and can be configured eleven ways to Sunday - so the process unfortunately implies some experimenting.
Security
To ensure no malicious entity can access your NetSuite data, it's imperative to add authorisation requirements to your machine learning type. The way we do this typically is to create a custom role the user must belong to in order to access the LLM itself. This is done as follows on the type.
The "netsuite-agent-employee" is a role the user must belong to in order to be able to use the type. Only if you've got multiple different roles accessing the agent do you need to add authorisation to your specific Hyperlambda files. Typically this is not needed for most solutions, but is rather solved by creating multiple different types with different authorisation requirements.
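If you do need per-file authorisation, Hyperlambda's [auth.ticket.verify] slot can be added at the top of a workflow file to restrict it to specific roles. A minimal sketch, assuming the role name from above;

```
// Aborts execution unless the user belongs to the specified role.
auth.ticket.verify:netsuite-agent-employee
```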
Wrapping up
Unfortunately we can't provide a live demonstration of our NetSuite integration, but you can see in the screenshot above how it would work for you. If you can describe your requirements using natural language, and the information can be retrieved, we can typically create AI agents wrapping NetSuite into almost anything you need.
In this article we wrapped NetSuite's APIs into an AI agent, allowing the AI agent to retrieve product information from your NetSuite account. But really, anything goes, and once you've created multiple AI functions such as illustrated above, you've also solved dozens of additional problems you couldn't even anticipate as you wired together the integration - Assuming you create reusable AI functions, returning base entities intelligently from NetSuite.
The rest of the job from this point and onwards, is simply creating the correct wrapper functions, retrieving the correct base information, for then to allow the LLM to extrapolate its prompts into AI function invocations.
If you want to have us do this for you, we're of course more than willing to onboard you and help you out creating an AI agent interacting with your NetSuite account.
Have a Custom AI Solution
At AINIRO we specialise in delivering custom AI solutions and AI chatbots with AI agent features. If you want to talk to us about how we can help you implement your next custom AI solution, you can reach out to us below.