Build AI agents that work
AI agents are here to stay. But they are only as good as the actions they can take, and to take action, they need tools. For agents that interact with the outside world, no tool beats an API connector. However, traditional API connectors don't work well with LLMs.
Agents need a new connectivity
Over the last two years, we have learned that the design and implementation of a tool, an API connector, has a critical impact on the agent's success. While we all want AI smart enough to figure out the world's APIs, the 2024 reality is that we need to make APIs, or API connectors, work for AI.
The design and implementation of API connectors profoundly impact agents in these areas:
- the viability of handling a task at all
- the agent's chances of completing a task successfully
- the agent's speed
- the cost of running the agent
- the agent's reliability
Let's take a deeper look.
Viability of agent-task solution
Agents require a new approach: connectors designed for specific use cases that won't clog the model's context window. Traditionally, API connectors returned all the data related to a resource (over-fetching). That strategy breaks down in an environment constrained by the context window size.
To illustrate, let's say your agent is tasked with collecting every recipient of the emails a particular sender has sent you. If your connector returns every email detail, including the body and base64-encoded attachments, and passes it all to the LLM, you will soon run out of context window.
To achieve tasks in real-world scenarios, your connectors need to return the least amount of data necessary for the agent to complete the task.
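The email example above can be sketched in a few lines. The connector shape here is hypothetical (not a real email API client); the point is the projection step that keeps only the fields the task needs out of the context window:

```python
# What an over-fetching connector might return: full email objects,
# including large bodies and base64-encoded attachments.
FULL_EMAILS = [
    {
        "id": "1",
        "subject": "Q3 report",
        "to": ["alice@example.com", "bob@example.com"],
        "body": "...thousands of tokens of text...",
        "attachments": ["<large base64 blob>"],
    },
    {
        "id": "2",
        "subject": "Re: Q3 report",
        "to": ["alice@example.com"],
        "body": "...more text...",
        "attachments": [],
    },
]

def list_recipients(emails):
    """Project down to the least data the task needs: unique recipient addresses."""
    seen, result = set(), []
    for email in emails:
        for addr in email["to"]:
            if addr not in seen:
                seen.add(addr)
                result.append(addr)
    return result

print(list_recipients(FULL_EMAILS))  # a handful of tokens instead of megabytes
```

The agent only ever sees the short list of addresses; the bodies and attachments never enter the prompt.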
Agent success rate
Most LLMs use some sort of function-calling mechanism to evaluate the available tools and decide which one to use. They then look at the parameters of these function (tool) calls and map them to and from user inputs.
The more specific the connector interface is to the agent task, the higher the chance the function-calling algorithm will choose it and use it correctly.
For example, if the agent is tasked with appending a row to a table, a connector "append a row" will have a higher chance of succeeding than a generic do-it-all tool that inserts a cell anywhere in the table. The same goes for the actual parameters of the calls—cryptic parameter names without any description will not help the LLM use the tool correctly.
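A task-specific tool definition in the common JSON-schema style used by function-calling APIs might look like the sketch below. The tool name and parameter descriptions are illustrative assumptions, but they show the difference a specific name and documented parameters make to the model's tool choice:

```python
# A task-specific tool: the name states the exact action, and every
# parameter carries a description the LLM can match against user input.
append_row_tool = {
    "type": "function",
    "function": {
        "name": "append_row",
        "description": "Append a single row to the end of the given table.",
        "parameters": {
            "type": "object",
            "properties": {
                "table_id": {
                    "type": "string",
                    "description": "Identifier of the target table.",
                },
                "values": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Cell values for the new row, left to right.",
                },
            },
            "required": ["table_id", "values"],
        },
    },
}

# Contrast with a generic do-it-all tool: a vague name and cryptic,
# undescribed parameters give the model little to map user intent onto.
set_cell_tool = {
    "type": "function",
    "function": {
        "name": "set_cell",
        "parameters": {
            "type": "object",
            "properties": {
                "r": {"type": "integer"},
                "c": {"type": "integer"},
                "v": {"type": "string"},
            },
            "required": ["r", "c", "v"],
        },
    },
}
```

Given the instruction "add this row to the sales table", the first schema is a near-verbatim match; the second forces the model to guess row indices and parameter meanings.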
Speed and costs
Most APIs are not designed around use cases but organized around resources. While this approach makes an API effective at serving many use cases, it often means the agent needs to make multiple API calls to complete its task.
For example, let's say the agent is tasked with retrieving articles together with information about their authors, and the underlying API exposes two resources: /articles/:id and /authors/:id.
Having two tools, "retrieve articles" and "retrieve authors," gives your agent greater flexibility. However, providing one tool to "retrieve articles with authors" has a higher chance of succeeding (see above).
Bundling the API calls within the tool also reduces the round trip to LLM, increasing the speed, minimizing the room for error, and reducing the costs related to LLM runtime.
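A bundled tool for the articles-and-authors example can be sketched as follows. The endpoints are the hypothetical /articles/:id and /authors/:id from the text, and `fetch_json` is a stub standing in for a real HTTP client:

```python
def fetch_json(path):
    """Stub standing in for an HTTP GET; a real connector would call the API."""
    fake_api = {
        "/articles/42": {"id": 42, "title": "Agents that work", "author_id": 7},
        "/authors/7": {"id": 7, "name": "Ada Lovelace"},
    }
    return fake_api[path]

def retrieve_article_with_author(article_id):
    """One tool call for the agent; two API calls bundled under the hood."""
    article = fetch_json(f"/articles/{article_id}")
    author = fetch_json(f"/authors/{article['author_id']}")
    return {**article, "author": author}

result = retrieve_article_with_author(42)
```

The model makes a single function call and gets the joined result back in one round trip, instead of calling "retrieve article", reading the response, extracting `author_id`, and calling "retrieve author" in a second LLM turn.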
World's first API integration platform for AI
As you can see, making agent connectors smarter, designing them for specific use cases, bundling API calls, and returning only the data the task needs benefit agents in many ways.
However, to have task-specific connectors, we need many of them. How can this be achieved if connectors are expensive to build and maintain? One of the premises of integration platforms was the reuse of connectors built by hand, though this could only be done for generic one-size-fits-all connectors.
Thankfully, we can stop manually building the connectors for AI!
Superface Hub is the world's first API integration platform built for AI agents. It offers the best way to generate tools suitable for AI agents, their API documentation, and a description of agent use cases.
Superface comes ready to use, so you don't have to worry about the OAuth flow either, and your agent can securely authenticate users right away!
Support for many LLMs
You can use your connectors from any OpenAI-, Mistral-, Anthropic-, or LangChain-powered AI agent.
At Superface, we know that different LLMs are suitable for different jobs. That is why Superface enables you to have all your connectors and tools in a single place and use different LLMs to benefit from the connectivity that makes your agent work!
Get started today
To learn more about Superface Hub, look at the Hub API documentation, or jump right into connecting your agents and GPTs to APIs: sign up for a free Superface account.
PS: Let us know if you are missing connectivity for your LLM of choice at support@superface.ai – we will be happy to help you with it!