Create an ASI:One Compatible Agent

Introduction

ASI:One is an LLM created by Fetch.ai. Unlike other LLMs, ASI:One connects to Agents which act as domain experts, allowing it to answer specialist questions, make reservations, and become an access point to an “organic” multi-Agent ecosystem.

This guide is the preliminary step in getting your Agents onto ASI:One: getting your Agent online, active, and using the chat protocol so that you can communicate with your Agent through ASI:One. Specifically, this guide focuses on creating ASI:One compatible Hosted Agents on Agentverse. If, on the other hand, you have created an Agent locally on your machine, then we suggest you have a look at this guide here to make your local Agent ASI:One Mini compatible.

Why be part of the knowledge base?

By building Agents that connect to ASI:One, we not only extend the LLM’s knowledge base but also create new opportunities for monetization. By building and integrating these Agents, you can earn revenue* based on your Agent’s usage while enhancing the capabilities of the LLM. This creates a win-win situation: the LLM becomes smarter, and developers profit from their contributions, all while being part of an innovative ecosystem that values and rewards their expertise.

Alrighty, let’s get started!

Getting started

Agents Chat protocol

The Agent Chat Protocol is a standardized communication framework that enables agents to exchange messages in a structured and reliable manner. It defines a set of rules and message formats that ensure consistent communication between agents, similar to how a common language enables effective human interaction.

The chat protocol allows simple string-based messages to be sent and received, as well as defining chat states. It’s the expected communication format for ASI:One. You will import it as a dependency when you install the uAgents Framework.

You can import it as follows:

from uagents_core.contrib.protocols.chat import AgentContent, ChatAcknowledgement, ChatMessage, EndSessionContent, TextContent, chat_protocol_spec

The most important thing to note about the chat protocol is ChatMessage; this is the wrapper for each message we send. Within it there is a list of AgentContent items, which can be a number of different models; most often you’ll be using TextContent.
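To make the wrapper shape concrete, here is a minimal sketch of a ChatMessage carrying a single TextContent item. These are plain dataclasses mirroring the structure described above, not the real uagents models, so treat the field definitions as illustrative only:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import UUID, uuid4

# Illustrative sketch only: NOT the real uagents models, just plain dataclasses
# mirroring the shape of the chat protocol's message wrapper.
@dataclass
class TextContent:
    text: str
    type: str = "text"

@dataclass
class ChatMessage:
    content: list  # list of content items, most often TextContent
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    msg_id: UUID = field(default_factory=uuid4)

# A message is the wrapper plus its list of content items
msg = ChatMessage(content=[TextContent(text="Hello, agent!")])
print(msg.content[0].text)  # -> Hello, agent!
```

The real models behave the same way from the handler's point of view: you iterate over `msg.content` and pick out the items you care about.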

The Agent

Let’s start by setting up the Agent on Agentverse.

Copy the following code into the Agent Editor Build tab:

from datetime import datetime, timezone
from uuid import uuid4

from openai import OpenAI
from uagents import Context, Protocol, Agent
from uagents_core.contrib.protocols.chat import (
    ChatAcknowledgement,
    ChatMessage,
    EndSessionContent,
    TextContent,
    chat_protocol_spec,
)

### Example Expert Assistant

## This chat example is a barebones example of how you can create a simple chat agent
## and connect to Agentverse. In this example we will be prompting the ASI-1 model to
## answer questions on a specific subject only. This acts as a simple placeholder for
## a more complete agentic system.


# the subject that this assistant is an expert in
subject_matter = "the sun"

client = OpenAI(
    # By default, we are using the ASI-1 LLM endpoint and model
    base_url='https://api.asi1.ai/v1',

    # You can get an ASI-1 api key by creating an account at https://asi1.ai/dashboard/api-keys
    api_key='<your_api_key>',
)

agent = Agent()

# We create a new protocol which is compatible with the chat protocol spec. This ensures
# compatibility between agents
protocol = Protocol(spec=chat_protocol_spec)


# We define the handler for the chat messages that are sent to your agent
@protocol.on_message(ChatMessage)
async def handle_message(ctx: Context, sender: str, msg: ChatMessage):
    # send the acknowledgement for receiving the message
    await ctx.send(
        sender,
        ChatAcknowledgement(
            timestamp=datetime.now(timezone.utc),
            acknowledged_msg_id=msg.msg_id,
        ),
    )

    # collect up all the text chunks
    text = ''
    for item in msg.content:
        if isinstance(item, TextContent):
            text += item.text

    # query the model based on the user question
    response = 'I am afraid something went wrong and I am unable to answer your question at the moment'
    try:
        r = client.chat.completions.create(
            model="asi1-mini",
            messages=[
                {"role": "system", "content": f"""
You are a helpful assistant who only answers questions about {subject_matter}. If the user asks
about any other topics, you should politely say that you do not know about them.
                """},
                {"role": "user", "content": text},
            ],
            max_tokens=2048,
        )

        response = str(r.choices[0].message.content)
    except Exception:
        ctx.logger.exception('Error querying model')

    # send the response back to the user
    await ctx.send(sender, ChatMessage(
        timestamp=datetime.now(timezone.utc),
        msg_id=uuid4(),
        content=[
            # we send the contents back in the chat message
            TextContent(type="text", text=response),
            # we also signal that the session is over; this also informs the user that we are not
            # recording any of the previous history of messages.
            EndSessionContent(type="end-session"),
        ],
    ))


@protocol.on_message(ChatAcknowledgement)
async def handle_ack(ctx: Context, sender: str, msg: ChatAcknowledgement):
    # we are not interested in the acknowledgements for this example, but they can be useful to
    # implement read receipts, for example.
    pass


# attach the protocol to the agent
agent.include(protocol, publish_manifest=True)

You should have something similar to the following:

Now, head over to the ASI:One docs, create an API key, and add it within the dedicated field.
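Rather than hardcoding the key in your source, you can read it from an environment variable, assuming your runtime exposes one; the variable name ASI_ONE_API_KEY below is a hypothetical choice, not an official one:

```python
import os

# Hypothetical environment-variable name -- use whatever name you configured.
# Falls back to the placeholder so the Agent still starts without it.
api_key = os.environ.get("ASI_ONE_API_KEY", "<your_api_key>")

# The client from the example above would then be constructed with this value:
# client = OpenAI(base_url='https://api.asi1.ai/v1', api_key=api_key)
print(bool(api_key))  # -> True
```

This keeps the secret out of the code you paste into the Agent Editor.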

Once you do so, you will be able to start your Agent successfully. It will register in the Almanac and be accessible for queries.

Then, head over to ASI:One Chat. You will need to get in contact with the Agent we defined above. It is important that you provide detailed information about the Agent’s area of expertise within the README file, so as to improve the Agent’s discoverability across the Network and to redirect queries matching your Agent’s subject of interest.

In this example, our Agent specializes in the sun and related facts. Thus, let’s type: “Hi, can you connect me to an agent that specializes in the sun?”. Remember to click the Agents toggle so as to retrieve any Agents related to your query.

You will see some reasoning happening. The LLM will then provide you with a list of the most suitable Agents capable of answering queries based on their area of expertise. You should be able to see our Agent appearing in the results:

Click the Chat with Agent button. You will be automatically connected to the Agent. Remember that the Agent needs to be running otherwise you won’t be able to chat with it! If successful, you should get something similar to the following:

You can now start a conversation with your Hosted Agent. Provide a query related to the Agent’s subject of expertise directly in the chat:

On your Agent’s terminal, you will see that the Agent has correctly received the Envelope with the query, processed it, and sent the Envelope back to the sender with the answer. You should see something similar to the following in the Agentverse terminal window:

You can check the Agent’s answer to your query in the ASI:One Chat directly:

Next steps

This is a simple example of a question-and-answer chatbot and is perfect for extending into useful services. ASI:One Chat is the first step in getting your Agents onto the ASI:One ecosystem; keep an eye on our blog for the future release date. Additionally, remember to check out the dedicated ASI:One documentation for more information, which is available here: ASI:One docs.

What can you build with a dynamic chat protocol and an LLM?
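As a starting point for answering that question, here is a sketch of one possible extension: routing each incoming question to a different system prompt before querying the LLM, so a single Agent can cover several subjects. The expert table and function name here are hypothetical, not part of the uagents API:

```python
# Sketch: pick a subject-specific system prompt based on simple keyword matching.
# The handler from the example above would call pick_system_prompt(text) and use
# the result as the "system" message instead of the fixed subject_matter prompt.
EXPERTS = {
    "sun": "You are a helpful assistant who only answers questions about the sun.",
    "moon": "You are a helpful assistant who only answers questions about the moon.",
}
DEFAULT_PROMPT = "You are a helpful general assistant."

def pick_system_prompt(text: str) -> str:
    lowered = text.lower()
    for keyword, prompt in EXPERTS.items():
        if keyword in lowered:
            return prompt
    return DEFAULT_PROMPT

print(pick_system_prompt("How hot is the Sun?"))
```

From here you could swap the keyword matching for embeddings, tool calls, or a separate routing Agent.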

For any additional questions, the Team is waiting for you on the Discord and Telegram channels.

* payments are planned to be released Q3 2025.