Agents as a MicroService | Beginner Guide to llama-agents
Hey folks, welcome to a new week of July. As we move progressively towards AGI, AI agents look like the building blocks that could eventually lead us to superintelligence. Recently, a few open-source organizations have begun promoting agentic frameworks, and this week we will walk through some of them.
AI agents will become the primary way we interact with computers — Satya Nadella
What is an AI Agent?
AI Agents are autonomous entities that interact with their environment to achieve specific goals. They operate independently, making decisions based on the data they process and the algorithms they use.
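To make the idea concrete, here is a minimal, hypothetical sketch of the perceive-decide-act loop that most agents implement. The environment, policy, and goal_reached objects are illustrative placeholders, not part of any library:
def run_agent(environment, policy, goal_reached):
    # Hypothetical agent loop: observe, check the goal, decide, act, repeat.
    while True:
        observation = environment.observe()  # perceive the current state
        if goal_reached(observation):
            break
        action = policy(observation)  # decide, e.g. via an LLM call
        environment.act(action)  # act on the environment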
Llama-Agents Framework
Llama-Agents is an async-first framework designed for building, iterating, and productionizing multi-agent systems. It enables the creation of agents that communicate with each other, execute distributed tasks, and incorporate human feedback.
Key Components:
1. Message Queue: Manages communication between agents and the control plane.
2. Control Plane: Oversees task management and service orchestration.
3. Orchestrator: Decides task allocation to agents.
4. Services: Execute tasks and provide results.
Come, let’s get our hands dirty and learn these concepts by building with llama-agents.
Installation and Setup
To install llama-agents:
pip install llama-agents \
llama-index-agent-openai \
llama-index-embeddings-openai
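The examples below call OpenAI models, so your OpenAI API key must be available in the environment first. One simple way to set it from Python (the key value is a placeholder):
import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with your own key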
Example Use Case
A simple setup with two agents:
1. Agent 1: Provides a secret fact.
2. Agent 2: Offers random facts.
from llama_agents import (
    AgentService,
    AgentOrchestrator,
    ControlPlaneServer,
    SimpleMessageQueue,
)
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# A simple tool that agent 1 can call
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

# Defining the agents
tool = FunctionTool.from_defaults(fn=get_the_secret_fact)
agent1 = ReActAgent.from_tools([tool], llm=OpenAI())  # has access to the secret-fact tool
agent2 = ReActAgent.from_tools([], llm=OpenAI())      # no tools; answers from the LLM alone
# The message queue carries messages between the control plane and the agent services
message_queue = SimpleMessageQueue(port=8000)

# The control plane tracks tasks; its orchestrator decides which service handles each one
control_plane = ControlPlaneServer(
    message_queue=message_queue,
    host="0.0.0.0",
    orchestrator=AgentOrchestrator(llm=OpenAI(model="gpt-4-turbo")),
    port=8001,
)

# Each agent is wrapped in a service so it can run (and scale) independently
agent_server_1 = AgentService(
    agent=agent1,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
    host="0.0.0.0",
    port=8002,
)
agent_server_2 = AgentService(
    agent=agent2,
    message_queue=message_queue,
    description="Useful for getting random dumb facts.",
    service_name="dumb_fact_agent",
    host="0.0.0.0",
    port=8003,
)
Launching the System (Test Version)
For local testing, use the LocalLauncher to process a single message:
from llama_agents import LocalLauncher
launcher = LocalLauncher([agent_server_1, agent_server_2], control_plane, message_queue)
result = launcher.launch_single("What is the secret fact?")
print(f"Result: {result}")
For a production setup, launch each component as an independent service for better scalability:
from llama_agents import ServerLauncher, CallableMessageConsumer

# Consumer that prints any final task results addressed to a "human"
def handle_result(message) -> None:
    print("Got result:", message.data)

human_consumer = CallableMessageConsumer(handler=handle_result, message_type="human")
launcher = ServerLauncher(
    [agent_server_1, agent_server_2], control_plane, message_queue,
    additional_consumers=[human_consumer],
)
launcher.launch_servers()
Great! Now we have our microservices hosted on our machine. Let’s test them from a client’s perspective by calling the hosted service:
import time
from llama_agents import LlamaAgentsClient, AsyncLlamaAgentsClient

client = LlamaAgentsClient("http://0.0.0.0:8001")  # replace with your control-plane host and port

def poll_result(query: str):
    """Create a task, then poll until its result is ready."""
    task_id = client.create_task(query)
    while True:
        try:
            return client.get_task_result(task_id=task_id)
        except Exception:
            time.sleep(1)  # result not ready yet; wait and retry

result = poll_result("What is the secret fact?")
print(result)
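If you are inside an async application, the same polling pattern works with AsyncLlamaAgentsClient. Here is a sketch, under the assumption that the async client mirrors the sync client’s create_task and get_task_result methods as awaitables:
import asyncio
from llama_agents import AsyncLlamaAgentsClient

async def poll_result_async(query: str):
    # Assumption: the async client exposes the same methods as awaitables.
    client = AsyncLlamaAgentsClient("http://0.0.0.0:8001")
    task_id = await client.create_task(query)
    while True:
        try:
            return await client.get_task_result(task_id=task_id)
        except Exception:
            await asyncio.sleep(1)  # result not ready yet

print(asyncio.run(poll_result_async("What is the secret fact?")))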
That’s awesome: we have built a basic microservice setup to host our AI agents!
Conclusion
In conclusion, the Llama-Agents framework exemplifies the future of AI development by facilitating the creation of robust, scalable, and autonomous agent systems. As we continue to explore the possibilities of AI, frameworks like Llama-Agents bring us a step closer to achieving Artificial General Intelligence (AGI).
Embrace these advancements and stay ahead in the ever-evolving field of AI by experimenting with and implementing agent-based microservices.
Feel free to write to me on LinkedIn 📩