Building Your First Agent
This tutorial will guide you through creating and running your first AI agent using Agent Forge's decorator-based architecture. You'll learn how to define agents with decorators, configure LLM providers, and orchestrate agent execution.
Prerequisites
- Agent Forge installed (see Installation)
- An LLM API key (e.g., from OpenAI) set in your environment variables
- TypeScript configured with decorator support
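Agent Forge's decorators may rely on TypeScript's legacy decorator semantics; if so, your tsconfig.json needs them enabled along these lines (the exact flags depend on the version you installed — verify against the Installation guide):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```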
Step 1: Set Up Your Project
First, create a new file for your agent application:
// my-first-agent.ts
import * as dotenv from "dotenv";
import { 
  agent, 
  llmProvider, 
  forge, 
  readyForge, 
  Agent, 
  AgentForge, 
  LLMProvider 
} from "agent-forge";
// Load environment variables
dotenv.config();
Step 2: Define Your Agent with the @agent Decorator
Agent Forge uses decorators to define agent configurations declaratively. Here's how to create a simple research agent:
@agent({
  name: "ResearchAssistant",
  role: "Research Specialist",
  description: "An AI agent that specializes in research and information gathering",
  objective: "Find accurate and relevant information on any given topic",
  model: process.env.LLM_API_MODEL || "gpt-4",
  temperature: 0.7
})
class ResearchAgent extends Agent {}
Understanding the @agent decorator:
- name: Unique identifier for your agent - used to reference it in your application
- role: The agent's primary function or persona - influences how it responds
- description: What the agent does - provides context for the LLM
- objective: The agent's specific goal - guides the agent's behavior
- model: Which LLM model to use (e.g., "gpt-4", "gpt-3.5-turbo", "claude-3-opus")
- temperature: Controls creativity (0.0 = focused and deterministic, 1.0 = creative and random)
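To see the mechanics at work, here is a minimal, self-contained sketch of how a configuration decorator can attach metadata to a class. This illustrates the pattern only — it is not Agent Forge's actual implementation — and the decorator is applied as a plain function call so it runs without any particular tsconfig decorator settings:

```typescript
// Minimal sketch of a config-attaching class decorator (NOT agent-forge internals).
interface AgentConfig {
  name: string;
  role: string;
  model: string;
  temperature?: number;
}

// Returns a class decorator that stores the config as a static property.
function agentConfig(config: AgentConfig) {
  return function <T extends new (...args: any[]) => object>(target: T): T {
    (target as any).config = config;
    return target;
  };
}

class ResearchAgent {}

// Equivalent to writing `@agentConfig({...}) class ResearchAgent {}`
const Decorated = agentConfig({
  name: "ResearchAssistant",
  role: "Research Specialist",
  model: "gpt-4",
  temperature: 0.7,
})(ResearchAgent);

console.log((Decorated as any).config.name); // ResearchAssistant
```

The key idea: the decorator runs once at class-definition time, so the configuration is available as metadata before any instance is created.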
Step 3: Configure Your Application with @llmProvider and @forge
Create the main application class that will manage your agents using the decorator pattern:
@llmProvider(process.env.LLM_PROVIDER as LLMProvider, {
  apiKey: process.env.LLM_API_KEY
})
@forge()
class MyFirstAgentApp {
  static forge: AgentForge;
  static async run() {
    // Initialize the framework with your agent classes
    await readyForge(MyFirstAgentApp, [ResearchAgent]);
    
    console.log("Agent Forge initialized!");
    console.log("Available agents:", MyFirstAgentApp.forge.getAgents().map(a => a.name));
    
    // Run your agent
    const result = await MyFirstAgentApp.forge.runAgent(
      "ResearchAssistant",
      "What are the latest developments in quantum computing?"
    );
    
    console.log("\nAgent execution completed!");
    console.log("Response:", result.output);
    console.log("Execution Time:", result.metadata.executionTime, "ms");
    console.log("Tokens Used:", result.metadata.tokenUsage?.total || "N/A");
    
    process.exit(0);
  }
}
// Start the application
MyFirstAgentApp.run().catch(console.error);
Understanding the decorators:
- @llmProvider: Configures the LLM provider and credentials for all agents
- @forge(): Creates the main AgentForge instance and makes it available as a static property
- readyForge(): Initializes agent classes and registers them with the forge
Step 4: Complete Working Example
Here's the complete, runnable example with error handling and enhanced output:
// my-first-agent.ts
import * as dotenv from "dotenv";
import { 
  agent, 
  llmProvider, 
  forge, 
  readyForge, 
  Agent, 
  AgentForge, 
  LLMProvider 
} from "agent-forge";
dotenv.config();
@agent({
  name: "ResearchAssistant",
  role: "Research Specialist", 
  description: "An AI agent that specializes in research and information gathering",
  objective: "Find accurate and relevant information on any given topic",
  model: process.env.LLM_API_MODEL || "gpt-4",
  temperature: 0.7
})
class ResearchAgent extends Agent {}
@llmProvider(process.env.LLM_PROVIDER as LLMProvider, {
  apiKey: process.env.LLM_API_KEY
})
@forge()
class MyFirstAgentApp {
  static forge: AgentForge;
  static async run() {
    try {
      console.log("Initializing Agent Forge...");
      
      // Initialize the framework with your agent classes
      await readyForge(MyFirstAgentApp, [ResearchAgent]);
      
      console.log("Agent Forge initialized successfully!");
      console.log("Available agents:", MyFirstAgentApp.forge.getAgents().map(a => a.name));
      
      // Run your agent with a research query
      console.log("\nStarting research task...");
      const result = await MyFirstAgentApp.forge.runAgent(
        "ResearchAssistant",
        "What are the latest developments in quantum computing?"
      );
      
      console.log("\nAgent execution completed!");
      console.log("Response:", result.output);
      console.log("Execution Time:", result.metadata.executionTime, "ms");
      console.log("Tokens Used:", result.metadata.tokenUsage?.total || "N/A");
      
      // Demonstrate agent metadata access
      const agent = MyFirstAgentApp.forge.getAgent("ResearchAssistant");
      if (agent) {
        console.log("\nAgent Details:");
        console.log("   Name:", agent.name);
        console.log("   Role:", agent.role);
        console.log("   Model:", agent.model);
      }
      
    } catch (error) {
      console.error("Error:", error);
      if (error instanceof Error) {
        console.error("Details:", error.message);
      }
      // Signal failure to the shell instead of always exiting with 0
      process.exitCode = 1;
    } finally {
      process.exit();
    }
  }
}
MyFirstAgentApp.run();
Step 5: Run Your Agent
Execute your agent with ts-node:
npx ts-node my-first-agent.ts
You should see output similar to:
Initializing Agent Forge...
Agent Forge initialized successfully!
Available agents: [ 'ResearchAssistant' ]

Starting research task...

Agent execution completed!
Response: Quantum computing has seen significant advancements recently, including improved quantum error correction, developments in quantum supremacy demonstrations, and progress in quantum networking protocols...
Execution Time: 2341 ms
Tokens Used: 156

Agent Details:
   Name: ResearchAssistant
   Role: Research Specialist
   Model: gpt-4
Understanding the Decorator Architecture
@agent Decorator Deep Dive
The @agent decorator is the core of agent definition in Agent Forge:
@agent({
  name: "MyAgent",           // Required: Unique identifier
  role: "Assistant",         // Required: Defines the agent's persona
  description: "...",        // Required: What the agent does
  objective: "...",          // Required: The agent's goal
  model: "gpt-4",           // Required: LLM model to use
  temperature: 0.7,         // Optional: Creativity level (default: 0.7)
  maxTokens: 2000          // Optional: Maximum response tokens
})
class MyAgent extends Agent {}
@llmProvider Decorator
Configures the LLM provider for all agents in your application:
@llmProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
  // Optional provider-specific settings
  organizationId: process.env.OPENAI_ORG_ID,
  baseUrl: process.env.CUSTOM_API_URL
})
Supported providers include:
- "openai" - OpenAI GPT models
- "anthropic" - Claude models
- "google" - Gemini models
- "azure-openai" - Azure OpenAI Service
- And many more via the token.js library
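Because the provider and model are read from environment variables in this tutorial, switching providers is typically just an environment change. For example (the model name below is illustrative — check your provider's documentation for current model identifiers):

```
LLM_PROVIDER=anthropic
LLM_API_KEY=your_anthropic_api_key_here
LLM_API_MODEL=claude-3-opus
```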
@forge Decorator
Creates the main AgentForge instance that manages all agents:
@forge()
class MyApp {
  static forge: AgentForge; // Automatically populated by the decorator
}
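As a rough sketch of the mechanism (an assumption about the pattern, not the framework's actual code), a class decorator can assign the static property at class-definition time. Plain function application is used here so the example runs without any tsconfig decorator flags:

```typescript
// Sketch of a decorator that populates a static property (NOT agent-forge internals).
interface ForgeLike {
  agents: Map<string, object>;
}

function forgeDecorator() {
  return function <T extends new (...args: any[]) => object>(target: T): T {
    // Attach a fresh registry as the class's static `forge` property.
    const registry: ForgeLike = { agents: new Map<string, object>() };
    (target as any).forge = registry;
    return target;
  };
}

class DemoApp {
  static forge: ForgeLike;
}

// Equivalent to stacking `@forgeDecorator()` on the class declaration.
forgeDecorator()(DemoApp);

console.log(DemoApp.forge.agents.size); // 0
```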
readyForge Function
Initializes agent classes and makes them available to the forge:
await readyForge(MyApp, [AgentClass1, AgentClass2, AgentClass3]);
This function:
- Instantiates each agent class with its decorator configuration
- Registers agents with the forge instance
- Sets up the execution environment
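Conceptually, the steps above look something like the following simplified sketch (the real readyForge is async and does more setup; this only illustrates the instantiate-and-register idea):

```typescript
// Simplified sketch of an initializer like readyForge (NOT the real implementation).
interface NamedAgent {
  name: string;
}

interface AppWithForge {
  forge: { agents: Map<string, NamedAgent> };
}

// The real readyForge is async; this sketch is synchronous for clarity.
function readyForgeSketch(
  app: AppWithForge,
  agentClasses: Array<new () => NamedAgent>
): void {
  for (const AgentClass of agentClasses) {
    // Instantiate each agent class and register the instance by name.
    const instance = new AgentClass();
    app.forge.agents.set(instance.name, instance);
  }
}

class DemoAgent implements NamedAgent {
  name = "DemoAgent";
}

const demoApp: AppWithForge = { forge: { agents: new Map() } };
readyForgeSketch(demoApp, [DemoAgent]);
console.log(demoApp.forge.agents.has("DemoAgent")); // true
```

This is why agents are later addressable by name, e.g. `forge.runAgent("ResearchAssistant", ...)`: registration keys each instance by its configured name.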
Adding Tools to Your Agent
You can enhance your agent with tools using the @tool decorator:
import { tool, WebSearchTool } from "agent-forge";
@tool(WebSearchTool)
@agent({
  name: "EnhancedResearcher",
  role: "Web-Enhanced Research Specialist",
  description: "A research agent with web search capabilities",
  objective: "Find the most current and accurate information using web search",
  model: "gpt-4",
  temperature: 0.6
})
class EnhancedResearchAgent extends Agent {}
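Note that stacked class decorators are applied bottom-up, so @agent configures the class first and @tool then augments the configured class. A tiny self-contained sketch of that evaluation order (plain function application stands in for decorator syntax so it runs without tsconfig flags):

```typescript
// Demonstrates that stacked class decorators apply bottom-up.
const applied: string[] = [];

function agentLike() {
  return <T>(target: T): T => {
    applied.push("agent");
    return target;
  };
}

function toolLike() {
  return <T>(target: T): T => {
    applied.push("tool");
    return target;
  };
}

class EnhancedAgent {}

// Equivalent to:
// @toolLike()
// @agentLike()
// class EnhancedAgent {}
toolLike()(agentLike()(EnhancedAgent));

console.log(applied); // [ 'agent', 'tool' ]
```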
Error Handling Best Practices
Always wrap your agent execution in try-catch blocks:
static async run() {
  try {
    await readyForge(MyApp, [MyAgent]);
    const result = await MyApp.forge.runAgent("MyAgent", "task");
    console.log(result.output);
  } catch (error) {
    if (error instanceof Error) {
      console.error("Agent execution failed:", error.message);
    } else {
      console.error("Unknown error:", error);
    }
  }
}
Environment Variable Configuration
Create a .env file with your configuration:
LLM_PROVIDER=openai
LLM_API_KEY=your_openai_api_key_here
LLM_API_MODEL=gpt-4
And reference them in your decorators:
@llmProvider(process.env.LLM_PROVIDER as LLMProvider, {
  apiKey: process.env.LLM_API_KEY
})
@agent({
  model: process.env.LLM_API_MODEL || "gpt-3.5-turbo"
})
Next Steps
Now that you've built your first agent with decorators, explore these next topics:
- Decorator Pattern - Learn about advanced decorator patterns
- Adding Tools - Add capabilities like web search, calculations, and more
- Team Basics - Create workflows and teams
- Advanced Features - RAG integration, streaming, and production considerations
Try creating multiple agents with different roles and coordinating them in teams and workflows!