# LLM Integration in Agent Forge
Effective integration with Large Language Models (LLMs) is at the heart of Agent Forge. The framework aims to provide a unified interface for connecting to various LLM providers, abstracting away much of the provider-specific boilerplate.
## Core Component: The `LLM` Class

Agent Forge uses an `LLM` class (or a similarly named abstraction) to manage connections to different language model providers.
```typescript
import { AgentForge, LLM, Agent } from "agent-forge";
import dotenv from "dotenv";

dotenv.config();

const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  throw new Error("OPENAI_API_KEY not set. Please set it in your .env file.");
}

// The first argument is the provider name; the second is the
// provider-specific configuration.
const llmProvider = new LLM("openai", { apiKey });

// The llmProvider instance can be used to initialize an AgentForge instance:
const forge = new AgentForge(llmProvider);

// Or it can be passed directly when creating an Agent:
const agentWithDirectLLM = new Agent({
  name: "DirectLLMAgent",
  role: "Helper",
  objective: "Demonstrate direct LLM usage",
  model: "gpt-3.5-turbo", // Ensure this model is available from your provider
  llm: llmProvider,
});
```
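For completeness, here is a minimal sketch of invoking that agent. It assumes `agent.run()` accepts a prompt string and resolves to the model's response, consistent with the commented-out examples later on this page; verify against your installed version:

```typescript
// Sketch only: run(prompt) returning a Promise with the completion is an
// assumption based on the examples further down this page.
async function main() {
  const result = await agentWithDirectLLM.run("Introduce yourself in one sentence.");
  console.log(result);
}

main().catch(console.error);
```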
**Key Aspects:**

- **Provider Abstraction:** You initialize the `LLM` class by specifying a provider name (e.g., `"openai"`, `"anthropic"`, `"google"`) and an options object containing the necessary credentials (such as `apiKey`) and other configuration (see the sketch below).
- **Unified Interface:** Once initialized, the `llmProvider` object offers a consistent way for agents to interact with the chosen LLM, regardless of the underlying provider. This typically includes methods for sending prompts, receiving completions, and managing conversation history.
- **Powered by `token.js`:** LLM integration is often powered by libraries like `token.js`, which gives Agent Forge broad support for LLM providers. For a detailed list of supported providers and their specific configuration options, refer to the documentation of the underlying library (e.g., the `token.js` documentation).
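Because the interface is provider-agnostic, switching providers is a change at construction time only. The following sketch assumes the provider name `"anthropic"` and the model string below are supported by the underlying library; check its provider list before relying on either:

```typescript
import { LLM, Agent } from "agent-forge";

// Assumption: "anthropic" is a valid provider name in the underlying
// library, and the model string below is one it can serve.
const anthropicKey = process.env.ANTHROPIC_API_KEY;
if (!anthropicKey) {
  throw new Error("ANTHROPIC_API_KEY not set.");
}

const anthropicProvider = new LLM("anthropic", { apiKey: anthropicKey });

// The agent definition is unchanged; only the provider and model differ.
const claudeAgent = new Agent({
  name: "ClaudeAgent",
  role: "Helper",
  objective: "Demonstrate provider switching",
  model: "claude-3-haiku-20240307", // assumed model name; use one your provider offers
  llm: anthropicProvider,
});
```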
## Configuring LLMs for Agents

There are two primary ways an agent gets its LLM configuration:
- **Via the `AgentForge` Instance:** When you create an `AgentForge` instance, you typically pass a default `llmProvider`. Agents loaded or created via this `forge` instance (e.g., using `forge.loadAgent()`, or agents within workflows/teams managed by `forge`) may inherit this default LLM configuration.

  ```typescript
  import { AgentForge, LLM } from "agent-forge";
  import dotenv from "dotenv";

  dotenv.config();

  async function configureAgentForgeWithLLM() {
    const apiKey = process.env.OPENAI_API_KEY;
    if (!apiKey) {
      console.error("API Key not found. Please set OPENAI_API_KEY in your .env file.");
      return;
    }

    const llmProvider = new LLM("openai", { apiKey });
    const forge = new AgentForge(llmProvider); // Default LLM for this forge instance

    // Assumes "./my-agent.yaml" exists and contains a valid agent definition.
    try {
      const agent = await forge.loadAgent("./my-agent.yaml");
      // The loaded agent uses the forge's LLM provider by default, e.g.:
      // const result = await agent.run("Hello world!");
      // console.log(result);
    } catch (error) {
      console.error("Failed to load agent:", error);
    }
  }

  // To run this example:
  // configureAgentForgeWithLLM();
  ```
- **Directly in the Agent Definition:**
  - **Programmatic:** When creating an `Agent` instance programmatically, you can pass the `llmProvider` directly in the constructor options:

    ```typescript
    import { Agent, LLM } from "agent-forge";
    import dotenv from "dotenv";

    dotenv.config();

    function createAgentWithDirectLLM() {
      const apiKey = process.env.OPENAI_API_KEY;
      if (!apiKey) {
        console.error("API Key not found. Please set OPENAI_API_KEY in your .env file.");
        return;
      }

      const llmProvider = new LLM("openai", { apiKey });
      const agent = new Agent({
        name: "MyDirectAgent",
        role: "Assistant",
        objective: "To be helpful and demonstrate direct LLM assignment",
        model: "gpt-4", // Ensure this model is compatible with the provider
        llm: llmProvider,
      });

      // You can now run this agent, e.g.:
      // async function runAgent() {
      //   const result = await agent.run("What is Agent Forge?");
      //   console.log(result);
      // }
      // runAgent();
    }

    // To run this example:
    // createAgentWithDirectLLM();
    ```

  - **YAML Configuration:** Agent YAML definitions typically include fields like `model`, `temperature`, etc. The framework uses these to configure the LLM for that specific agent. It might use the `AgentForge` instance's default provider but override parameters such as the model name or temperature based on the YAML (a combined sketch follows this list):

    ```yaml
    name: MyFineTunedAgent
    role: Specialist
    model: gpt-4-turbo-preview # Specific model for this agent
    temperature: 0.2
    # ... other properties
    ```
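Putting these together: the forge supplies the provider and credentials, while the YAML supplies per-agent parameters. A minimal sketch, assuming `forge.loadAgent()` accepts a relative path as above and that the hypothetical file `my-fine-tuned-agent.yaml` contains the YAML shown:

```typescript
import { AgentForge, LLM } from "agent-forge";
import dotenv from "dotenv";

dotenv.config();

async function main() {
  // The forge supplies the provider and API key...
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY not set.");
  }
  const llmProvider = new LLM("openai", { apiKey });
  const forge = new AgentForge(llmProvider);

  // ...while the YAML supplies per-agent parameters such as
  // model: gpt-4-turbo-preview and temperature: 0.2.
  const agent = await forge.loadAgent("./my-fine-tuned-agent.yaml"); // hypothetical file name
  const result = await agent.run("Summarize the goals of Agent Forge.");
  console.log(result);
}

main().catch(console.error);
```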
## Benefits of This Approach

- **Flexibility:** Easily switch LLM providers or models by changing the initialization of the `LLM` class or updating YAML files (see the sketch below).
- **Simplicity:** Reduces the amount of boilerplate code needed to connect to different LLMs.
- **Consistency:** Agents interact with LLMs through a standardized API within Agent Forge.
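As one illustration of that flexibility, the sketch below selects the provider at startup from an environment variable, so the rest of the application never changes. The `LLM_PROVIDER` variable and the key-naming convention are assumptions introduced purely for this example:

```typescript
import { LLM } from "agent-forge";
import dotenv from "dotenv";

dotenv.config();

// Hypothetical convention: LLM_PROVIDER names the provider, and its API key
// lives in <PROVIDER>_API_KEY (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY).
const provider = process.env.LLM_PROVIDER ?? "openai";
const apiKeyVar = `${provider.toUpperCase()}_API_KEY`;
const apiKey = process.env[apiKeyVar];

if (!apiKey) {
  throw new Error(`${apiKeyVar} not set. Please set it in your .env file.`);
}

// Everything downstream uses the same unified interface.
const llmProvider = new LLM(provider, { apiKey });
```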
Understanding how LLMs are integrated and configured is crucial for tailoring agent behavior, managing costs, and leveraging the specific strengths of different language models.
## Next Steps
- Review the Agent YAML definition to see how model parameters are specified.
- Explore how different Execution Patterns utilize LLMs.