AI Agents
Keep hearing that buzzword "Agentic" and wondering what the hell it means?
It's not a user-facing "agent" like a customer service rep.
What makes an LLM agent "agentic" is:
- It has access to a menu of tools it can use: edit a file, look up data, launch nukes.
- It's provided a bunch of context about the task, the operating environment, relevant external data, and the list of available tools.
- The agent then operates in a loop (see the sketch after this list):
  1. Given the current conversation, it chooses whether to:
     - Keep thinking/talking
     - Use one of the tools at its disposal
     - Decide there's no more work to do on the current task (success or failure) and return control to the user
  2. If it chose a tool, it calls it, usually with parameters (a "tool call").
  3. The tool result is added to the conversation.
  4. Go to 1.
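Here's a rough sketch of that loop in Python. Everything in it is illustrative: `call_model`, the tool names, and the message format are stand-ins, not any particular vendor's API.

```python
# Illustrative only: call_model stands in for whatever LLM API you use,
# and the message/tool-call format here is made up, not a vendor's schema.

TOOLS = {
    "read_file": lambda path: open(path).read(),
    "web_search": lambda query: f"(pretend results for {query!r})",
}

def run_agent(conversation, call_model):
    """Loop until the model stops asking for tools, then return control."""
    while True:
        # 1. The model sees the whole conversation plus the tool menu.
        reply = call_model(conversation, tools=list(TOOLS))
        conversation.append(reply)

        tool_call = reply.get("tool_call")
        if tool_call is None:
            # No tool call: the model decided it's done for now.
            return conversation  # return control to the user

        # 2. Execute the tool with the parameters the model asked for.
        result = TOOLS[tool_call["name"]](**tool_call["args"])

        # 3. Add the result to the conversation, then go back to 1.
        conversation.append({"role": "tool", "name": tool_call["name"],
                             "content": str(result)})
```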
How it works:
- The LLM can “speak” a special phrase - the tool call
- The LLM runs inside a ConversationContainer that listens for tool calls
- The container executes the tool and “tells” the LLM the result
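Concretely, the “special phrase” is just structured text in the model's output that the container knows how to spot and parse. The exact format varies by vendor (JSON tool_calls, XML tags, etc.); here's a made-up JSON-flavored version:

```python
import json

# Hypothetical model output: the "special phrase" is just structured text.
# The actual format depends on the model/vendor; this JSON shape is made up.
model_output = '{"tool_call": {"name": "web_search", "args": {"query": "weather in Tokyo"}}}'

parsed = json.loads(model_output)
tool_call = parsed["tool_call"]           # the container detects the tool call...
result = "Tokyo: 18°C, light rain"        # ...runs the tool (faked here)...

# ...and "tells" the LLM the result by appending it to the conversation.
tool_message = {"role": "tool", "name": tool_call["name"], "content": result}
```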
Agent examples:
- ChatGPT decides to use a web search tool to get more information
- A customer service chatbot looks up your order in the order system
- A coding assistant reads/writes your files or runs your code
- Anything that calls an MCP server
It's important to recognize that the tool call result may not be the end of the response. The LLM looks at what it got back and predicts what the next tokens should be: some “thinking”, or commentary, or a summary, or another tool call, or an “end of turn” token.
You can end up with a very long chain of tool calls.
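For example, a single user turn might expand into something like this behind the scenes (a made-up transcript; the roles, tool names, and dates are all invented for illustration):

```python
# A made-up transcript: one user turn expands into a chain of tool calls,
# with the model thinking in between, before it finally ends its turn.
conversation = [
    {"role": "user",      "content": "What's in my TODO file, and is anything overdue?"},
    {"role": "assistant", "tool_call": {"name": "read_file", "args": {"path": "TODO.md"}}},
    {"role": "tool",      "name": "read_file", "content": "- ship report by 2024-06-01"},
    {"role": "assistant", "content": "The report was due 2024-06-01; let me check today's date."},
    {"role": "assistant", "tool_call": {"name": "get_date", "args": {}}},
    {"role": "tool",      "name": "get_date", "content": "2024-06-10"},
    {"role": "assistant", "content": "Yes: 'ship report' was due 2024-06-01 and is 9 days overdue."},
]
```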
Memory is often a tool call. Save memory, search memories, etc.
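A sketch of what that might look like: memory is just two more tools on the menu. The in-memory list here is a placeholder; real systems usually back this with a database or vector index.

```python
# Memory implemented as ordinary tools. The "store" is just a Python list
# here; real agents typically use a database or vector store instead.
MEMORIES = []

def save_memory(text):
    MEMORIES.append(text)
    return "saved"

def search_memories(query):
    return [m for m in MEMORIES if query.lower() in m.lower()]

TOOLS = {
    "save_memory": save_memory,          # the model calls these
    "search_memories": search_memories,  # like any other tool
}
```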
There can be many layers of agents, or multiple agents working as a team—there are tons of different architectures.
Importantly, there's a distinction between a single LLM producing one token sequence, with tool calls simply executed by the ConversationContainer, and an orchestrator (another type of ConversationContainer) that runs more deterministic logic around the calls (cf. Claude Code hooks), or multiple models validating or reacting to the inputs and outputs of the tool calls (is this an approved tool call? Is there a security risk hiding in there?).
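A sketch of that kind of orchestrator logic: a deterministic check that runs before any tool call is executed. The allowlist and the "risky arguments" check below are placeholders for whatever policy you actually enforce (approval lists, secret scanning, a second model reviewing the call, etc.), not any particular product's hooks.

```python
# Deterministic guard wrapped around tool execution. Both checks are
# placeholder policies, purely for illustration.
APPROVED_TOOLS = {"read_file", "web_search", "save_memory", "search_memories"}

def guarded_execute(tool_call, tools):
    name, args = tool_call["name"], tool_call["args"]

    if name not in APPROVED_TOOLS:
        return f"Blocked: {name!r} is not an approved tool."

    if any("rm -rf" in str(v) for v in args.values()):
        return "Blocked: arguments look risky."

    # Passed the deterministic checks; actually run the tool.
    return tools[name](**args)
```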