Lutz Roeder · May 24, 2025 · Updated Jun 21, 2025

Tiny Agents

At their core, many AI agents are simple loops: language models calling tools to observe and act. Three areas where this already works, and will accelerate further as models improve, are coding agents that edit codebases, computer-use agents that control software by clicking and typing like a human, and deep research agents that plan and summarize multiple web searches run in parallel.

With the OpenAI Agents SDK, you can build minimal versions of such agents with surprisingly little code: around 200 lines for a coding agent, and 100 lines for a computer-use agent or deep research agent. The three examples are available in the lutzroeder/agents repository on GitHub.
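To make the loop concrete, here is a minimal sketch of an agent with a single tool, assuming the openai-agents Python package; the read_file tool and the prompt are illustrative, not code from the repository. The SDK runs the model, executes any tool calls, and feeds the results back until the model produces a final answer.

```python
# Minimal agent loop with the OpenAI Agents SDK (pip install openai-agents).
# read_file is a hypothetical example tool; the SDK handles the model/tool loop.
from agents import Agent, Runner, function_tool


@function_tool
def read_file(path: str) -> str:
    """Return the contents of a text file."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()


agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant. Use tools when needed.",
    tools=[read_file],
)

result = Runner.run_sync(agent, "Summarize the contents of notes.txt")
print(result.final_output)
```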

Coding Agents

Coding agents use LLMs to modify code across multiple files, reading and editing source files and running shell commands much like a developer would. This makes their behavior intuitive and easy to follow.

One example is the minimal coding agent in the repository, built to work with Claude 4, o3, or Gemini 2.5 Pro by prompting the model to loop through a plan-edit-review cycle. It uses simple tools for file access and a bash command tool for shell operations like running builds or tests.
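A hedged sketch of that tool set follows: a file-writing tool alongside a bash command tool, wired into a plan-edit-review prompt. The tool names, prompt, and limits are illustrative assumptions, not the repository's exact code; a read tool like the one sketched earlier would sit beside them.

```python
# Illustrative coding-agent tools: file access plus a bash command tool for builds/tests.
import subprocess

from agents import Agent, Runner, function_tool


@function_tool
def write_file(path: str, content: str) -> str:
    """Overwrite a file with new content."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"wrote {len(content)} bytes to {path}"


@function_tool
def bash(command: str) -> str:
    """Run a shell command (e.g. a build or test) and return its output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return result.stdout + result.stderr


coding_agent = Agent(
    name="Coder",
    instructions=(
        "Loop through a plan-edit-review cycle: plan the change, edit files, "
        "then run builds or tests with the bash tool and review the output."
    ),
    tools=[write_file, bash],
)

result = Runner.run_sync(coding_agent, "Fix the failing unit test in this project.")
print(result.final_output)
```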

Computer-Use Agents

Computer-use agents let LLMs control real software by simulating user actions on the screen. The model works at the same level as a human user would, viewing the screen and interacting through mouse and keyboard input.

A good starting point is the minimal computer-use agent in the repository, built to work with the OpenAI Computer-Using Agent (CUA) model. It uses a tool to perform actions, like clicking or typing, to accomplish a goal; the same tool takes a screenshot on each turn. With just 100 lines of code, it's a lightweight, hands-on way to explore how a multimodal agent can observe and navigate.
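Below is a sketch of how such an agent can be wired with the Agents SDK's ComputerTool, driving the local desktop through pyautogui. The Computer interface method names, the environment values, and the model settings follow my reading of the SDK and should be treated as assumptions; the repository's implementation may differ.

```python
# Hedged sketch of a computer-use agent: a pyautogui-backed computer plus ComputerTool.
import asyncio
import base64
import io

import pyautogui
from agents import Agent, AsyncComputer, ComputerTool, ModelSettings, Runner


class DesktopComputer(AsyncComputer):
    """Drives the local desktop with pyautogui; interface assumed, not the repository's code."""

    @property
    def environment(self):
        return "mac"  # or "windows" / "ubuntu" / "browser"

    @property
    def dimensions(self):
        return pyautogui.size()

    async def screenshot(self) -> str:
        # The model receives a fresh screenshot as base64 PNG on each turn.
        buffer = io.BytesIO()
        pyautogui.screenshot().save(buffer, format="PNG")
        return base64.b64encode(buffer.getvalue()).decode()

    async def click(self, x, y, button="left"):
        pyautogui.click(x, y, button=button)

    async def double_click(self, x, y):
        pyautogui.doubleClick(x, y)

    async def type(self, text):
        pyautogui.write(text)

    async def keypress(self, keys):
        pyautogui.hotkey(*keys)

    async def move(self, x, y):
        pyautogui.moveTo(x, y)

    async def scroll(self, x, y, scroll_x, scroll_y):
        pyautogui.moveTo(x, y)
        pyautogui.scroll(-scroll_y)

    async def drag(self, path):
        pyautogui.moveTo(*path[0])
        pyautogui.dragTo(*path[-1])

    async def wait(self):
        await asyncio.sleep(1)


async def main():
    agent = Agent(
        name="Computer user",
        instructions="Look at the screen and act step by step to accomplish the task.",
        tools=[ComputerTool(DesktopComputer())],
        model="computer-use-preview",
        model_settings=ModelSettings(truncation="auto"),
    )
    result = await Runner.run(agent, "Open a browser and search for 'tiny agents'.")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```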

Deep Research Agents

A deep research agent combines a planning step with concurrent web searches, executed via tool calling, and uses a summarization agent to compile the findings into a comprehensive final report.

The research agent in the repository uses the o3 and o3-mini models together with GPT-4o web search tools to accomplish this.
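A minimal sketch of that plan, parallel-search, summarize pipeline using the Agents SDK's hosted WebSearchTool is below; the prompts, the line-per-query plan format, and the model assignments are illustrative assumptions rather than the repository's code.

```python
# Hedged sketch of a deep research pipeline: plan queries, search in parallel, write a report.
import asyncio

from agents import Agent, Runner, WebSearchTool

planner = Agent(
    name="Planner",
    instructions="Break the research question into 3-5 distinct web search queries, one per line.",
    model="o3-mini",
)

searcher = Agent(
    name="Searcher",
    instructions="Search the web for the query and return a concise summary of what you find.",
    tools=[WebSearchTool()],
    model="gpt-4o",
)

writer = Agent(
    name="Writer",
    instructions="Combine the search summaries into a comprehensive final report.",
    model="o3",
)


async def research(question: str) -> str:
    plan = await Runner.run(planner, question)
    queries = [q.strip() for q in plan.final_output.splitlines() if q.strip()]
    # Run the web searches concurrently.
    searches = await asyncio.gather(*(Runner.run(searcher, q) for q in queries))
    notes = "\n\n".join(s.final_output for s in searches)
    report = await Runner.run(writer, f"Question: {question}\n\nSearch notes:\n{notes}")
    return report.final_output


if __name__ == "__main__":
    print(asyncio.run(research("What are tiny agents and how are they built?")))
```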

Final Thoughts

These tiny agents aren’t just proofs of concept; they are hands-on playgrounds. With little code, you can follow each step the model takes, tweak behaviors, and understand what’s happening under the hood.