AI agents are software systems designed to reason, plan, and act toward achieving defined goals. They move beyond simple automation by making decisions, adapting to changing information, and coordinating multiple steps to complete complex tasks.
An agent’s effectiveness rests on a handful of core components working together.
At their core, agents use Large Language Models (LLMs) as their reasoning engine. The true capability of an agent, however, comes from combining that intelligence with supporting components such as tools, memory, and planning, enabling it to act effectively in dynamic, real-world environments.
While LLMs provide the reasoning power for agents, they need structured approaches to handle complex tasks effectively. This is where agentic design patterns come in. These are proven strategies that guide agents to reason, act, and improve over time.
Among the most common and effective patterns for building practical agents are ReAct, which interleaves step-by-step reasoning with tool actions, and Reflection, in which the agent critiques and refines its own output.
These patterns are often combined. For example, a multi-agent system may use ReAct for individual agents while employing Reflection at the system level to refine outputs. Together, they form a foundation for building more capable, reliable, and transparent agents that can tackle increasingly complex tasks.
Now, let’s build a simple AI agent from scratch.
Let’s put everything together by building a simple agent using CrewAI. For this example, we’ll create a blog-writing agent that can research topics, gather information, and generate well-structured content.
A tool is a function that an agent can call to perform actions. Tools expand what the model can do — fetching real-time data, querying APIs, summarizing documents, or even publishing results.
Every agentic framework provides some predefined tools for common tasks such as web search or file operations, but for specific workflows you often need to define custom tools. In the case of a blog-writing agent, the first step is being able to gather research material for a given topic.
Here’s a simple custom tool that does that:
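A minimal sketch of such a tool follows. The topic strings and notes are invented placeholders, and the function is written as plain Python so it is easy to test; in CrewAI you would register it with the framework’s `@tool` decorator (`from crewai.tools import tool`) so the agent can invoke it by name.

```python
# Minimal research tool. In CrewAI, decorate this with @tool("Research Fetcher")
# (from crewai.tools import tool) so the agent can call it by name.
def fetch_research_data(topic: str) -> str:
    """Return background research material for a given blog topic."""
    # Hard-coded placeholder notes for demonstration only; a real tool
    # would query a web-search API or a knowledge base here.
    notes = {
        "ai agents": (
            "AI agents pair an LLM reasoning core with tools and memory, "
            "letting them plan multi-step tasks and act on live data."
        ),
    }
    return notes.get(topic.lower().strip(), f"No stored research for '{topic}'.")
```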
This is a simple example for demonstration. In a real-world setup, the fetch_research_data function would call an external API (such as a web search service or knowledge base) or scrape trusted sources to return actual, up-to-date research.
With this tool in place, our blog-writing agent will be able to collect background material before drafting any content.
The large language model (LLM) is the reasoning core of our agent. It processes inputs, breaks down tasks, and generates structured outputs. For a blog-writing agent, this means analyzing research material, drafting outlines, and creating coherent content that aligns with the topic.
Not all models are equally suited for this. For agentic workflows, it’s best to use models that are optimized for reasoning and capable of working with tools. While large foundational models provide strong general performance, smaller or fine-tuned models can be more efficient and cost-effective for specific tasks like content generation.
Clarifai provides a variety of models accessible through an OpenAI-compatible API, making it easy to integrate them into an agent’s workflow. For this blog-writing agent, we’ll use DeepSeek-R1-Distill-Qwen-7B.
Before configuring the model, you’ll need to set your Clarifai Personal Access Token (PAT) as an environment variable so the API can authenticate your requests.
Here’s how to configure it:
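One way to wire this up is with CrewAI’s `LLM` wrapper. Note that the base URL and model identifier below are assumptions drawn from Clarifai’s OpenAI-compatible API conventions; verify the exact values on your model’s page in Clarifai.

```python
import os

from crewai import LLM  # CrewAI's wrapper for OpenAI-compatible endpoints

# Authenticate with your Clarifai Personal Access Token (PAT),
# e.g. run: export CLARIFAI_PAT="your-pat-here" in your shell first.
CLARIFAI_PAT = os.environ.get("CLARIFAI_PAT", "")

# NOTE: base_url and the model path are assumptions; check Clarifai's
# documentation for the exact values for your account and model.
llm = LLM(
    model="openai/deepseek-ai/deepseek-chat/models/DeepSeek-R1-Distill-Qwen-7B",
    base_url="https://api.clarifai.com/v2/ext/openai/v1",
    api_key=CLARIFAI_PAT,
)
```

Because the endpoint is OpenAI-compatible, swapping in a different Clarifai-hosted model is just a matter of changing the `model` string.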
This configuration connects our agent to the DeepSeek-R1-Distill-Qwen-7B model using the OpenAI-compatible endpoint. In production, you could easily swap this model for another depending on your content needs — for example, a larger model for more complex reasoning or a smaller one for faster drafts.
With this setup, our blog-writing agent now has a functional core that can process research inputs and turn them into structured, well-written content.
With our research tool defined and the model configured, we can now assemble the core components of our system:
Agent: The intelligent entity with a defined role, goal, and backstory.
Task: The specific work we want the agent to accomplish.
Crew: The orchestrator that manages agents and tasks.
For our use case, we’ll create a blog-writing specialist who can gather research, analyze it, and generate a structured draft.
In this setup:
The agent uses the fetch_research_data tool to gather information before drafting the blog.
Execution is handled by a Crew that manages the agent and its task. While this example uses only one agent, the same structure can easily scale to multi-agent projects.
With these components in place, the agent has everything it needs: a clear purpose, the right tools, and an actionable task to deliver a well-structured, high-quality blog draft.
To execute our setup, we call project_crew.kickoff(). This method triggers the full workflow: the agent interprets the task, uses the research tool to gather insights, reasons through the information, and generates a complete blog draft.
Here’s the entire code:
If you are looking to build and deploy your own custom MCP servers, check out our detailed blog tutorial here. Once built, these MCP servers can be integrated as tools within your AI agents, enabling you to create MCP-powered agentic applications. We’ll dive deeper into this integration in upcoming tutorials.
In this guide, we covered what AI agents are, their key components and design patterns, and built a blog-writing agent using a Clarifai-hosted reasoning model, showing how tools, memory, and reasoning work together to create dynamic, goal-driven systems.
That said, it’s important to remember that agents are not always the right choice. When building applications with LLMs, it’s best to start simple and only add complexity when it is needed. For many use cases, workflows or even well-structured single LLM calls with retrieval and in-context examples can be enough.
Workflows are predictable and consistent for well-defined tasks, while agents become valuable when you need flexibility, adaptive reasoning, or model-driven decision-making at scale. Agentic systems often trade off latency and cost for better task performance, so consider where that tradeoff makes sense for your application.
If you want to dive deeper into building more advanced applications, explore more AI agent examples in the GitHub repo. Check out the documentation to learn how you can build with other agent frameworks such as Google SDK, OpenAI SDK, and Vercel AI SDK.
© 2023 Clarifai, Inc. · Terms of Service · Content Takedown · Privacy Policy