
🧮 Building an Agentic Workflow with Tools and Memory
In this tutorial, we will build an agentic workflow around a large language model (LLM), integrating tools and memory to create an autonomous agent capable of reasoning, acting, and remembering past interactions.
We'll use Python, LangChain, and OpenAI's API as an example.
1. Setup
First, install the required packages:
pip install openai langchain
Then, import the necessary libraries:
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, Tool
from langchain.memory import ConversationBufferMemory
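ChatOpenAI reads your OpenAI credentials from the environment, so make sure an API key is available before running the examples. A minimal sketch, assuming the key is stored in the standard OPENAI_API_KEY environment variable:

import os

# ChatOpenAI looks up the OpenAI API key from this environment variable.
# Replace the placeholder with your own key, or export it in your shell instead.
os.environ["OPENAI_API_KEY"] = "sk-..."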
2. Define Tools
def calculator_tool(query: str) -> str:
    """Simple calculator tool."""
    # Note: eval() is used here only for brevity; avoid it on untrusted input.
    try:
        return str(eval(query))
    except Exception as e:
        return f"Error: {e}"

def wiki_search_tool(query: str) -> str:
    """Mock Wikipedia search."""
    knowledge_base = {
        "Python": "Python is a programming language used for AI, "
                  "web development, and more.",
        "Australia": "Australia is a country and continent known "
                     "for its wildlife and landscapes."
    }
    return knowledge_base.get(query, "No results found.")
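Before wiring these functions into an agent, it can help to sanity-check them directly. The calls below simply exercise the two tools defined above:

# Quick manual checks, independent of any agent.
print(calculator_tool("12 * 7"))    # -> 84
print(wiki_search_tool("Python"))   # -> short summary from the mock knowledge base
print(wiki_search_tool("Quantum"))  # -> "No results found."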
Wrap these functions as LangChain tools:
tools = [
    Tool(
        name="Calculator",
        func=calculator_tool,
        description="Performs arithmetic calculations."
    ),
    Tool(
        name="WikiSearch",
        func=wiki_search_tool,
        description="Searches for short summaries of topics."
    )
]
3. Configure Memory
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)
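ConversationBufferMemory stores the full message history under the chat_history key. As a quick, optional check, you can inspect what the agent will see at any point:

# Inspect the conversation buffer; it is empty until the agent has run.
print(memory.load_memory_variables({}))
# -> {'chat_history': []}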
4. Initialize the Agent
llm = ChatOpenAI(temperature=0)

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent="chat-conversational-react-description",
    memory=memory,
    verbose=True
)
5. Run the Agent
You can now interact with the agent:
# Example interaction
response = agent.run("What is Python? Can you also calculate 12 * 7?")
print(response)
The agent will:

- Use WikiSearch to explain Python.
- Use Calculator to compute 12 * 7.
- Remember the conversation for future interactions, as shown in the follow-up below.
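Because the conversation is stored in memory, a follow-up question can refer back to the previous turn without repeating it. For example:

# A follow-up that relies on the stored chat history rather than new tool calls.
followup = agent.run("What was the result of the calculation you just did?")
print(followup)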
6. Extending the Workflow
You can extend the agent by:

- Adding more tools (e.g., web scraping, file handling, APIs), as sketched after this list.
- Using advanced memory types for long-term knowledge.
- Customizing prompts to guide reasoning behavior.
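As an example of the first point, a new capability is just a Python function wrapped in another Tool. The sketch below adds a hypothetical current-time tool (the name and description are illustrative, not part of the original setup) and rebuilds the agent so it can choose it during reasoning:

from datetime import datetime

def current_time_tool(query: str) -> str:
    """Return the current date and time (the query is ignored)."""
    return datetime.now().isoformat(timespec="seconds")

tools.append(
    Tool(
        name="CurrentTime",
        func=current_time_tool,
        description="Returns the current date and time."
    )
)

# Re-initialize the agent so the new tool is included in its tool list.
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent="chat-conversational-react-description",
    memory=memory,
    verbose=True
)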
7. Conclusion
With tools and memory, your LLM becomes an autonomous agent capable of:

- Reasoning over tasks.
- Using external resources.
- Maintaining context over multiple interactions.
This structure forms the foundation for agentic AI workflows, which are crucial for building interactive applications like chatbots, virtual assistants, and intelligent research tools.