Agents
TL;DR
Learn how to create different chatbot and agent builds, along with easy-to-use web UIs to facilitate interactions. Dive into the tutorials below to check them out, or keep scrolling to learn more.
- Chatbot: Create a simple chatbot without any memory.
- Agent with Memory: Create an agent with awareness of past conversation history.
- Document Agent: Give your agent a tool to retrieve info from your personal docs.
- Code Agent: Tweak your agent build so that it's specialized to help with your coding needs.
What will we create?
Remember where we started. We created all these local servers that house latent potential to do some crucial work for our agents. Now we can start wiring them into our builds to see how they fit and what we can make with them (as it turns out, some really fun, interesting, and useful AI assistants).
Chatbot
First is the chatbot, with no memory or tools. This will be the local Ollama server that we created in Docker, connected to a chat model in LangChain, and all packaged up in a Gradio web UI that's easy to interact with.
This one will be a blank slate; it'll have no recollection of any conversation history. That makes it impractical as an assistant, but it serves as a good learning opportunity for how to use an Ollama server with LangChain and display the results in a user-friendly manner with Gradio.
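To give a feel for how little code that takes, here's a minimal sketch (assuming the langchain-ollama and gradio packages and a model already pulled into the Ollama server; the model name and port are placeholders):

```python
# chatbot_sketch.py -- memory-less chatbot: Ollama + LangChain + Gradio
# Assumes the Ollama container is listening on localhost:11434 and that a model
# named "llama3" has already been pulled; swap in whatever model you're serving.
import gradio as gr
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

def respond(message, history):
    # No memory: only the latest user message is sent to the model.
    return llm.invoke(message).content

gr.ChatInterface(respond).launch()
```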
Agent with Memory
Next is the agent with memory. This will be the same as the chatbot, but instead of a chat model in LangChain, we're going to use an agent in LangGraph. We'll also give our agent memory so that it has access to our conversation history. This is much more practical and will serve as a clean base for all of our more advanced agent builds.
We're also going to package this one up in a Gradio web UI, but this time we'll add more functionality to accommodate our more complex agent. Specifically, we'll see how to set up individual chat threads that can be managed and selected, so that different conversation topics stay stored and ready for future use. Finally, we'll learn some tricks for writing code that runs faster and more efficiently.
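Here's a rough sketch of what that agent might look like under the hood (assuming the langgraph and langchain-ollama packages; the model name and thread ID are placeholders), with each chat thread in the web UI mapping to its own LangGraph thread_id:

```python
# agent_with_memory_sketch.py -- LangGraph ReAct agent with conversation memory
from langchain_ollama import ChatOllama
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

# The checkpointer persists each conversation under its thread_id,
# which is how the agent "remembers" earlier messages.
agent = create_react_agent(llm, tools=[], checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "cooking-ideas"}}  # one ID per chat thread
agent.invoke({"messages": [("user", "My name is Sam.")]}, config)
reply = agent.invoke({"messages": [("user", "What's my name?")]}, config)
print(reply["messages"][-1].content)  # recalls "Sam" because it's the same thread
```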
Document Agent
This is where we'll learn how to pass our agents tools and see what it looks like when they use them: the doc agent. It's the agent with memory, now equipped with a tool that can query the Milvus vectorstore we created in Docker. We'll use our Gradio web UI to upload the Markdown documents we want analyzed, then interact with our agent to get information from them.
We'll see that in order to use the Milvus vectorstore, we'll need embedding models, which create the special vector representations of our data that are used when searching it for a particular query. We can serve these embedding models with the same Ollama server that we built. Just as when we created the chatbot, we can connect this server to a LangChain object that can then easily be passed to our agent.
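As a hedged sketch of how those pieces could connect (assuming the langchain-ollama and langchain-milvus packages, an embedding model such as nomic-embed-text already pulled into Ollama, and Milvus on its default port; the collection name is a placeholder):

```python
# vectorstore_sketch.py -- Ollama-served embeddings wired to the Milvus vectorstore
from langchain_milvus import Milvus
from langchain_ollama import OllamaEmbeddings

# The embedding model turns text into vectors that Milvus can index and search.
embeddings = OllamaEmbeddings(model="nomic-embed-text", base_url="http://localhost:11434")

vectorstore = Milvus(
    embedding_function=embeddings,
    collection_name="personal_docs",                  # placeholder collection name
    connection_args={"uri": "http://localhost:19530"},  # Milvus server from the Docker setup
)
```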
We'll also add functionality to our web UI so that we can upload and manage all the documents we want to analyze, and we'll learn how to split our documents and store them as chunks so that cleaner, more relevant information is passed to our agents.
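For example, Markdown files can be chunked along their headers before being added to the vectorstore. A small sketch, assuming the langchain-text-splitters package (the headers and sample text are illustrative):

```python
# chunking_sketch.py -- split a Markdown document into header-based chunks
from langchain_text_splitters import MarkdownHeaderTextSplitter

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "title"), ("##", "section")]
)

markdown_text = "# Notes\n## Servers\nOllama runs in Docker.\n## Agents\nLangGraph powers the agent."
chunks = splitter.split_text(markdown_text)  # list of Documents with header metadata

# vectorstore.add_documents(chunks)  # hand the chunks to the Milvus store from above
```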
Remember when we created the Milvus server? To demonstrate how to use a vectorstore, we performed a full-text search, scanning our documents for particular keywords. In this tutorial, the document search tool we pass to our agents will use a more advanced vector-similarity search that captures more nuanced relationships in the data, which means our agents will give us more informed results.
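Continuing the sketch, the vectorstore can then be wrapped as a retriever tool the agent is free to call (the tool name and description are placeholders, and this reuses the vectorstore and agent pieces from the earlier sketches):

```python
# doc_tool_sketch.py -- wrap the Milvus retriever as a tool the agent can call
from langchain.tools.retriever import create_retriever_tool

retriever = vectorstore.as_retriever(search_kwargs={"k": 4})  # top-4 most similar chunks

doc_search = create_retriever_tool(
    retriever,
    name="search_docs",
    description="Search the user's uploaded documents for relevant passages.",
)

# agent = create_react_agent(llm, tools=[doc_search], checkpointer=MemorySaver())
```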
Code Agent
Finally, in the last tutorial of the series, we're going to use LangChain to create a metasearch tool using the SearXNG server that we created in Docker and pass this tool over to our doc agent. We're then going to tweak the agent and tool settings a bit to get specialized coding agents.
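A sketch of what that metasearch tool could look like (assuming the langchain-community package and SearXNG listening on port 8080; the host and tool name are placeholders):

```python
# searxng_tool_sketch.py -- expose the local SearXNG instance as an agent tool
from langchain_community.utilities import SearxSearchWrapper
from langchain_core.tools import tool

searx = SearxSearchWrapper(searx_host="http://localhost:8080")

@tool
def web_search(query: str) -> str:
    """Search the web via the local SearXNG metasearch engine."""
    return searx.run(query)

# agent = create_react_agent(llm, tools=[doc_search, web_search], checkpointer=MemorySaver())
```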
We'll interact with our agents through our familiar web UI, but we'll also add functionality to manage different projects. Finally, we'll learn how to split Python documents into easier-to-digest chunks, just as we did with Markdown documents in the doc agent tutorial.
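As a sketch of that language-aware splitting (assuming the langchain-text-splitters package; the chunk sizes and file name are arbitrary):

```python
# python_chunking_sketch.py -- language-aware splitting for Python source files
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

# Splits along function and class boundaries before falling back to characters.
py_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON,
    chunk_size=500,   # arbitrary sizes; tune to your documents and model
    chunk_overlap=50,
)

with open("example.py") as f:  # hypothetical file to index
    chunks = py_splitter.create_documents([f.read()])
```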
This agent will serve not only as a workable tool to help you code in Python, but also as an example of how to build other types of specialized agents: a job-search agent, a global news analyst, or a personal journal assistant.
After finishing these tutorials, we'll know how to create whatever specialized agents we like, as long as we pass in the proper tools and tweak the agent and tool settings accordingly. All the code will be available, so it really is just that easy!
Check out any of the tutorials above to get started, or get a refresher on how to create all the necessary servers to power our agents in the servers tutorials. If you want to learn techniques for improving the information retrieval of your agents, check out the RAG tutorials.