Building AI Agents with LangChain

Learn to build AI agents that can use tools, search the web, and complete complex tasks autonomously.

AI agents go beyond simple chatbots: they can reason, use tools, and complete multi-step tasks autonomously. LangChain is one of the most popular frameworks for building them. Here's how to get started.

What is an AI Agent?

An agent is an LLM that can:

  • Receive a goal
  • Break it into steps
  • Use tools to complete those steps
  • Iterate until the task is done

Example: "Find the weather in Tokyo and suggest what to pack." The agent:

  • Searches for Tokyo weather
  • Reads the results
  • Reasons about appropriate clothing
  • Generates packing suggestions
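That loop can be sketched in plain Python without any framework. Everything here (the `TOOLS` registry, the stubbed `decide_next_action`) is illustrative; in a real agent, an LLM call chooses the next action:

```python
# Minimal, framework-free sketch of the agent loop described above.
# decide_next_action is a stand-in for the LLM's decision step.

def get_word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

TOOLS = {"get_word_count": get_word_count}

def decide_next_action(goal, history):
    # Stand-in for the LLM: call the tool once, then finish.
    if not history:
        return ("get_word_count", goal)
    return ("FINISH", f"The text has {history[-1][1]} words.")

def run_agent(goal: str) -> str:
    history = []
    for _ in range(5):  # cap iterations, like AgentExecutor's max_iterations
        action, arg = decide_next_action(goal, history)
        if action == "FINISH":
            return arg
        result = TOOLS[action](arg)  # use a tool to complete the step
        history.append((action, result))
    return "Stopped: iteration limit reached."
```

Frameworks like LangChain handle the hard parts this sketch fakes: prompting the model to pick tools, parsing its output, and feeding results back in.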

LangChain Basics

      Installation:

```bash
pip install langchain langchain-openai
```

      Simple Chain (Not an Agent):

```python
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
chain = prompt | llm

response = chain.invoke({"topic": "artificial intelligence"})
print(response.content)
```

      Creating Your First Agent

      Step 1: Define Tools

```python
from langchain.tools import tool

@tool
def get_word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

@tool
def get_current_time() -> str:
    """Get the current time."""
    from datetime import datetime
    return datetime.now().strftime("%H:%M:%S")
```

      Step 2: Create Agent

```python
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain import hub

# Get a pre-built prompt template
prompt = hub.pull("hwchase17/openai-functions-agent")

# Create the agent with its tools
tools = [get_word_count, get_current_time]
agent = create_openai_functions_agent(llm, tools, prompt)

# Create the executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = agent_executor.invoke({
    "input": "What time is it, and how many words are in 'hello world'?"
})
```

      Built-in Tools

      LangChain provides many pre-built tools:

      Web Search:

```python
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
```

      Wikipedia:

```python
from langchain.tools import WikipediaQueryRun
from langchain.utilities import WikipediaAPIWrapper

wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
```

      Python REPL:

```python
from langchain_experimental.tools import PythonREPLTool

python_repl = PythonREPLTool()
```

      File Operations:

```python
from langchain.tools.file_management import ReadFileTool, WriteFileTool

read_tool = ReadFileTool()
write_tool = WriteFileTool()
```

      Advanced Agent Types

      ReAct Agent (Reasoning + Acting):

```python
from langchain.agents import create_react_agent

prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
```

      Plan-and-Execute Agent:

```python
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

planner = load_chat_planner(llm)
executor = load_agent_executor(llm, tools)
agent = PlanAndExecute(planner=planner, executor=executor)
```

      Memory for Agents

      Conversation Buffer:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
)
```

      Building a Research Agent

      Complete example of an agent that can research topics:

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain.tools import tool, DuckDuckGoSearchRun, WikipediaQueryRun
from langchain.utilities import WikipediaAPIWrapper
from langchain import hub

# Setup
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Tools
search = DuckDuckGoSearchRun()
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

@tool
def save_to_file(content: str) -> str:
    """Save research results to a file."""
    with open("research_results.md", "w") as f:
        f.write(content)
    return "Saved to research_results.md"

tools = [search, wikipedia, save_to_file]

# Create the agent
prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run the research task
result = agent_executor.invoke({
    "input": """
    Research the latest developments in quantum computing. Include:
    1. Recent breakthroughs
    2. Major companies involved
    3. Potential applications
    Save a summary to a file.
    """
})
```

      Streaming Responses

```python
from langchain.callbacks import StreamingStdOutCallbackHandler

llm = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
```

      Error Handling

```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,                 # prevent infinite loops
    early_stopping_method="generate",  # generate a final answer if stuck
    handle_parsing_errors=True,        # gracefully handle parsing errors
)
```

      Custom Tool Best Practices

      Good Tool Design:

```python
@tool
def analyze_sentiment(text: str) -> dict:
    """
    Analyze the sentiment of text.

    Args:
        text: The text to analyze

    Returns:
        Dictionary with 'sentiment' (positive/negative/neutral)
        and 'confidence' (0-1 float)
    """
    # Implementation goes here
    return {"sentiment": "positive", "confidence": 0.85}
```

      Tips:

    • Clear, descriptive docstrings
    • Specific parameter names
    • Predictable return types
    • Handle errors gracefully
    • Keep tools focused (one job)
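The "handle errors gracefully" tip matters more than it looks: a tool that raises kills the agent step, while a tool that returns a readable error message gives the LLM something to reason about and recover from. A minimal sketch, using a hypothetical `divide_numbers` tool body (shown as a plain function; in LangChain you would wrap it with `@tool`):

```python
# Return readable error strings instead of raising, so the agent
# can see what went wrong and adjust its next step.

def divide_numbers(expression: str) -> str:
    """Divide two numbers given as 'a/b'. Returns the result or a readable error."""
    try:
        left, right = expression.split("/")
        return str(float(left) / float(right))
    except ZeroDivisionError:
        return "Error: division by zero. Ask for a non-zero divisor."
    except ValueError:
        return "Error: expected input like '10/2'."
```

`divide_numbers("10/0")` hands the model a recoverable message rather than crashing the run.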

Common Patterns

      Tool Selection:

```python
from langchain.agents import initialize_agent, AgentType

# Let the agent choose from the tools automatically
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
```

      Conditional Logic:

```python
@tool
def conditional_tool(query: str) -> str:
    """Process a query with conditional logic."""
    # process_urgent / process_normal are your own handler functions
    if "urgent" in query.lower():
        return process_urgent(query)
    else:
        return process_normal(query)
```

      Production Considerations

      1. Rate Limiting:

```python
import time

@tool
def rate_limited_search(query: str) -> str:
    """Search with rate limiting."""
    time.sleep(1)  # simple rate limit
    return search.run(query)
```

      2. Logging:

```python
import logging

logging.basicConfig(level=logging.INFO)
# LangChain will log agent steps
```

      3. Cost Control:

    • Set max_iterations
    • Use smaller models for simple tasks
    • Cache tool results when possible
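Caching can be as simple as memoizing a deterministic tool with `functools.lru_cache`, so repeated calls with the same query don't hit a paid backend twice. A sketch, where `cached_search` and the `CALLS` counter are illustrative stand-ins for a real search call:

```python
# Memoize a deterministic tool so repeated agent calls with the
# same input are served from cache instead of re-running the backend.
from functools import lru_cache

CALLS = {"count": 0}  # tracks real backend hits, for illustration

@lru_cache(maxsize=128)
def cached_search(query: str) -> str:
    CALLS["count"] += 1                 # only incremented on a cache miss
    return f"results for {query!r}"     # stand-in for the expensive call

cached_search("quantum computing")
cached_search("quantum computing")      # served from cache; no second hit
```

Only cache tools whose answers don't go stale quickly; a live web search cached for hours can quietly feed the agent outdated results.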
4. Security:

  • Validate tool inputs
  • Sandbox code execution
  • Limit file system access
  • Monitor agent actions
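"Validate tool inputs" and "limit file system access" often come down to a path check before any file tool runs. A sketch, assuming a hypothetical allow-listed `SAFE_DIR` (requires Python 3.9+ for `is_relative_to`):

```python
# Reject paths that resolve outside an allow-listed directory,
# blocking "../" escapes before a file tool touches the filesystem.
from pathlib import Path

SAFE_DIR = Path("/tmp/agent_workspace")  # illustrative allow-listed root

def is_allowed_path(user_path: str) -> bool:
    """True only if user_path resolves inside SAFE_DIR."""
    resolved = (SAFE_DIR / user_path).resolve()
    return resolved.is_relative_to(SAFE_DIR.resolve())
```

A file tool would call this first and return a readable refusal for anything outside the workspace, rather than opening the path.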

    Next Steps

  • Start simple - one or two tools
  • Add complexity gradually - more tools, memory
  • Monitor and iterate - watch what agents do
  • Explore LangGraph - for more complex workflows
  • Check LangSmith - for debugging and monitoring

Agents unlock powerful autonomous capabilities. Start with well-defined tasks and expand as you learn the patterns.
