Leveraging Tools in LangChain: Building a Smart Kangala Care Assistant

Author

Ravi Sankar Krothapalli

Published

March 5, 2025

✍️ Note:

You can download the entire notebook from this link: Leveraging Tools in LangChain: Building a Smart Kangala Care Assistant

Introducing LangChain Tools and Agents

This tutorial will guide you through building a smart Kangala care assistant using LangChain. The assistant offers care instructions for Kangalas, imaginary creatures, based on current weather conditions. Along the way, we will explore the fascinating area of tools and agents.

Tools

Tools are useful helpers designed to perform specific tasks. They can retrieve data, process information, or interact with other applications and external APIs. Each tool is built to accept certain inputs and provide results in a clear format.

Agents

Agents manage information flow and decision-making. They utilize tools to collect data, process inputs, and generate responses. Agents can be tailored with specific prompts and logic for various tasks.
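
For intuition, here is a minimal, hypothetical tool (not part of the assistant we build below) defined with LangChain's tool decorator; the rest of this tutorial uses StructuredTool for the same idea with explicit input schemas.

from langchain_core.tools import tool


@tool
def shout(text: str) -> str:
    """Return the input text in upper case."""
    return text.upper()


# Every tool exposes a name, a description, and a typed input schema
print(shout.name)                        # shout
print(shout.description)                 # taken from the docstring
print(shout.invoke({"text": "hello"}))   # HELLO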

Setup the environment

Note

Install the necessary libraries by downloading the requirements.txt file located here

from rich import print
from dotenv import load_dotenv

if loading_envs := load_dotenv():
    print("Loaded environment variables")
Loaded environment variables

Initialize the Chat Model

from langchain.chat_models import init_chat_model

kangala_care_assistant_model = init_chat_model(
    "gpt-3.5-turbo", model_provider="openai", temperature=0.0)

Define input schemas for your tools

These input schemas ensure that the data passed to each tool is validated.

from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    location: str = Field(description="Location to check weather")


class KangalaInfoInput(BaseModel):
    temperature: int = Field(description="Current temperature")
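
Pydantic enforces these schemas at runtime. A quick illustration: constructing a schema with valid data succeeds, while omitting a required field raises a ValidationError.

from pydantic import ValidationError

# Valid input passes validation
print(WeatherInput(location="London, UK"))

# A missing required field raises a ValidationError
try:
    KangalaInfoInput()
except ValidationError as err:
    print(err)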

Retrieve Weather Data

Create a tool to retrieve the current weather data based on the location using the OpenWeatherMap API.

To access the OpenWeatherMap API, you need to sign up for an OpenWeatherMap Account

Use of StructuredTool

Structured tools use schemas to define the expected inputs and outputs. This ensures that the data passed to and from the tool is well-structured and validated, reducing the chances of errors.

import requests
from datetime import datetime, timedelta, timezone
import os
from typing import Dict, Any
from langchain_core.tools import StructuredTool

# Get the OpenWeatherMap API key from environment variables
api_key = os.getenv('OPENWEATHERMAP_API_KEY')


def get_current_time_weather(location: str) -> Dict[str, Any]:
    # Construct the API URL
    url = f"http://api.openweathermap.org/data/2.5/weather?q={location}&appid={api_key}"

    # Make the API request
    response = requests.get(url)

    # Check if the request was successful
    if response.status_code != 200:
        raise Exception(
            f"Error fetching data from OpenWeatherMap API: {response.status_code}")

    # Parse the JSON response
    data = response.json()

    # Extract the timezone offset from the response
    timezone_offset = data['timezone']

    # Extract and convert the temperature from Kelvin to Fahrenheit
    temp_at_loc = data['main']['temp']
    temp_at_loc = round(
        (temp_at_loc - 273.15) * 9/5 + 32, 2)

    # Calculate the current time at the location from the timezone offset
    # (computed here for completeness; this simplified tool returns only the temperature)
    current_time = datetime.now(timezone.utc) + \
        timedelta(seconds=timezone_offset)

    return {
        "temperature": int(temp_at_loc)
    }


# Create a structured tool for retrieving current time and weather
current_time_weather_tool = StructuredTool.from_function(
    func=get_current_time_weather,
    name="CurrentTimeWeather",
    description="Retrieves current time and weather for a given location",
    args_schema=WeatherInput
)
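
Before handing the tool to an agent, you can invoke it directly to confirm the API key and parsing work; the temperature will vary with the live weather.

# Direct invocation, bypassing the agent (output varies with live weather)
print(current_time_weather_tool.invoke({"location": "London, UK"}))
# e.g. {'temperature': 44} (compare with the agent runs below)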

Create Vector Database for Kangala Information

Vector stores are specialized databases that index and retrieve information based on vector representations, capturing the semantic meaning of data. For more information on embeddings, refer to this section in part 1 of this series.

InMemoryVectorStore is a simple in-memory vector store especially useful for small datasets and quick prototyping.

Other popular vector stores include:

  • FAISS: Known for high performance and scalability.

  • Chroma: Easy to use for small to medium datasets.

  • Pinecone: A managed service offering high performance and scalability.

  • LanceDB: Balances performance and ease of use.

  • Weaviate: An open-source and highly extensible vector search engine.

  • pgvector: An open-source extension for PostgreSQL that adds support for vector operations and similarity searches.

# from langchain_community.vectorstores import FAISS
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Sample Kangala fun facts and care information
kangala_docs = [
    "Kagalas need to be fed every 4 hours when the temperature is below 60°F."
    "Kagalas need to be fed every 6 hours when the temperature is between 60°F-75°F. "
    "Kagalas need to be fed every 8 hours when the temperature is above 75°F.",
    "Kangalas' fur changes to a darker shade below 50°F, indicating that it is too cold for them to survive hence, they need to stay indoors at a comfortable temperature. "
    "Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.",
    "When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.",
    "Kangalas are more vocal and active at temperatures between 65°F and 75°F, which is their optimal comfort range."
]

# Create vector database for Kangala information
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,
    chunk_overlap=20,
    separators=["\n", ".", " "])
texts = text_splitter.create_documents(kangala_docs)
embeddings = OpenAIEmbeddings()
kangala_db = InMemoryVectorStore.from_documents(texts, embeddings)
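
To confirm the vector store is populated, you can run a similarity search directly; the exact ranking depends on the embeddings.

# Sanity check: query the vector store directly
for doc in kangala_db.similarity_search(
        "How often should Kangalas be fed in cold weather?", k=2):
    print(doc.page_content)

Because the rest of the code only depends on the similarity_search interface, swapping in another backend such as FAISS (note the commented import above) is typically a one-line change, for example FAISS.from_documents(texts, embeddings), assuming the faiss-cpu package is installed.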

Create Temperature-Aware Retriever Tool

The code below is an oversimplified retriever tool that extracts the relevant documents given the temperature at a location.

from langchain.tools import StructuredTool


class TemperatureAwareRetriever:
    def __init__(self, vector_store, embeddings):
        self.vector_store = vector_store
        self.embeddings = embeddings

    def retrieve_with_temperature(self, temperature):
        # Modify the query to include temperature context
        enhanced_query = f"Temperature is {temperature}°F. Retrieve specific care documents"

        # Perform similarity search with a higher k value
        docs = self.vector_store.similarity_search(enhanced_query, k=30)

        # Filter documents based on temperature
        filtered_docs = [
            doc for doc in docs
            if self._is_relevant_to_temperature(doc.page_content, temperature)
        ]

        # Return filtered docs if not empty, otherwise return original docs
        return filtered_docs or docs

    def _is_relevant_to_temperature(self, text, temperature):
        # Define temperature ranges with their corresponding descriptors
        temperature_ranges = [
            ((0, 50), ["below 50°F", "under 50°F",
             "less than 50°F", "below 60°F"]),
            ((50, 60), ["between 50°F-60°F", "50°F to 60°F",
             "around 50°F", "between 50°F and 60°F", "below 60°F"]),
            ((60, 75), ["between 60°F-75°F", "60°F to 75°F",
             "around 70°F", "below 75°F", "above 60°F"]),
            ((75, 80), ["between 75°F-80°F",
             "75°F to 80°F", "around 75°F", "above 75°F"]),
            ((80, float('inf')), ["above 80°F",
             "over 80°F", "exceed 80°F", "above 75°F", ])
        ]

        # Convert temperature to float for comparison
        temp = float(temperature)

        # Check if temperature falls in the range
        for (low, high), range_descriptors in temperature_ranges:
            if low <= temp < high:
                # Check if any of the range descriptors are in the text
                return any(desc in text for desc in range_descriptors)

        return False


# Create temperature-aware retriever
temp_aware_retriever = TemperatureAwareRetriever(kangala_db, embeddings)

# Create a tool that accepts temperature


def kangala_info_with_temperature(temperature=70):
    # Retrieve documents relevant to the query and temperature
    docs = temp_aware_retriever.retrieve_with_temperature(temperature)

    # Convert retrieved documents to readable text
    return "\n".join([doc.page_content for doc in docs]) if docs else "No specific information found."


# Create a structured tool for retrieving Kangala care information
kangala_info_tool = StructuredTool(
    name="KangalaInfo",
    description="Retrieves Kangala care information based on current temperature and query",
    func=kangala_info_with_temperature,
    args_schema=KangalaInfoInput
)
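
As with the weather tool, it is worth exercising this tool on its own before wiring it into the agent; 44°F is used here purely as an example value.

# Direct invocation with a sample temperature (°F)
print(kangala_info_tool.invoke({"temperature": 44}))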

Create the Agent & Custom Agent Executor

The agent below is designed to provide care instructions for Kangalas based on the current weather conditions.

The agent uses the current_time_weather_tool to retrieve weather data and the kangala_info_tool to provide care information.

The agent defined here is a ReAct agent.

Introducing ReAct Agent

The ReAct (Reasoning and Acting) agent in LangChain is designed to handle complex tasks by reasoning through the steps required and taking appropriate actions. It uses a structured approach to break down tasks into smaller, manageable actions, making it easier to handle multi-step processes.

Key Features:

  • Reasoning: The agent thinks through the steps needed to achieve the goal.

  • Acting: It performs actions based on the reasoning, using available tools.

  • Iterative Process: The agent can iterate through multiple Thought/Action/Observation cycles to refine its approach and reach the final answer.

Example Workflow:

  1. Question: The input question the agent must answer.

  2. Thought: The agent thinks about what to do next.

  3. Action: The agent decides on an action to take.

  4. Action Input: The input to the action.

  5. Observation: The result of the action.

  6. Final Answer: The final answer to the original question.

A CustomAgentExecutor subclass is defined below to handle the tool chaining.

from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.prompts import PromptTemplate
from rich.markdown import Markdown

# Collect the tool names (create_react_agent also fills the {tool_names}
# placeholder in the prompt from the tools passed to it)
tool_names = [tool.name for tool in [
    current_time_weather_tool, kangala_info_tool]]

# Create a custom prompt template with required variables
prompt_template = PromptTemplate.from_template(
    """You are a specialized Kangala care assistant.
Kangalas are imaginary animals whose behavior and care requirements change based on weather conditions and time of day. 

TOOLS:
{tools}

Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action (IMPORTANT: use ONLY the numeric float value for temperature)
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question. Provide current temperature. Print each sentence in a new line and use friendly and fun tone. Use the format below:
                            **The current temperature in <location> is <temperature> degrees Fahrenheit.** \n
                            - <care information 1> \n
                            - <care information 2> \n

Begin!

Question: {input}
Thought:{agent_scratchpad}""")

# Create a custom tool chain function


def prepare_kangala_info_input(input_dict):
    # First, get the weather information
    weather_result = current_time_weather_tool.invoke(input_dict["input"])

    # Extract the temperature as a float
    temperature = float(weather_result.get('temperature'))

    return {"temperature": temperature}


# Create the agent with the correct parameters
agent = create_react_agent(
    kangala_care_assistant_model,
    [current_time_weather_tool, kangala_info_tool],
    prompt=prompt_template
)

# Create a custom agent executor that handles tool chaining
class CustomAgentExecutor(AgentExecutor):
    def invoke(self, input_dict, config=None):
        # Prepare Kangala info input before invoking the agent
        # (illustrative: the prepared value is attached to the inputs below,
        # but the ReAct prompt above does not reference it directly)
        kangala_input = prepare_kangala_info_input(input_dict)

        # Modify the input to include prepared Kangala info
        input_dict['kangala_input'] = kangala_input

        # Call the parent class's invoke method
        return super().invoke(input_dict, config)


# Create the agent executor
agent_executor = CustomAgentExecutor(
    agent=agent,
    tools=[current_time_weather_tool, kangala_info_tool],
    verbose=True,
    handle_parsing_errors=True
)

Invoke Agent Executor and Evaluate Model Responses

The response provides detailed logs of each step in the agent’s reasoning and actions.

This includes the inputs, outputs, and intermediate thoughts, making it easier to debug and understand the agent’s behavior.

This detailed logging is enabled by setting the verbose argument to True in the CustomAgentExecutor.
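
If you prefer to inspect the trace programmatically rather than reading console logs, AgentExecutor also accepts return_intermediate_steps=True, which adds the (action, observation) pairs to the response; a minimal sketch using the executor defined above:

# Sketch: capture the intermediate steps instead of relying only on verbose logs
debug_executor = CustomAgentExecutor(
    agent=agent,
    tools=[current_time_weather_tool, kangala_info_tool],
    verbose=False,
    handle_parsing_errors=True,
    return_intermediate_steps=True
)

result = debug_executor.invoke({"input": "London, UK", "chat_history": []})
for action, observation in result["intermediate_steps"]:
    print(action.tool, action.tool_input, observation)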

# Invoke the agent
response = agent_executor.invoke({
    "input": 'London, UK',
    "chat_history": []  # Empty chat history for first interaction
})

print(Markdown(response["output"]))


> Entering new CustomAgentExecutor chain...
I should start by checking the current time and weather in London, UK to determine the Kangala care requirements.
Action: CurrentTimeWeather
Action Input: "London, UK"{'temperature': 44}The temperature in London, UK is 44 degrees Fahrenheit. Now I need to find out the Kangala care information based on this temperature.
Action: KangalaInfo
Action Input: 44Kangalas' fur changes to a darker shade below 50°F, indicating that it is too cold for them to survive hence, they need to stay indoors at a comfortable temperature
Kangalas need to be fed every 4 hours when the temperature is below 60°F. Kangalas need to be fed every 6 hours when the temperature is between 60°F-75°FI now know the final answer

Final Answer: 
**The current temperature in London, UK is 44 degrees Fahrenheit.** 

- Kangalas' fur changes to a darker shade below 50°F, indicating that it is too cold for them to survive hence, they need to stay indoors at a comfortable temperature

- Kangalas need to be fed every 4 hours when the temperature is below 60°F.

> Finished chain.
The current temperature in London, UK is 44 degrees Fahrenheit.

 • Kangalas' fur changes to a darker shade below 50°F, indicating that it is too cold for them to survive hence, they need to stay indoors at a comfortable temperature
 • Kangalas need to be fed every 4 hours when the temperature is below 60°F.

# Invoke the agent
response = agent_executor.invoke({
    "input": 'Hyderabad, India',
    "chat_history": []  # Empty chat history for first interaction
})

print(Markdown(response["output"]))


> Entering new CustomAgentExecutor chain...
I should start by checking the current time and weather in Hyderabad, India to determine the Kangala care requirements.
Action: CurrentTimeWeather
Action Input: Hyderabad, India{'temperature': 98}The temperature in Hyderabad, India is 98 degrees Fahrenheit. Now I can retrieve Kangala care information based on this temperature.
Action: KangalaInfo
Action Input: 98. Kangalas need to be fed every 8 hours when the temperature is above 75°F.
When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
. Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.I now know the final answer.

**The current temperature in Hyderabad, India is 98 degrees Fahrenheit.** 

- Kangalas need to be fed every 8 hours when the temperature is above 75°F.
- When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
- Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.Invalid Format: Missing 'Action:' after 'Thought:'I need to provide the final answer in the correct format.
Final Answer: 
**The current temperature in Hyderabad, India is 98 degrees Fahrenheit.** 

- Kangalas need to be fed every 8 hours when the temperature is above 75°F.
- When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
- Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.

> Finished chain.
The current temperature in Hyderabad, India is 98 degrees Fahrenheit.

 • Kangalas need to be fed every 8 hours when the temperature is above 75°F.
 • When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
 • Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.

# Invoke the agent
response = agent_executor.invoke({
    "input": 'Singapore, Singapore',
    "chat_history": []  # Empty chat history for first interaction
})

print(Markdown(response["output"]))


> Entering new CustomAgentExecutor chain...
I should start by getting the current time and weather in Singapore to determine the Kangala care information based on the temperature.
Action: CurrentTimeWeather
Action Input: Singapore, Singapore{'temperature': 81}The temperature in Singapore is 81 degrees Fahrenheit. Now I can use this information to retrieve the Kangala care details.
Action: KangalaInfo
Action Input: 81. Kangalas need to be fed every 8 hours when the temperature is above 75°F.
When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
. Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.I now know the final answer.

Final Answer: 
**The current temperature in Singapore is 81 degrees Fahrenheit.** 

- Kangalas need to be fed every 8 hours when the temperature is above 75°F.
- When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
- Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.

> Finished chain.
The current temperature in Singapore is 81 degrees Fahrenheit.

 • Kangalas need to be fed every 8 hours when the temperature is above 75°F.
 • When temperatures exceed 80°F, Kangalas need to be misted with water every 2 hours to prevent overheating and need plenty of water.
 • Kangalas' fur changes to a lighter shade above 80°F, indicating they are starting to get overheated hence, they need to stay indoors at a comfortable temperature.