Fun with AI: LangChain and GPT-3.5 Turbo Unleashed. Meet Kangala!

Author

Ravi Sankar Krothapalli

Published

March 5, 2025

✍️ Note:

You can download the entire notebook from this link: Fun with AI: LangChain and GPT-3.5 Turbo Unleashed. Meet Kangala!

Introduction to LangChain

In this blog post, we will explore the fundamentals of LangChain, highlighting some of its key features and demonstrating how it can facilitate the development of intelligent applications. From setting up your environment to creating engaging content, LangChain simplifies the entire process, making it both straightforward and enjoyable. Join me on this exciting journey and discover how LangChain can elevate your AI projects to the next level!

Setup the environment

To use the OpenAI API and other services securely, you need to create a .env file in the root directory of your project. This file will store your API keys and other sensitive information.

Add the following environment variable to the .env file:

OPENAI_API_KEY=your_openai_api_key_here

By following these steps, you can securely manage your environment variables and keep your sensitive information safe.

Note

Before proceeding, please make sure to install the following libraries:

ipykernel
jupyter
langchain
langchain-community
langchain-openai
langgraph
nbclient
openai
python-dotenv
pyyaml

from rich import print
from dotenv import load_dotenv

# Load the variables from the .env file into the process environment
if load_dotenv():
    print("Loaded environment variables")
Loaded environment variables

Generating fun facts about animals with OpenAI’s GPT-3.5 Turbo

Now, let’s have some fun with OpenAI’s GPT-3.5 Turbo model. We’re going to generate some hilarious animal facts that will make you giggle! 🦥

The chat.completions.create endpoint is used to generate responses from the model based on a given prompt.

This endpoint allows you to:

  • Specify the model (e.g., GPT-3.5 Turbo)

  • Provide messages, including system and user messages

  • Tailor the response with parameters such as temperature and max_tokens (a short sketch follows the example below)

from openai import AsyncOpenAI

llm_model = AsyncOpenAI()

# Invoke the model with the prompt; on the async client,
# chat.completions.create returns a coroutine, so we await it
# (Jupyter notebooks support top-level await)
llm_model_response = await llm_model.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": """You are a helpful assistant. 
                       Your purpose is to share fun facts with a 5 year old.
                       Provide a friendly response and make it funny. 
                       Make sure each sentence appears in a new line and use Markdown and highlighting"""},
        {"role": "user",
         "content": "Tell me 2 fun facts about sloths."}
    ]
)

print(llm_model_response.choices[0].message.content)
### Sure thing! 🦥

Did you know that **sloths** are incredibly slow creatures?  
They move so slowly that algae can actually grow on their fur, making them blend in with the trees!

Also, **sloths** only come down from trees once a week to go to the bathroom.  
They really take their time deciding when nature calls! 🌿
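
For instance, the same endpoint accepts sampling parameters that shape the output. Here is a minimal sketch (the parameter values are illustrative, not from the original notebook):

# Same call with explicit sampling parameters (illustrative values)
tuned_response = await llm_model.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a fun fact about koalas."}],
    temperature=0.9,  # higher values make the output more varied
    max_tokens=100,   # cap the length of the reply
)
print(tuned_response.choices[0].message.content)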

Using LangChain to generate fun facts about animals

LangChain makes working with language models even easier. It’s like having a magic wand that simplifies everything! 🪄

The following code snippet demonstrates how to use LangChain’s ChatOpenAI model to generate fun facts about animals.

Here are some of the features of LangChain:

  • Higher-level abstraction: Simplifies working with language models.

  • Complex workflows: Easier management and integration with other tools.

  • Conversational contexts: Handles contexts effectively.

  • Time-saving: Reduces boilerplate code compared to direct OpenAI API usage.

Initialize the model

from langchain.chat_models import init_chat_model

fun_facts_chat_model = init_chat_model("gpt-3.5-turbo", model_provider="openai")

Crafting the chat prompt message and invoking the model

from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage("""You are a helpful assistant. 
                Your purpose is to share fun facts with a 5 year old.
                Provide a friendly response and make it funny. 
                Make sure each sentence appears in a new line and use Markdown and highlighting"""),
    HumanMessage("Tell me 2 fun facts about sloths."),
]

# Invoke the model with the created prompt and print the response
response = fun_facts_chat_model.invoke(messages)
print(response.content)
Absolutely! 🌟

Did you know that sloths are so slow that algae can actually grow on their fur? It's like having a tiny garden on 
their back! 🌿

Also, sloths only have to go to the bathroom once a week! Just imagine holding it in that long! 🚽

Generating prompts using Prompt Templates

Creating effective prompts can be streamlined with Prompt Templates. These templates let you define reusable, dynamic prompts with placeholders that are filled in with actual values at runtime, making them adaptable to a variety of scenarios.

This keeps prompt creation consistent, flexible, and structured, making your workflow smoother and more efficient.

from langchain_core.prompts import ChatPromptTemplate

# Create a prompt template
prompt_template = ChatPromptTemplate.from_messages([
    ("system", """You are a helpful assistant. 
                Your purpose is to share fun facts with a 5 year old.
                Provide a friendly response and make it funny. 
                Make sure each sentence appears in a new line and use Markdown and highlighting"""),
    ("user", "Tell me 2 fun facts about {animal}.")
])

# Create a prompt with the specified animal
prompt = prompt_template.invoke({"animal": "koalas"})
print(f"printing updated prompt: {prompt.to_messages()[1].content}\n")

# Invoke the model with the created prompt and print the response
response = fun_facts_chat_model.invoke(prompt)
print(response.content)
printing updated prompt: Tell me 2 fun facts about koalas.

### Absolutely! Let's talk about koalas, they are adorable creatures! 🐨

Did you know that koalas sleep for about 18 to 22 hours each day? That's more than a lazy sloth! 😴

Another cool fact is that baby koalas are called "Joeys" just like kangaroos! They love to snuggle in their mom's 
cozy pouch. 🦘

Hope you found those facts as koalaty as I did! 🌿
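
Because the animal name is just a placeholder, the same template can be reused for any creature; a quick sketch (not part of the original notebook):

# Reuse the same template with a different placeholder value
penguin_prompt = prompt_template.invoke({"animal": "penguins"})
penguin_response = fun_facts_chat_model.invoke(penguin_prompt)
print(penguin_response.content)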

Simple RAG Application

Finally, let’s build a simple Retrieval Augmented Generation (RAG) application.

Retrieval Augmented Generation (RAG) is a technique that combines document retrieval with language generation to produce accurate and contextually relevant responses.

Let’s set the stage to demonstrate the power of RAG.

Meet Kangala:

Kangala, generated using Copilot

The Kangala is a whimsical creature with rainbow-colored fur that sparkles in the sunlight and a playful expression that brings joy to everyone who sees it.

Known for its ability to make flowers bloom wherever it goes, the Kangala loves to jump around and play, making it a delightful companion in any magical forest! 🌈🦄

Why RAG?

Since the Kangala is an invented creature, the model has never seen it in its training data. Retrieval-Augmented Generation (RAG) solves this: by combining document retrieval with language generation, it grounds the model’s answers in our own documents, ensuring the fun facts are both accurate and contextually relevant. This lets the application deliver delightful, informative responses that bring the Kangala to life in a magical and entertaining way.

Here’s how you can create fun facts about the Kangala:

  • Document Creation: Store fun facts about the Kangala in an in-memory vector store.

  • Embedding Generation: Use OpenAI’s embedding model to create numerical representations of the documents (a sketch of this step follows the document creation code below).

  • Similarity Search: Retrieve the most relevant documents based on a query.

  • Prompt Construction: Construct a prompt using the retrieved documents.

  • Response Generation: Generate a response from the language model.

Document creation

from langchain_core.documents import Document

documents = [
    Document(
        page_content="Kangalas have bright, rainbow-colored fur that sparkles in the sunlight.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas can make funny, musical sounds that make everyone laugh and dance.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas love to eat sweet fruits and berries, especially magical starberries.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas can jump really high, almost like they have springs in their legs, and they love to play leapfrog.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas can change the color of their fur to match their surroundings, just like a chameleon, making them great at hide and seek.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas have a magical ability to make flowers bloom wherever they go, turning the dry lands into a colorful garden.",
        metadata={"source": "imaginary-animals-doc"},
    ),
]
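
Embedding generation and similarity search

The graph code in the next step refers to a vector_store object, so the documents must be embedded and indexed first. Here is a minimal sketch, assuming LangChain’s InMemoryVectorStore paired with OpenAI embeddings (the original notebook may use a different store):

from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Create an in-memory vector store backed by OpenAI's embedding model
vector_store = InMemoryVectorStore(OpenAIEmbeddings())

# Embed the Kangala documents and add them to the index
vector_store.add_documents(documents=documents)

This also gives us similarity search: vector_store.similarity_search(query) returns the documents closest to the query, which is exactly what the retrieve step below relies on.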

Construct prompt and generate a response

This code sets up a simple application using the LangChain and LangGraph frameworks. It defines a state that includes an animal name, a context of documents, and an answer.

The application has two main functions:

  • Retrieve: This function searches for documents related to the given animal and updates the context with these documents.

  • Generate: This function uses the context to generate an answer based on a predefined prompt template and a chat model.

The application is then compiled into a state graph, which defines the sequence of operations. Finally, the graph is executed with an initial state, and the generated answer is printed. This setup allows for efficient retrieval and generation of information based on the given input.

from langchain_core.documents import Document
from langgraph.graph import START, StateGraph
from typing_extensions import List, TypedDict


# Extend the earlier prompt template with a system message that injects the retrieved context
prompt_template.append(
    ("system", "Context: {context} \n Answer: "))


# Define state for application
class State(TypedDict):
    animal: str
    context: List[Document]
    answer: str


def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["animal"])
    return {"context": retrieved_docs}


def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt_template.invoke(
        {"animal": state["animal"], "context": docs_content})
    response = fun_facts_chat_model.invoke(messages)
    return {"answer": response.content}


# Compile application and test
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()

response = graph.invoke({"animal": "Kangala"})
print(response["answer"])
Oh, did you know that Kangalas have **bright, rainbow-colored fur** that **sparkles** in the sunlight? 🌈

And guess what? Kangalas have a magical power to **make flowers bloom** wherever they go, turning dull areas into a
**colorful garden**! 🌺🌼
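
Because the graph state also carries the retrieved documents, you can inspect exactly what the retriever handed to the model. A small illustrative check (not part of the original notebook):

# The final state includes the "context" key populated by the retrieve step
for doc in response["context"]:
    print(doc.metadata["source"], "->", doc.page_content)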