Fun with AI: LangChain and GPT-3.5 Turbo Unleashed. Meet Kangala!

LLM Engineering
LangChain
GPT
Getting started with LangChain and GPT-3.5 Turbo β€” build a conversational AI assistant from scratch with prompt templates and memory.
Author

Ravi Sankar Krothapalli

Published

March 5, 2025

✍️ Note:

You can download the entire notebook from this link: Fun with AI: LangChain and GPT-3.5 Turbo Unleashed. Meet Kangala!

Introduction to LangChain

In this blog post, we will explore the fundamentals of LangChain, highlighting some of its key features and demonstrating how it can facilitate the development of intelligent applications. From setting up your environment to creating engaging content, LangChain simplifies the entire process, making it both straightforward and enjoyable. Join me on this exciting journey and discover how LangChain can elevate your AI projects to the next level!

Set up the environment

To use the OpenAI API and other services securely, you need to create a .env file in the root directory of your project. This file will store your API keys and other sensitive information.

Add the following environment variable to the .env file:

OPENAI_API_KEY=your_openai_api_key_here

By following these steps, you can securely manage your environment variables and keep your sensitive information safe.

Note

Before proceeding, please make sure to install the following libraries:

ipykernel
jupyter
langchain
langchain-community
langchain-openai
langgraph
nbclient
openai
python-dotenv
pyyaml
from rich import print
from dotenv import load_dotenv

if load_dotenv():
    print("Loaded environment variables")
Loaded environment variables

Generating fun facts about animals with OpenAI’s GPT-3.5 Turbo

Now, let’s have some fun with OpenAI’s GPT-3.5 Turbo model. We’re going to generate some hilarious animal facts that will make you giggle! πŸ¦₯

The chat.completions.create endpoint is used to generate responses from the model based on a given prompt.

This endpoint allows you to:

  • Specify the model (e.g., GPT-3.5 Turbo)

  • Provide messages, including system and user messages

  • Tailor the response with various parameters

from openai import AsyncOpenAI

llm_model = AsyncOpenAI()

# Invoke the model with the prompt and print the response.
# AsyncOpenAI returns a coroutine, so the call must be awaited.
llm_model_response = await llm_model.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": """You are a helpful assistant. 
                       Your purpose is to share fun facts with a 5 year old.
                       Provide a friendly response and make it funny. 
                       Make sure each sentence appears in a new line and use Markdown and highlighting"""},
        {"role": "user",
         "content": "Tell me 2 fun facts about sloths."}
    ]
)

print(llm_model_response.choices[0].message.content)
**Fun Fact #1:**  
Did you know that sloths are known to be the slowest mammals on Earth? They move so slowly that algae can actually 
grow on their fur, giving them a greenish tint!

**Fun Fact #2:**  
Sloths are such good swimmers that they can hold their breath for up to 40 minutes underwater! They may be slow on 
land, but they sure know how to glide through the water like a pro!

Using LangChain to generate fun facts about animals

LangChain makes working with language models even easier. It’s like having a magic wand that simplifies everything! πŸͺ„

The following code snippet demonstrates how to use LangChain's chat model interface to generate fun facts about animals.

Here are some of LangChain's key features:

  • Higher-level abstraction: Simplifies working with language models.

  • Complex workflows: Easier management and integration with other tools.

  • Conversational contexts: Handles contexts effectively.

  • Time-saving: Reduces boilerplate code compared to direct OpenAI API usage.

Initialize the model

from langchain.chat_models import init_chat_model

fun_facts_chat_model = init_chat_model("gpt-3.5-turbo", model_provider="openai")

Crafting the chat prompt message and invoking the model

from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage("""You are a helpful assistant. 
                Your purpose is to share fun facts with a 5 year old.
                Provide a friendly response and make it funny. 
                Make sure each sentence appears in a new line and use Markdown and highlighting"""),
    HumanMessage("Tell me 2 fun facts about sloths."),
]

# Invoke the model with the created prompt and print the response
response = fun_facts_chat_model.invoke(messages)
print(response.content)
Absolutely! Let me share two fun facts about sloths with you:

**1. Sloths are known for being super slow**  
They are so slow that they only move at a speed of about 0.24 kilometers per hour. That's slower than a snail in a 
race against a tortoise!

**2. Sloths are excellent swimmers**  
Even though they are slow on land, sloths are surprisingly good swimmers. They can hold their breath for up to 40 
minutes while paddling through the water like little sloth mermaids!

Generating prompts using Prompt Templates

Creating effective prompts can be streamlined with Prompt Templates. These templates allow you to define reusable and dynamic prompts with placeholders, making them adaptable for various scenarios.

At runtime, the placeholders are replaced with actual values, so your prompts stay relevant and up to date. The result is prompt creation that is both consistent and adaptable, making your workflow smoother and more efficient.

from langchain.prompts import ChatPromptTemplate

# Create a prompt template
prompt_template = ChatPromptTemplate.from_messages([
    ("system", """You are a helpful assistant. 
                Your purpose is to share fun facts with a 5 year old.
                Provide a friendly response and make it funny. 
                Make sure each sentence appears in a new line and use Markdown and highlighting"""),
    ("user", "Tell me 2 fun facts about {animal}.")
])

# Create a prompt with the specified animal
prompt = prompt_template.invoke({"animal": "koalas"})
print(f"printing updated prompt: {prompt.to_messages()[1].content}\n")

# Invoke the model with the created prompt and print the response
response = fun_facts_chat_model.invoke(prompt)
print(response.content)
printing updated prompt: Tell me 2 fun facts about koalas.

Absolutely, I'd love to share some fun facts about koalas with you! 🐨

Did you know that koalas are often called "bears," but they are actually not bears at all? They are marsupials, 
just like kangaroos and possums! 🦘

Also, koalas have very unique fingerprints, just like humans! So if a koala were to commit a crime, they could 
easily be identified by their fingerprints! πŸ¨πŸ•΅οΈβ€β™‚οΈ

Simple RAG Application

Finally, let’s build a simple Retrieval Augmented Generation (RAG) application.

Retrieval Augmented Generation (RAG) is a technique that combines document retrieval with language generation to produce accurate and contextually relevant responses.

Let’s set the stage to demonstrate the power of RAG.

Meet Kangala:

Kangala, generated using Copilot

The Kangala is a whimsical creature with rainbow-colored fur that sparkles in the sunlight and a playful expression that brings joy to everyone who sees it.

Known for its ability to make flowers bloom wherever it goes, the Kangala loves to jump around and play, making it a delightful companion in any magical forest! πŸŒˆπŸ¦„

Why RAG?

Retrieval-Augmented Generation (RAG) is key to creating engaging content about the whimsical Kangala. By combining document retrieval with language generation, RAG ensures that fun facts are both accurate and contextually relevant. This approach allows the application to deliver delightful and informative responses, making the Kangala come alive in a magical and entertaining way.

Here’s how you can create fun facts about the Kangala:

  • Document Creation: Store fun facts about the Kangala in an in-memory vector store.

  • Embedding Generation: Use OpenAI’s embedding model to create numerical representations of the documents.

  • Similarity Search: Retrieve the most relevant documents based on a query.

  • Prompt Construction: Construct a prompt using the retrieved documents.

  • Response Generation: Generate a response from the language model.

Document creation

from langchain_core.documents import Document

documents = [
    Document(
        page_content="Kangalas have bright, rainbow-colored fur that sparkles in the sunlight.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas can make funny, musical sounds that make everyone laugh and dance.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas love to eat sweet fruits and berries, especially magical starberries.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas can jump really high, almost like they have springs in their legs, and they love to play leapfrog.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas can change the color of their fur to match their surroundings, just like a chameleon, making them great at hide and seek.",
        metadata={"source": "imaginary-animals-doc"},
    ),
    Document(
        page_content="Kangalas have a magical ability to make flowers bloom wherever they go, turning the dry lands into a colorful garden.",
        metadata={"source": "imaginary-animals-doc"},
    ),
]

Construct prompt and generate a response

This code sets up a simple application using the LangChain and LangGraph frameworks. It defines a state that holds an animal name, a context of retrieved documents, and an answer.

The application has two main functions:

  • Retrieve: This function searches for documents related to the given animal and updates the context with these documents.

  • Generate: This function uses the context to generate an answer based on a predefined prompt template and a chat model.

The application is then compiled into a state graph, which defines the sequence of operations. Finally, the graph is executed with an initial state, and the generated answer is printed. This setup allows for efficient retrieval and generation of information based on the given input.

from langchain_core.documents import Document
from langgraph.graph import START, StateGraph
from typing_extensions import List, TypedDict


# Append a system message that injects the retrieved context into the prompt
prompt_template.append(
    {"role": "system", "content": "Context: {context} \n Answer: "})


# Define state for application
class State(TypedDict):
    animal: str
    context: List[Document]
    answer: str


def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["animal"])
    return {"context": retrieved_docs}


def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt_template.invoke(
        {"animal": state["animal"], "context": docs_content})
    response = fun_facts_chat_model.invoke(messages)
    return {"answer": response.content}


# Compile application and test
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()

response = graph.invoke({"animal": "Kangala"})
print(response["answer"])
That's a great choice! 🌈

Did you know Kangalas have bright, rainbow-colored fur that sparkles in the sunlight? 🌟

And they can make funny, musical sounds that make everyone laugh and dance! 🎢

Kangalas can jump really high, almost like they have springs in their legs, and they love to play leapfrog! 🦘

Wrap-up

You now have a complete baseline: prompting, prompt templates, and a simple retrieval-augmented flow with LangGraph orchestration. In part 2, we move from direct generation to tool-using agents.

If this post was useful, you can subscribe for new implementation-first articles.
