GenAIHub
Beginner

Chatbot Basics


Learn to build chatbots with LangChain, Ollama, and RAG, progressing from simple conversational bots to complete agents with tools.

Why Chatbot Basics?

Progressive Learning

Five scripts, from simple to advanced, each building on the previous one

100% Open Source

Uses Ollama with local LLMs - no API keys required

Tools & Agents

Learn how LLMs use tools through agent architecture

RAG Integration

Build a knowledge base with FAISS and embeddings

Learning Path

graph LR
  A[File 1: Simple Chatbot] --> B[File 2: Tools & Agents]
  B --> C[File 3: RAG - Vector DB]
  C --> D[File 4: RAG - Search]
  D --> E[File 5: Full Agent]

Step 1: Environment Setup

You can run this lab using Docker, Poetry, or Pip. All methods use Ollama for 100% open-source local LLMs.

Open Source LLM: Uses Ollama (llama3.1) - free and runs locally!

# 1. Make sure Ollama is running on your host
ollama serve

# 2. Pull the required models
ollama pull llama3.1

# 3. Navigate to the directory
cd Handson/ChatBot-Files

# 4. Build and run with docker-compose
docker-compose up
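If you prefer running without Docker, a pip-based setup along these lines should work. The package names below are assumptions inferred from the imports used in the lab scripts, not taken from the lab's own requirements file:

```shell
# 1. Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# 2. Install the libraries the lab scripts import (assumed package names)
pip install langchain langchain-ollama sentence-transformers faiss-cpu numpy

# 3. Make sure Ollama is running and the model is pulled
ollama serve &
ollama pull llama3.1
```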

File 1: Simple Chatbot

The simplest possible chatbot, built with LangChain + Ollama. It demonstrates conversation memory and prompt templates.

from langchain_ollama import OllamaLLM
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Initialize the model (100% local, free)
llm = OllamaLLM(model="llama3.1", temperature=0.0)

# Create memory to maintain context
memory = ConversationBufferMemory()

# Build the conversation chain
conversation = ConversationChain(llm=llm, memory=memory)

# Chat!
response = conversation.predict(input="Hello, how are you?")
print(response)

Key Concept: ConversationBufferMemory stores the full chat history.
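Conceptually, buffer memory is just a growing transcript that gets replayed into every new prompt. A minimal, dependency-free sketch of that idea (an illustration, not LangChain's actual implementation):

```python
class BufferMemory:
    """Toy sketch of buffer-style memory: store every turn, replay all of it."""

    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def as_prompt(self):
        # The full history is injected verbatim into the next prompt
        return "\n".join(f"{s}: {t}" for s, t in self.turns)

memory = BufferMemory()
memory.add("Human", "Hello, how are you?")
memory.add("AI", "Doing well, thanks!")
memory.add("Human", "What did I just say?")
print(memory.as_prompt())
```

Because the whole history is replayed each turn, the prompt grows without bound; that is the trade-off behind windowed or summarizing memory variants.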

File 2: Tools & Agents

Add tools to your chatbot so the LLM can perform actions such as calculations and text summarization.

from langchain.tools import StructuredTool
from langchain.agents import initialize_agent, AgentType

# Define custom tools
def calculate_square(number: int) -> str:
    """Calculate the square of a number."""
    return str(number * number)

square_tool = StructuredTool.from_function(
    func=calculate_square,
    name="calculate_square",
    description="Calculates the square of a number"
)

# Create agent with tools
agent = initialize_agent(
    tools=[square_tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

# Agent decides when to use tools
result = agent.run("Calculate the square of 7")
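Under the hood, ZERO_SHOT_REACT_DESCRIPTION drives a ReAct loop: the LLM emits a Thought and an Action, the framework runs the named tool and feeds an Observation back, and the cycle repeats until the LLM emits a final answer. A stripped-down sketch of that loop, with a scripted stand-in for the LLM (illustration only, not LangChain internals):

```python
def calculate_square(number: int) -> str:
    """Tool: square a number (same tool as in the lab)."""
    return str(number * number)

TOOLS = {"calculate_square": calculate_square}

def scripted_llm(transcript):
    """Stand-in for the model: call the tool once, then answer."""
    if "Observation" not in transcript:
        return ("Thought: I should square the number.\n"
                "Action: calculate_square(7)")
    return "Final Answer: the square of 7 is 49"

def react_loop(question, llm, max_steps=3):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse "Action: tool_name(arg)" and execute the tool
        action = step.split("Action:")[1].strip()
        name, arg = action.split("(")
        result = TOOLS[name](int(arg.rstrip(")")))
        # Feed the tool result back as an Observation
        transcript += f"\n{step}\nObservation: {result}"
    return None

print(react_loop("Calculate the square of 7", scripted_llm))
# prints: the square of 7 is 49
```

The real agent works the same way, except the Thought/Action text comes from llama3.1 and the tool descriptions you wrote are what let it pick the right tool.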

File 3: RAG - Vector DB

Build a knowledge base using embeddings and FAISS for semantic search.

from sentence_transformers import SentenceTransformer
import faiss
import numpy as np

# Load embedding model
model = SentenceTransformer('all-MiniLM-L6-v2')

# Your knowledge base
texts = [
    "LangChain is a framework for LLM applications.",
    "FAISS enables fast similarity search.",
    "RAG reduces hallucinations by grounding responses."
]

# Generate embeddings and L2-normalize them so that inner
# product equals cosine similarity
embeddings = model.encode(texts)
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Create FAISS index
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

# Save for later use (create the target directory first)
import os
os.makedirs("faiss_index", exist_ok=True)
faiss.write_index(index, "faiss_index/faiss.index")

File 4: RAG - Search

Query the saved index by embedding the question and retrieving its nearest neighbors:

def search_in_faiss(query: str, top_k: int = 2):
    """Search the knowledge base using semantic similarity."""
    # Encode query
    query_embedding = model.encode([query])
    query_embedding = query_embedding / np.linalg.norm(query_embedding)
    
    # Search in FAISS
    scores, indices = index.search(query_embedding, top_k)
    
    # Return results
    results = []
    for score, idx in zip(scores[0], indices[0]):
        if score > 0.35:  # Minimum threshold
            results.append({
                "text": texts[idx],
                "score": float(score)
            })
    return results

# Example search
results = search_in_faiss("What is RAG?")
print(results)
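The search above relies on a simple identity: once vectors are L2-normalized, their inner product equals the cosine of the angle between them, which is why IndexFlatIP plus normalization yields cosine scores (and why the 0.35 cutoff is a cosine-similarity threshold). The arithmetic, with plain Python lists standing in for real embeddings:

```python
import math

def normalize(v):
    """Scale a vector to unit length (L2 norm of 1)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    """Inner product; for unit vectors this is cosine similarity."""
    return sum(x * y for x, y in zip(a, b))

a = normalize([3.0, 4.0])
b = normalize([4.0, 3.0])

# Similar directions score close to 1.0; here the result is ~0.96
print(dot(a, b))
```

A vector compared with itself scores exactly 1.0, and orthogonal vectors score 0.0, which is what makes a fixed threshold like 0.35 meaningful across queries.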

File 5: Full Agent

The final script combines conversation, tools, and RAG into one agent.

# This agent has 3 tools:
# 1. calculate_square - Math operations
# 2. generate_text_summary - Text summarization  
# 3. search_in_faiss - Knowledge base retrieval

# summary_tool and rag_search_tool are built with
# StructuredTool.from_function, just like square_tool in File 2
agent = initialize_agent(
    tools=[square_tool, summary_tool, rag_search_tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    memory=memory
)

# The agent decides which tool to use
agent.run("Search in RAG for quality principles")
agent.run("Calculate the square of 15")
agent.run("Summarize this text: ...")

Expected Output

πŸ•’ Execution time: 2.10s

User: Search in RAG for quality principles

πŸ”Ž Query: quality principles
Top 2 similar results (cosine)
============================================================
πŸ… Rank 1 (Score: 0.8123)
We ensure delivery of high-quality, reliable, and resilient 
software solutions...

Agent: Based on the knowledge base, the quality principles include 
delivering high-quality software with reliability and resilience...
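The grounded answer in the transcript above comes from prompt assembly: the retrieved passages are placed into the prompt as context before the question, so the LLM answers from them rather than from its parametric memory. A dependency-free sketch of that step (the prompt wording here is an assumption for illustration, not the lab's actual template):

```python
def build_grounded_prompt(question, results):
    """Assemble a RAG prompt: retrieved passages first, then the question."""
    context = "\n".join(
        f"- {r['text']} (score {r['score']:.2f})" for r in results
    )
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# Results in the same shape search_in_faiss returns
results = [
    {"text": "RAG reduces hallucinations by grounding responses.",
     "score": 0.81},
]
print(build_grounded_prompt("What is RAG?", results))
```

Keeping the retrieval scores in the context is optional; some pipelines drop them, but they can help when debugging why a passage was (or wasn't) used.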


Quick Quiz

Q: What is the main purpose of RAG (Retrieval-Augmented Generation)?

A: It retrieves relevant external documents and adds them to the prompt, grounding the LLM's responses in that knowledge and reducing hallucinations.