Launch a Chatbot in Minutes
By Tanu Varshney · Aug 25 · 5 min read · Updated: Sep 7

Build a Streamlit + LangGraph Chatbot (Groq): A Step-by-Step Guide
If you want a lightweight, production‑friendly chatbot with a modern UI, Streamlit makes the front end easy, and LangGraph gives you tool‑using, stateful agents. This guide walks you through a clean setup using Groq for fast, low‑latency inference.
You’ll get:
- A ChatGPT-style Streamlit app powered by Groq (via langchain-groq).
- A minimal LangGraph agent that streams tokens to the UI.
- Clear instructions to manage secrets, run locally, and keep conversation memory.
Prerequisites
- Python 3.10+ recommended
- A terminal (PowerShell on Windows, bash/zsh on macOS/Linux)
- A Groq API key (we'll show you how to get one below)
Tip: Don’t commit your .env file to Git. Add it to .gitignore.
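A minimal .gitignore for this project might look like this (adjust to taste):

```
.env
.venv/
__pycache__/
```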
1) Create a project folder
```bash
mkdir chat-app
cd chat-app
```
2) Create and activate a virtual environment
Windows (PowerShell):

```powershell
python -m venv .venv
.venv\Scripts\Activate.ps1
```

macOS/Linux:

```bash
python3 -m venv .venv
source .venv/bin/activate
```
Why a venv? Keeps project dependencies isolated so other Python projects don’t break.
3) Add a requirements.txt
Create a file named requirements.txt with:
```
streamlit>=1.36
python-dotenv>=1.0
langgraph>=0.2.15
langchain>=0.2
langchain-openai>=0.1.14
langchain-groq>=0.1.6
```
These versions work well together at the time of writing. If you’re mixing in newer packages later, upgrade thoughtfully.
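One way to keep installs reproducible once everything works: upgrade deliberately, then re-pin the exact versions (a suggested workflow, not a requirement):

```bash
pip install -U langgraph        # example: upgrade one package at a time
pip freeze > requirements.txt   # re-pin the versions that actually work
```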
4) Install dependencies
```bash
pip install -r requirements.txt
```
If you see compiler errors on Windows, ensure you’re using a recent Python and have the VC++ Build Tools installed.
5) Create a .env file (store API keys)
In the project root, create a file named .env and add:
```
GROQ_API_KEY=your_groq_key_here
```
How to get a Groq API key
1. Open your browser and go to https://groq.com.
2. Click Console (top right) or go directly to https://console.groq.com.
3. Sign up or log in (Google/GitHub).
4. If prompted, verify your account/email.
5. In the console, click your profile avatar → API Keys.
6. Click Create API Key and copy it.

Important: The key is shown once. Paste it into your .env.
Security best practice: Never hard‑code keys in code. Use .env and environment variables.
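Before building the app, you can verify that the key loads with a quick throwaway snippet (not part of the app itself):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
print("GROQ_API_KEY loaded:", bool(os.getenv("GROQ_API_KEY")))
```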
6) Create chat-app.py (ChatGPT‑like Streamlit app)
This app uses langchain-groq to stream responses from a Groq model and renders a familiar chat UI in Streamlit.
```python
import os
import streamlit as st
from dotenv import load_dotenv
from langchain_groq import ChatGroq
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

# Load environment variables
load_dotenv()

# Page config
st.set_page_config(
    page_title="ChatGPT-like Chat",
    page_icon="🤖",
    layout="wide",
)

# Seed a friendly first message
if "messages" not in st.session_state:
    st.session_state.messages = [
        {"role": "assistant", "content": "Hello Humans! How can I help you today?"}
    ]

# Sidebar controls
with st.sidebar:
    st.title("Settings")
    model_name = st.selectbox(
        "Choose a model",
        ["llama-3.3-70b-versatile", "mixtral-8x7b-32768", "gemma-7b-it"],
    )
    temperature = st.slider("Creativity", 0.0, 1.0, 0.7, 0.1)

    # API key presence check
    if not os.getenv("GROQ_API_KEY"):
        st.error("⚠️ GROQ_API_KEY not found in .env file", icon="⚠️")
    else:
        st.success("✅ GROQ API Key is set", icon="✅")

st.title("🤖 Chatbot App")
st.caption("Powered by Netsetos")

# Render prior messages
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Input bar
if prompt := st.chat_input("Type your message..."):
    # Show user message immediately
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Stream assistant response
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        try:
            chat = ChatGroq(
                model_name=model_name,
                temperature=temperature,
                streaming=True,  # enables .stream(...)
            )

            # Build a history-aware list of messages for the model
            messages = [SystemMessage(content="You are a helpful AI assistant.")]
            for msg in st.session_state.messages:
                if msg["role"] == "user":
                    messages.append(HumanMessage(content=msg["content"]))
                elif msg["role"] == "assistant":
                    messages.append(AIMessage(content=msg["content"]))

            for chunk in chat.stream(messages):  # token-wise chunks
                full_response += chunk.content or ""
                message_placeholder.markdown(full_response + "▌")

            message_placeholder.markdown(full_response)
            st.session_state.messages.append({"role": "assistant", "content": full_response})
        except Exception as e:
            error_msg = f"Error: {str(e)}"
            st.error(error_msg)
            fallback = "I'm sorry, I encountered an error. Please try again."
            message_placeholder.markdown(fallback)
            st.session_state.messages.append({"role": "assistant", "content": fallback})

# Minor style polish
st.markdown(
    """
    <style>
    .stChatFloatingInputContainer { bottom: 20px; }
    .stChatMessage { padding: 1rem; border-radius: 0.5rem; margin-bottom: 0.5rem; }
    [data-testid="stSidebar"] { background-color: #f8f9fa; }
    </style>
    """,
    unsafe_allow_html=True,
)
```
What this does
- Renders a ChatGPT-style chat UI and keeps the full history in st.session_state.
- Rebuilds the message list (system prompt plus every prior turn) on each request, so replies stay context-aware.
- Streams tokens from the selected Groq model into the placeholder as they arrive.
Gotcha: In langchain>=0.2, import messages from langchain_core.messages (not langchain.schema).
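In practice:

```python
# langchain>=0.2
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

# Old path; raises ImportError on recent versions:
# from langchain.schema import HumanMessage
```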
7) Create a LangGraph agent file (streammlitt.py)
This shows a minimal ReAct agent that you’ll later extend with tools. We’ll use InMemorySaver for ephemeral state.
The filename can be anything (e.g., agent_app.py). We’ll stick with streammlitt.py as a separate demo app.
```python
import os
from dotenv import load_dotenv
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent
from langchain_groq import ChatGroq
import streamlit as st

load_dotenv()

st.title("Chatbot App (LangGraph)")

if "messages" not in st.session_state:
    st.session_state.messages = []  # list of tuples (role, content)

checkpointer = InMemorySaver()

# Use a Groq LLM inside the LangGraph agent
llm = ChatGroq(model_name="llama-3.3-70b-versatile", streaming=True)

agent = create_react_agent(
    model=llm,  # pass the instantiated model, not a string
    tools=[],  # add tools later (search, code exec, etc.)
    checkpointer=checkpointer,
    prompt="You are a helpful assistant.",
)


def stream_graph_updates(user_input: str):
    """Stream the agent's response and update the UI."""
    assistant_response = ""
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        for event in agent.stream(
            {"messages": [{"role": "user", "content": user_input}]},
            {"configurable": {"thread_id": 1}},
        ):
            for value in event.values():
                # get the latest chunk
                new_text = value["messages"][-1].content
                assistant_response += new_text
                message_placeholder.markdown(assistant_response)
    st.session_state.messages.append(("assistant", assistant_response))
    return assistant_response


# Render history
for role, message in st.session_state.messages:
    with st.chat_message(role):
        st.markdown(message)

prompt = st.chat_input("What is your question?")
if prompt:
    with st.chat_message("user"):
        st.markdown(prompt)
    st.session_state.messages.append(("user", prompt))
    stream_graph_updates(prompt)
```
What this does
- Wraps a Groq LLM inside a LangGraph ReAct agent.
- Streams agent output to the UI.
- Uses a simple thread_id so you can add memory/checkpoints later.
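Because the agent is compiled with a checkpointer, each thread_id keeps its own conversation state. A small sketch of what that enables (the thread IDs below are made up for illustration):

```python
# Each thread_id gets an isolated, persisted conversation.
config_a = {"configurable": {"thread_id": "user-a"}}
config_b = {"configurable": {"thread_id": "user-b"}}

agent.invoke({"messages": [{"role": "user", "content": "My name is Ada."}]}, config_a)
reply = agent.invoke({"messages": [{"role": "user", "content": "What is my name?"}]}, config_a)
# On thread "user-a" the agent can recall "Ada"; thread "user-b" starts fresh.
```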
8) Run the app(s)
Pick one to start with:
```bash
streamlit run chat-app.py
```
Or the LangGraph agent demo:
```bash
streamlit run streammlitt.py
```
Streamlit will open your default browser or print a local URL like http://localhost:8501.
9) Test a prompt
Type a question in the chat input and confirm you see streamed assistant text and that earlier turns remain visible.
If responses feel slow, try a smaller or faster model.
10) Keep conversation memory (beyond the UI)
Right now, st.session_state keeps messages for the UI, but the model only “sees” what you send it. To make answers context‑aware:
- In chat-app.py, we already build a messages list from the full chat history before calling .stream(...). That gives the model the entire conversation every time.
- In the LangGraph app, you can preserve state across turns by using the checkpointer with unique thread_ids or by storing past exchanges and passing them as input to the graph.
Scaling tip: If chats get long, consider summarizing older turns or using a memory node in your graph.
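As a starting point, here's one minimal way to trim history in chat-app.py before each call (a sketch; the 10-turn cutoff is arbitrary):

```python
def trim_history(messages, max_turns=10):
    """Keep the system message plus the last max_turns user/assistant pairs."""
    system, rest = messages[:1], messages[1:]
    return system + rest[-max_turns * 2:]

# In chat-app.py: chat.stream(trim_history(messages)) instead of chat.stream(messages)
```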
Bonus: Add tools to your LangGraph agent
Once you’re comfortable, add tools (search, code exec, RAG) and pass them to create_react_agent(tools=[...]). Each tool is a function with a name, description, and schema so the agent can call it.
Example (skeleton):
```python
from langchain_core.tools import tool


@tool
def ping(host: str) -> str:
    """Ping a host and return basic reachability results."""
    # implement safely; this is just a placeholder
    return f"Pinging {host}..."


agent = create_react_agent(model=llm, tools=[ping], checkpointer=checkpointer)
```

Troubleshooting
1) Error: 401/403 Unauthorized. Double-check that GROQ_API_KEY is in .env and that you called load_dotenv(). Restart the terminal after editing .env.
2) 429 rate limit or quota errors. You may be hitting free-tier or org limits. Wait a moment, try a smaller model, or check your Groq console quota.
3) ModuleNotFoundError. Confirm you're in the venv (which python / Get-Command python) and run pip install -r requirements.txt again.
4) ImportError: langchain.schema. Use from langchain_core.messages import ... with langchain>=0.2.
5) Blank page or stuck spinner. Check your terminal for Python exceptions. Comment out the custom CSS to rule out rendering issues.
Production tips
- Secrets: Use a secrets manager (Streamlit Community Cloud secrets, GitHub Actions secrets, or env vars on your host).
- Logging: Add logging and capture request/response metadata (but never log raw secrets or user PII).
- Validation: Validate tool inputs; agents will call tools with whatever they think is right (see the sketch after this list).
- Observability: Consider tracing (e.g., LangSmith) to see chains, tokens, and latency.
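For example, a minimal sketch of validating input in the ping tool from the Bonus section (the hostname regex here is illustrative, not a complete validator):

```python
import re

from langchain_core.tools import tool


@tool
def ping(host: str) -> str:
    """Ping a host and return basic reachability results."""
    # The agent may pass arbitrary strings; reject anything that doesn't
    # look like a hostname before doing real work.
    if not re.fullmatch(r"[A-Za-z0-9.-]{1,253}", host):
        return "Error: invalid host name."
    return f"Pinging {host}..."
```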
What’s next?
- Deploy to Streamlit Community Cloud: push to GitHub, click New app, select repo/branch/file (chat-app.py), add GROQ_API_KEY under Secrets.
- Add RAG: wire in a vector DB (FAISS/Chroma) and create a retrieve-then-answer tool.
- Multi-agent workflows: chain specialized agents with LangGraph edges.
Recap
You now have:
- A Streamlit chat UI using Groq (chat-app.py), with history-aware prompts and streaming.
- A minimal LangGraph ReAct agent (streammlitt.py) that's ready for tools and memory.
From here, you can layer in tool use, RAG, evaluation, and finally one‑click deploys. Happy building! 🚀


