Building a Simple Chatbot Using Python’s OpenAI API Wrapper
Introduction
Building chatbots with Python has become remarkably accessible thanks to OpenAI’s API and its official Python wrapper. In this guide, we’ll walk through creating a simple chatbot that communicates through the terminal, processes user input, integrates with external data, and produces AI-generated responses. By the end, you’ll understand how to structure chatbot logic, handle prompts effectively, and design for both performance and scalability.
Section 1: Setting Up Your Environment
Before writing any code, ensure that you have a working Python environment. We’ll use the openai package, which provides a convenient interface for calling OpenAI APIs.
# Create and activate a virtual environment
python -m venv chatbot_env
source chatbot_env/bin/activate # On Windows, use chatbot_env\Scripts\activate
# Install dependencies
pip install openai python-dotenv
Next, create a .env file and add your OpenAI API key:
OPENAI_API_KEY=your_api_key_here
We load this environment variable securely using python-dotenv in our script. This ensures your API key isn’t hard-coded or exposed in version control.
Section 2: Creating a Basic Command-Line Chatbot
Now that the environment is ready, let’s build a simple chatbot that processes user input from the terminal and responds using the OpenAI API.
import os
from openai import OpenAI
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Initialize the OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
def chat():
    print("Welcome to the Python Chatbot! Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        # Generate chatbot response
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": user_input}]
        )
        print("Bot:", response.choices[0].message.content)

if __name__ == "__main__":
    chat()
This snippet uses chat.completions.create() to send user input as a prompt to the GPT model and print the response. The messages parameter allows us to simulate a conversation history if extended later.
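One simple extension is prepending a system message to steer the bot's tone before any user turn. The helper below is a minimal sketch; the persona text and function name are illustrative, not part of the OpenAI library.

```python
def build_messages(system_prompt, user_input):
    """Prepend a system message that defines the bot's persona."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

# Pass the result as the `messages` argument to
# client.chat.completions.create(...)
msgs = build_messages("You are a concise assistant.", "What is Python?")
```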
Section 3: Maintaining Conversation History
A chatbot feels more lifelike when it remembers conversation context. Let’s modify the script to track interactions using a simple message history list.
def chat_with_memory():
    print("Smart Chatbot with Memory — type 'exit' to quit.")
    messages = []
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        messages.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages
        )
        bot_reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": bot_reply})
        print("Bot:", bot_reply)
By maintaining a messages list that alternates between user and assistant roles, we give the model conversational memory, leading to more coherent and context-aware replies.
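One caveat: the history grows without bound, and every call re-sends the full list, so token usage climbs with each turn. A minimal trimming helper (the 20-message cap is an arbitrary choice, and pinning a leading system message is optional) keeps the context bounded:

```python
def trim_history(messages, max_messages=20):
    """Drop the oldest turns once the history exceeds max_messages,
    keeping a leading system message (if any) pinned in place."""
    if len(messages) <= max_messages:
        return list(messages)
    if messages[0]["role"] == "system":
        return [messages[0]] + messages[-(max_messages - 1):]
    return messages[-max_messages:]
```

Call `messages = trim_history(messages)` before each API request to cap the payload.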
Section 4: Integrating External Data Sources
To make our chatbot more useful, we can integrate external data such as weather information, finance data, or internal APIs. Here’s an example that fetches live weather data before sending it to the chatbot.
import requests
def get_weather(city):
    API_KEY = "your_weather_api_key"
    url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={API_KEY}&units=metric"
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        return f"Current temperature in {city} is {data['main']['temp']}°C with {data['weather'][0]['description']}."
    return "Sorry, I couldn’t fetch the weather data right now."
def weather_chat():
    messages = []
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        if 'weather' in user_input.lower():
            # Naive parse: assumes the city is the last word of the input
            city = user_input.split()[-1]
            weather_info = get_weather(city)
            user_input += f"\nExtra Info: {weather_info}"
        messages.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages
        )
        bot_reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": bot_reply})
        print("Bot:", bot_reply)
This technique adds dynamic data to your AI response pipeline, enabling the chatbot to pull real-time context—perfect for weather updates, pricing queries, or support scenarios.
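The keyword check above generalizes naturally to multiple data sources. The sketch below is one way to structure that pattern; the fetcher functions and their return strings are hypothetical stand-ins (in practice you would register real helpers such as get_weather):

```python
def enrich_input(user_input, fetchers):
    """Append context from every data source whose trigger keyword
    appears in the input. `fetchers` maps keyword -> fetch function."""
    extras = [fetch(user_input)
              for keyword, fetch in fetchers.items()
              if keyword in user_input.lower()]
    if extras:
        return user_input + "\nExtra Info: " + " ".join(extras)
    return user_input

# Hypothetical stand-ins for real fetchers:
fetchers = {
    "weather": lambda text: "sunny, 21°C",
}
```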
Section 5: Performance and Optimization Tips
Although our chatbot works, there are several optimization and best-practice steps we can apply:
- Model Selection: Use lightweight models like gpt-3.5-turbo for low-latency tasks, and switch to gpt-4 for complex reasoning.
- Streaming Responses: For better UX, enable streaming in the API call so users see responses as they’re generated.
- Prompt Engineering: Add system messages to define chatbot personality or task behavior.
- Logging and Monitoring: Track latency, token usage, and error rates to manage costs and detect performance issues.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    stream=True
)
for chunk in response:
    # In the v1 client, delta is an object (not a dict) and its
    # content attribute may be None on some chunks
    print(chunk.choices[0].delta.content or "", end="", flush=True)
Streaming responses not only improve responsiveness but also provide a more natural chat experience. Additionally, caching repeated responses for common questions can significantly reduce API usage costs.
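As a sketch of the caching idea, here is a small in-memory cache keyed by a normalized prompt. The normalization scheme (lowercasing and trimming whitespace) is one simple choice among many, and the class name is our own:

```python
import hashlib

class ResponseCache:
    """In-memory cache keyed by a normalized prompt, so trivially
    different phrasings ("Hi!" vs "  hi! ") can share an entry."""
    def __init__(self):
        self._store = {}

    def _key(self, prompt):
        normalized = prompt.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def get(self, prompt):
        return self._store.get(self._key(prompt))

    def put(self, prompt, reply):
        self._store[self._key(prompt)] = reply
```

Check the cache before calling the API and store the reply afterwards; a production setup would also want an eviction policy or TTL.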
Conclusion
We’ve walked through building a basic Python chatbot powered by OpenAI’s API, with features including conversation memory, external data integration, and performance optimizations. From this foundation, you can evolve your chatbot into a web-based assistant, customer support bot, or an automation interface integrated with your internal APIs. With just a few enhancements, you can create a powerful conversational AI suited to real-world applications.