
Gemini-Powered Chatbot in Python

About the project: This project builds a command-line chatbot in Python, powered by a large language model (LLM). It uses the Gemini API to provide the conversational intelligence.

The single file, gemini_chatbot.py, contains everything you need to run a persistent, multi-turn conversation right from your terminal.
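The Gemini API represents a conversation as a list of turns, where each turn is a dict with a role ("user" or "model") and a parts list of text segments. Here is a minimal sketch of how that history accumulates (the add_turn helper is illustrative, not part of the script itself):

```python
# Sketch of the chat-history format the Gemini API expects.
# Each turn is a dict with a "role" and a "parts" list of text segments.
chat_history = []

def add_turn(role, text):
    """Append one conversational turn to the shared history."""
    chat_history.append({"role": role, "parts": [{"text": text}]})

add_turn("user", "Hello!")
add_turn("model", "Hi there, how can I help?")

# The full list is sent with every request, so the model sees prior context.
print(len(chat_history))        # 2
print(chat_history[0]["role"])  # user
```

Because the whole list is resent on every request, the model can refer back to anything said earlier in the session.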

Project Level: Advanced


How to Run the Chatbot

  1. Save the Code: Copy the code from the Project Structure section below into a file named gemini_chatbot.py.
  2. Install Requests: This script requires the requests library. If you don't have it, install it using pip:

        pip install requests

  3. Run: Execute the script from your terminal:

        python gemini_chatbot.py


You can now start chatting! The bot will remember what you said earlier in the session, allowing for coherent, multi-turn conversations.

Project Structure

This project consists of a single Python script. You can name it gemini_chatbot.py.

Copy the following code into your file. The code is well-commented to help you understand each part of the chatbot functionality.


# gemini_chatbot.py
 
import requests
import json
import time
import os

# --- Configuration ---
# NOTE: The API key is read from the GEMINI_API_KEY environment variable.
# Set it in your shell (e.g. export GEMINI_API_KEY=...) before running.
API_KEY = os.environ.get("GEMINI_API_KEY", "")
API_URL = "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-05-20:generateContent"
MODEL_NAME = "gemini-2.5-flash-preview-05-20"
MAX_RETRIES = 5

# The conversation history stores previous messages to maintain context
chat_history = []

def call_gemini_api(prompt):
    """
    Calls the Gemini API with the current chat history and user prompt.
    Implements exponential backoff for handling rate limit or transient errors.
    """
    global chat_history
    
    # 1. Add the new user prompt to the history
    chat_history.append({
        "role": "user", 
        "parts": [{"text": prompt}]
    })

    # 2. Construct the API payload
    payload = {
        "contents": chat_history,
        "systemInstruction": {
            "parts": [{"text": "You are a friendly, helpful, and concise AI assistant. Keep your responses clear and easy to understand."}]
        }
    }

    headers = {'Content-Type': 'application/json'}
    
    # 3. Implement exponential backoff for robust API calls
    for attempt in range(MAX_RETRIES):
        try:
            # Make the POST request (the timeout avoids hanging indefinitely)
            response = requests.post(f"{API_URL}?key={API_KEY}",
                                     headers=headers,
                                     json=payload,
                                     timeout=30)
            response.raise_for_status()  # Raise an HTTPError for bad responses (4xx or 5xx)

            data = response.json()
            
            # Extract the generated text, guarding against missing fields
            candidates = data.get('candidates') or [{}]
            parts = candidates[0].get('content', {}).get('parts') or [{}]
            if 'text' in parts[0]:
                ai_response_text = parts[0]['text']
                
                # 4. Add the successful AI response to the history
                chat_history.append({
                    "role": "model", 
                    "parts": [{"text": ai_response_text}]
                })
                
                return ai_response_text
            
            else:
                print("\n[AI Error] Could not extract text from the response.")
                print(f"Full response: {data}")
                return "Sorry, I encountered an issue generating a response."

        except requests.exceptions.RequestException as e:
            # Handle connection errors or bad HTTP statuses
            print(f"Attempt {attempt + 1}/{MAX_RETRIES} failed. Error: {e}")
            
            # If this is not the last attempt, wait before retrying
            if attempt < MAX_RETRIES - 1:
                wait_time = 2 ** attempt  # Exponential backoff: 1s, 2s, 4s, 8s
                print(f"Retrying in {wait_time} seconds...")
                time.sleep(wait_time)
            else:
                return "The API is unreachable or failed after several retries. Please check your connection or API key."
    
    return "Failed to get a response from the AI after maximum retries."

def main():
    """
    Main function to run the command-line chat interface.
    """
    print("🤖 Gemini Chatbot CLI")
    print("-----------------------------------")
    print(f"Model: {MODEL_NAME}")
    print("Type 'exit' or 'quit' to end the session.")
    print("-----------------------------------\n")

    while True:
        user_input = input("You: ")
        
        if user_input.lower() in ['quit', 'exit']:
            print("\nGoodbye! Ending the session.")
            break
        
        if not user_input.strip():
            continue

        print("AI: Thinking...")
        
        # Get response from the AI
        response = call_gemini_api(user_input)
        
        # Display the formatted response
        print(f"AI: {response}\n")


if __name__ == "__main__":
    main()
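The text-extraction step above navigates the nested candidates → content → parts structure of a generateContent response. The sketch below shows the same logic in isolation, using a mock response dict (the field layout matches the API's response shape; extract_text is an illustrative helper, not part of the script):

```python
# Mock of the JSON structure returned by a generateContent call.
mock_response = {
    "candidates": [
        {"content": {"role": "model",
                     "parts": [{"text": "Hello from Gemini!"}]}}
    ]
}

def extract_text(data):
    """Safely pull the generated text out of a response dict."""
    candidates = data.get("candidates") or [{}]
    parts = candidates[0].get("content", {}).get("parts") or [{}]
    return parts[0].get("text")

print(extract_text(mock_response))       # Hello from Gemini!
print(extract_text({"candidates": []}))  # None
```

The `or [{}]` fallbacks mean a missing or empty candidates list yields None instead of raising an IndexError, which is why the chatbot can print a friendly error message rather than crash on a malformed response.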

    
  


