# Exercise 1 - Your First Gemini Call
**Goal:** get the Gemini API working and compare its output to `llama3.2` on the same prompts.
## Before you start
You need an API key from Google AI Studio. It is free. Store it in an environment variable - never in code:

```bash
export GEMINI_API_KEY="your-key-here"
```
Install the SDK:

```bash
pip install google-generativeai
```
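Before writing any model code, it is worth confirming the key actually reached your Python process. A minimal smoke check, assuming you exported the variable in the same shell you run Python from:

```python
import os

# Read the key the same way the real script will; None means the
# export did not happen in this shell (or happened after Python started).
key = os.environ.get("GEMINI_API_KEY")
print("GEMINI_API_KEY found" if key else "GEMINI_API_KEY is not set")
```

If this prints "not set", fix your shell setup before debugging anything in the SDK.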
## Assignment
Open `01_gemini_call.py`.
- Write a `generate_gemini(prompt)` function using `google.generativeai` with model `gemini-2.0-flash`.
- Write a `generate_ollama(prompt)` function using the Ollama HTTP API with `llama3.2`.
- Run both on the same three prompts:
  - A factual question with a clear answer
  - A request to summarise a short paragraph
  - An out-of-context question with no good answer
- Print both responses side by side for each prompt.
- Time each call and print the latency.
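One possible shape for the two functions and the timing helper is sketched below. It assumes Ollama is running locally on its default port (11434); the `timed` helper name is an illustration, not part of the assignment.

```python
import json
import os
import time
import urllib.request


def generate_gemini(prompt: str) -> str:
    # Lazy import so the Ollama path still works without the Google SDK installed.
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")
    return model.generate_content(prompt).text


def generate_ollama(prompt: str) -> str:
    # Ollama's default local endpoint; stream=False returns one JSON object
    # instead of a stream of partial chunks.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": "llama3.2", "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]


def timed(fn, prompt):
    """Run fn(prompt) and return (output, seconds elapsed)."""
    start = time.perf_counter()
    out = fn(prompt)
    return out, time.perf_counter() - start
```

For the side-by-side comparison, loop over your three prompts, call `timed(generate_gemini, p)` and `timed(generate_ollama, p)`, and print both responses and both latencies under each prompt.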
## Thinking questions
- Did the two models give the same answer? Different answers? Which felt more accurate?
- Was Gemini faster or slower than your local `llama3.2`? What does that depend on?
- Your API key is in an environment variable. What would happen if you accidentally committed a file that contained the key as a string?