How to Use ChatGPT Agent Mode — Plus a Python Agent Starter
Thu Oct 30 2025
ChatGPT’s Agent Mode lets you delegate multi-step tasks — from summarizing research to generating code — using reasoning, tools, and file access.
In this tutorial, we’ll explore both sides:
- How to use Agent Mode inside the ChatGPT UI
- How to build your own Python-powered agent using OpenAI’s Responses API.
Why It Matters
Agentic systems represent a major shift from traditional chat models — they don’t just answer questions; they plan, call tools, and reason their way through goals.
With Agent Mode, you can automate multi-step workflows while retaining visibility and control.
As a developer, you can use the same approach programmatically with Python — defining tools, reading files, and letting the model orchestrate steps autonomously.
Part 1: Using Agent Mode in ChatGPT (No Code)
- Open ChatGPT (GPT-4) and enable Agent Mode from the tools menu (or type /agent).
- Describe a task — for example:
“Collect 5 recent AI hardware launches and draft a 3-paragraph summary.”
- ChatGPT will plan and execute each step, narrating its reasoning and tool usage.
- You can pause, approve, or edit steps anytime for safety and transparency.
💡 Tip: Start small — e.g., “Summarize three OpenAI blog posts and format in Markdown” — before automating bigger workflows.
Part 2: Build a Python Agent with Tools
Prerequisites
Install the OpenAI client and dotenv for environment variables:
pip install openai python-dotenv
Create a .env file with:
OPENAI_API_KEY=sk-...
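Before making any API calls, it can help to confirm the key is actually visible to your process. A minimal sketch using only the standard library (the helper name is mine, not part of the OpenAI SDK):

```python
import os

def api_key_present(env=None) -> bool:
    """Return True if OPENAI_API_KEY is set and non-empty."""
    env = os.environ if env is None else env
    return bool(env.get("OPENAI_API_KEY"))

# Example: check an explicit mapping instead of the real environment
print(api_key_present({"OPENAI_API_KEY": "sk-demo"}))  # True
print(api_key_present({}))                             # False
```

After `load_dotenv()` runs, `api_key_present()` with no arguments checks your real environment.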
Step 1: Create a Basic Agent Loop
# agent_basic.py
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

SYSTEM = """You are an autonomous assistant.
When tools are available and relevant, call them.
Explain your plan briefly before answering."""

def ask(model, messages, tools=None):
    return client.responses.create(
        model=model,
        input=messages,
        tools=tools or [],
    )

if __name__ == "__main__":
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Plan a 2-day AI developer meetup in Chennai."},
    ]
    resp = ask("gpt-4.1-mini", messages)
    print(resp.output_text)
The responses.create() method is OpenAI's unified entry point for chat, reasoning, and tool use.
Step 2: Add a Function Tool (Weather Example)
# tools_weather.py
def get_weather(city: str) -> str:
    # Replace this with your own API logic
    return f"Sunny 31°C in {city} (demo)"
WEATHER_TOOL = {
    "type": "function",
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g., Chennai"}
        },
        "required": ["city"],
    },
}
Note: the Responses API uses a flat tool definition, with name and parameters at the top level, rather than the nested "function" object used by the Chat Completions API.
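To see how the schema connects to the function: the model returns arguments as a JSON string shaped by the schema's parameters, which your code parses and passes as keyword arguments. A minimal sketch of that dispatch step (the sample argument string stands in for real model output):

```python
import json

# Same demo tool as in tools_weather.py
def get_weather(city: str) -> str:
    return f"Sunny 31°C in {city} (demo)"

# The model emits arguments as a JSON string matching the schema
raw_arguments = '{"city": "Chennai"}'  # illustrative model output

args = json.loads(raw_arguments)   # -> {"city": "Chennai"}
result = get_weather(**args)       # keyword args match the schema's properties
print(result)                      # Sunny 31°C in Chennai (demo)
```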
Then, connect it inside your agent:
# agent_with_tool.py
import json
from openai import OpenAI
from tools_weather import get_weather, WEATHER_TOOL

client = OpenAI()

def run_agent(user_msg: str):
    messages = [
        {"role": "system", "content": "You are a helpful agent. Use tools when relevant."},
        {"role": "user", "content": user_msg},
    ]
    # Model decides if it needs a tool
    resp = client.responses.create(
        model="gpt-4.1-mini",
        input=messages,
        tools=[WEATHER_TOOL],
    )

    # The Responses API emits tool invocations as "function_call" output items
    tool_calls = [item for item in resp.output
                  if getattr(item, "type", None) == "function_call"]

    # If the model called a tool, execute it and send back the result
    if tool_calls:
        follow_up = messages + list(resp.output)  # keep the function_call items in context
        for call in tool_calls:
            args = json.loads(call.arguments or "{}")
            if call.name == "get_weather":
                result = get_weather(**args)
            else:
                result = f"Unknown tool: {call.name}"
            follow_up.append({
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": result,
            })
        # Send tool results back for the final reasoning step
        resp2 = client.responses.create(
            model="gpt-4.1-mini",
            input=follow_up,
            tools=[WEATHER_TOOL],
        )
        return resp2.output_text
    return resp.output_text

if __name__ == "__main__":
    print(run_agent("What's the weather in Chennai and suggest a weekend plan?"))
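An if/elif dispatch works for one tool, but a registry dict scales better as you add tools. A minimal sketch of that refactor (the tool names and sample calls are illustrative, not part of the OpenAI SDK):

```python
import json

# Map tool names to Python callables; add a tool by adding an entry
TOOL_REGISTRY = {
    "get_weather": lambda city: f"Sunny 31°C in {city} (demo)",
}

def dispatch(name: str, arguments: str) -> str:
    """Execute a named tool with the JSON-encoded arguments from the model."""
    fn = TOOL_REGISTRY.get(name)
    if fn is None:
        return f"Unknown tool: {name}"
    return fn(**json.loads(arguments or "{}"))

print(dispatch("get_weather", '{"city": "Chennai"}'))  # Sunny 31°C in Chennai (demo)
print(dispatch("book_flight", "{}"))                   # Unknown tool: book_flight
```

With this in place, the loop over tool_calls reduces to a single `dispatch(call.name, call.arguments)` per call.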
Step 3: Add File Context (Memory)
Agents can use uploaded files as context:
# agent_files.py
from openai import OpenAI

client = OpenAI()

# Upload file to assistant context
upload = client.files.create(
    purpose="assistants",
    file=open("agenda_template.md", "rb"),
)
file_id = upload.id

messages = [
    {"role": "system", "content": "Use the attached file as a template."},
    {"role": "user", "content": [
        {"type": "input_text", "text": "Create a 2-day AI meetup plan using this template."},
        {"type": "input_file", "file_id": file_id},
    ]},
]

resp = client.responses.create(model="gpt-4.1-mini", input=messages)
print(resp.output_text)
Now your agent can read documents, follow templates, and reference data contextually.
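For small text templates, an alternative to uploading is to inline the file's contents directly as input_text. A sketch of a helper that builds the same message shape from a local file (the helper and the throwaway filename are mine, for illustration):

```python
from pathlib import Path

def build_file_prompt(task: str, path: str) -> list:
    """Build a Responses-API user message that inlines a local text file."""
    template = Path(path).read_text(encoding="utf-8")
    return [{
        "role": "user",
        "content": [
            {"type": "input_text", "text": task},
            {"type": "input_text", "text": f"Template:\n{template}"},
        ],
    }]

# Example with a throwaway template file
Path("demo_template.md").write_text("# Day 1\n# Day 2\n", encoding="utf-8")
messages = build_file_prompt("Create a 2-day AI meetup plan using this template.",
                             "demo_template.md")
print(messages[0]["content"][1]["text"])
```

The resulting list can be passed straight to client.responses.create(...) as input.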
Step 4: Add Guardrails & Observability
-
Limit tool scope — define clear JSON schemas.
-
Add confirmations for sensitive actions.
-
Log tool calls for debugging.
-
Use “plan → fetch → summarize” steps for better reliability.
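A minimal sketch combining the first three guardrails: a wrapper that logs every tool call and blocks tools marked sensitive unless a confirmation callback approves them (the tool names and callback are illustrative assumptions):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

SENSITIVE_TOOLS = {"send_email", "delete_file"}  # illustrative names

def guarded_call(name: str, arguments: str, fn, confirm=lambda name: False):
    """Log the call; block sensitive tools unless confirm(name) returns True."""
    log.info("tool_call name=%s args=%s", name, arguments)
    if name in SENSITIVE_TOOLS and not confirm(name):
        return f"Blocked: {name} requires confirmation"
    return fn(**json.loads(arguments or "{}"))

# A non-sensitive tool runs; a sensitive tool is blocked without confirmation
print(guarded_call("get_weather", '{"city": "Chennai"}',
                   lambda city: f"Sunny in {city}"))
print(guarded_call("send_email", '{"to": "a@b.c"}',
                   lambda to: "sent"))
```

In an interactive agent, the confirm callback would prompt the user; in a batch job, it can consult an allowlist.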
Apptastic Insight
Agent Mode represents the future of work orchestration — where models can plan, call APIs, and collaborate autonomously. Start small: give your agent one or two tools, test it with real data, and observe how it reasons before scaling.
As OpenAI’s Responses API matures, expect deeper integration with memory, file retrieval, and chained reasoning — turning every developer into a system designer for intelligent agents.