What this is
A tiny Bash client for OpenAI's Chat Completions API. You give it a prompt (via args, a file, or stdin), it calls the API with a chosen model, prints the assistant's reply to stdout, and prints token usage to stderr. It can also stream tokens as they arrive, still reporting usage at the end.
Key behavior, step by step
Safety & env
- set -euo pipefail → exit on errors, unset vars, or failed pipes.
- Requires curl and jq.
- Requires OPENAI_API_KEY in the environment; exits if missing.
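The guard section might be structured roughly like this (a sketch with illustrative helper names, not the script's exact code):

```shell
# Illustrative sketch of the dependency/env checks (helper names are ours).
set -u   # part of `set -euo pipefail`: unset variables become errors

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || { echo "error: $1 is required" >&2; return 1; }
}

require_env() {
  # ${!1:-} is bash indirect expansion: the value of the variable named by $1
  [ -n "${!1:-}" ] || { echo "error: $1 is not set" >&2; return 1; }
}

# Demo: succeeds when curl/jq are installed; the env check fails politely here.
require_cmd curl && require_cmd jq && echo "deps ok"
require_env OPENAI_API_KEY || echo "(the real script would exit 1 here)" >&2
```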
Arguments
- -model : pick the model (default gpt-4o-mini).
- --list-models: list available models (via GET /v1/models) and exit.
- -prepend : prefix text added before the main prompt (with a blank line separation).
- -content-from-file : read the prompt from a file.
- --stream: stream the response tokens (Server-Sent Events).
- Prompt text can also be passed as plain arguments, or piped via stdin if no args/file.
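A flag loop like this is typically structured as a `while`/`case` over `$@`; the sketch below is illustrative (variable names are ours, and the script's internals may differ):

```shell
# Illustrative flag loop (not the script's exact code).
model="gpt-4o-mini"; prepend=""; content_file=""; stream=0; list_models=0
args=()

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      -model)             model=$2; shift 2 ;;
      --list-models)      list_models=1; shift ;;
      -prepend)           prepend=$2; shift 2 ;;
      -content-from-file) content_file=$2; shift 2 ;;
      --stream)           stream=1; shift ;;
      *)                  args+=("$1"); shift ;;   # plain words become the prompt
    esac
  done
}

parse_args --stream -model gpt-4o "Write a haiku"
echo "model=$model stream=$stream prompt=${args[*]}"
# → model=gpt-4o stream=1 prompt=Write a haiku
```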
Prompt building
- Collects the main content in priority order:
  1) -content-from-file, else 2) remaining command-line words, else 3) stdin (if piped).
- If -prepend is set, it's added above the main content with two newlines.
- The combined text is JSON-escaped via jq -Rs . and used as the user message.
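The escaping step is easy to check in isolation:

```shell
# What `jq -Rs .` does to the combined prompt: -R reads raw text, -s slurps
# the whole input into one string, and `.` emits it as a quoted JSON string.
prompt=$'You are concise.\n\nSummarize Bitcoin halvings.'
escaped=$(printf '%s' "$prompt" | jq -Rs .)
echo "$escaped"
# → "You are concise.\n\nSummarize Bitcoin halvings."
```

Because newlines and quotes come out as `\n` and `\"`, the result can be spliced directly into the request body as the user message.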
Non‑streaming mode (default)
- POSTs to https://api.openai.com/v1/chat/completions with:
  { "model": "...", "messages": [{"role": "user", "content": "..."}] }
- Prints the assistant's reply (.choices[0].message.content) to stdout.
- If present, prints token usage (prompt/completion/total) to stderr as:
  tokens: prompt=... completion=... total=...
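A minimal version of that flow might look like the sketch below (not the script's exact code). The curl call only runs when OPENAI_API_KEY is set; otherwise a canned response stands in so the jq extraction can still be shown:

```shell
# Sketch of the non-streaming request/response handling.
body=$(jq -n --arg model "gpt-4o-mini" --arg content "Hello" \
  '{model: $model, messages: [{role: "user", content: $content}]}')

if [ -n "${OPENAI_API_KEY:-}" ]; then
  response=$(curl -sS https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$body")
else
  # Canned, illustrative response so the extraction below still runs.
  response='{"choices":[{"message":{"content":"Hi!"}}],"usage":{"prompt_tokens":9,"completion_tokens":2,"total_tokens":11}}'
fi

jq -r '.choices[0].message.content' <<<"$response"   # reply -> stdout
jq -r '"tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)"' \
  <<<"$response" >&2                                 # usage -> stderr
```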
Streaming mode (--stream)
- Sends the same request but with "stream": true and
  "stream_options": {"include_usage": true} so the final SSE chunk contains usage.
- Reads data: lines from the SSE stream:
  - Prints incremental content chunks (.choices[0].delta.content) to stdout as they arrive (no newline until done).
  - When a chunk contains a .usage object, it prints a newline to stdout, then the usage line to stderr.
- Ensures output ends with a newline even if usage wasn't received for some reason.
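The parsing loop can be sketched and exercised without the network by feeding it a fake SSE stream (the real script would read the output of `curl -N` instead of a here-doc; function and variable names here are illustrative):

```shell
# Sketch of the SSE parsing loop, fed fake `data:` lines.
parse_sse() {
  while IFS= read -r line; do
    case "$line" in
      "data: [DONE]") break ;;
      "data: "*)
        chunk=${line#data: }
        # Print the incremental token text, if any (no trailing newline).
        printf '%s' "$(jq -r '.choices[0].delta.content // empty' <<<"$chunk")"
        # The final chunk (with stream_options.include_usage) carries .usage.
        usage=$(jq -r 'if .usage then "tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)" else empty end' <<<"$chunk")
        [ -n "$usage" ] && { echo; echo "$usage" >&2; }
        ;;
    esac
  done
}

parse_sse <<'EOF'
data: {"choices":[{"delta":{"content":"Hel"}}]}
data: {"choices":[{"delta":{"content":"lo"}}]}
data: {"choices":[],"usage":{"prompt_tokens":5,"completion_tokens":2,"total_tokens":7}}
data: [DONE]
EOF
# stdout: "Hello" plus a newline; stderr: tokens: prompt=5 completion=2 total=7
```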
Exit & errors
- Clear error messages for missing deps, missing key, missing file, or empty prompt.
- Returns non‑zero on those errors due to set -e.
Usage examples:
Simple call:
OPENAI_API_KEY=... ./askChatGPT.sh "Explain WebSockets to a 10-year-old"
With prepend (e.g., a system-ish instruction):
./askChatGPT.sh -prepend "You are concise." "Summarize Bitcoin halvings."
From file:
./askChatGPT.sh -content-from-file prompt.txt
From stdin:
echo "Translate this to Finnish:" | ./askChatGPT.sh
Streaming (prints tokens live, usage at end):
./askChatGPT.sh --stream "Write a limerick about Helsinki"
List models:
./askChatGPT.sh --list-models
Notes & gotchas
- API family: It uses the /v1/chat/completions endpoint. That's fine, but OpenAI also has a newer Responses API; feature parity and latencies may differ.
- Usage-to-stderr: Handy for logging; e.g., you can capture the reply with >reply.txt while keeping the usage line separate on stderr.
- Model names: Ensure gpt-4o-mini is available to your account; otherwise use --list-models to check and pass -model ....
- SSE parsing: The script filters lines starting with data: and stops at [DONE]. It's tolerant of lines without content.