# What this is
A tiny Bash client for OpenAI's Chat Completions API. You give it a prompt (via args, a file, or stdin), it calls the API with a chosen model, prints the assistant's reply to stdout, and prints token usage to stderr. It can also stream tokens as they arrive, still reporting usage at the end.
## Key behavior, step by step
1. Safety & env
- set -euo pipefail → exit on errors, unset vars, or failed pipes.
- Requires curl and jq.
- Requires OPENAI_API_KEY in the environment; exits if missing.
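The dependency and key checks above can be sketched as small helpers (hypothetical function names; the real script may inline these):

```shell
# check_deps: fail with a clear message if a required tool is missing
check_deps() {
  for dep in "$@"; do
    command -v "$dep" >/dev/null 2>&1 || {
      echo "error: required dependency '$dep' not found" >&2
      return 1
    }
  done
}

# require_key: insist that OPENAI_API_KEY is set and non-empty
require_key() {
  [ -n "${OPENAI_API_KEY:-}" ] || {
    echo "error: OPENAI_API_KEY is not set" >&2
    return 1
  }
}
```

With `set -e` in effect, either helper returning nonzero aborts the script.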
2. Arguments
- -model <name>: pick the model (default gpt-4o-mini).
- --list-models: list available models (via GET /v1/models) and exit.
- -prepend <text>: prefix text added before the main prompt (with a blank line separation).
- -content-from-file <file>: read the prompt from a file.
- --stream: stream the response tokens (Server-Sent Events).
- Prompt text can also be passed as plain arguments, or piped via stdin if no args/file.
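A typical Bash loop for the flags above looks like this (a sketch with hypothetical variable names; the script's actual parsing may differ):

```shell
# Hypothetical flag-parsing loop matching the options listed above.
parse_args() {
  model="gpt-4o-mini"   # default model
  prepend=""
  content_file=""
  stream=0
  list_models=0
  words=()              # leftover words become the prompt text
  while [ $# -gt 0 ]; do
    case "$1" in
      -model)             model="$2"; shift 2 ;;
      -prepend)           prepend="$2"; shift 2 ;;
      -content-from-file) content_file="$2"; shift 2 ;;
      --stream)           stream=1; shift ;;
      --list-models)      list_models=1; shift ;;
      *)                  words+=("$1"); shift ;;
    esac
  done
}
```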
3. Prompt building
- Collects the main content in priority order: 1) -content-from-file, else 2) remaining command-line words, else 3) stdin (if piped).
- If -prepend is set, it's added above the main content with two newlines.
- The combined text is JSON-escaped via jq -Rs . and used as the user message.
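The joining and escaping steps can be sketched like this (assuming `prepend` and `content` already hold the relevant text):

```shell
# Combine optional prepend text with the main content, then JSON-escape it.
prepend="You are concise."
content="Summarize Bitcoin halvings."

if [ -n "$prepend" ]; then
  full="${prepend}"$'\n\n'"${content}"   # blank line between the two parts
else
  full="$content"
fi

# jq -R reads raw text, -s slurps stdin into a single string;
# the result is a valid JSON string literal, newlines included.
escaped=$(printf '%s' "$full" | jq -Rs .)
echo "$escaped"   # → "You are concise.\n\nSummarize Bitcoin halvings."
```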
4. Non-streaming mode (default)
- POSTs to https://api.openai.com/v1/chat/completions with the JSON body `{"model": "...", "messages": [{"role": "user", "content": "..."}]}`.
- Prints the assistant's reply (.choices[0].message.content) to stdout.
- If present, prints token usage (prompt/completion/total) to stderr as: `tokens: prompt=... completion=... total=...`
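A minimal sketch of the non-streaming call (here the body is built with `jq -n --arg`, which escapes the prompt equivalently to the script's `jq -Rs` step):

```shell
# Build the request body safely with jq, then POST it.
model="gpt-4o-mini"
prompt="Say hello in Finnish."

payload=$(jq -n --arg model "$model" --arg content "$prompt" \
  '{model: $model, messages: [{role: "user", content: $content}]}')

# The actual request (requires OPENAI_API_KEY):
#   curl -sS https://api.openai.com/v1/chat/completions \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$payload" \
#   | jq -r '.choices[0].message.content'
echo "$payload"
```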
5. Streaming mode (--stream)
- Sends the same request but with `"stream": true` and `"stream_options": {"include_usage": true}`, so the final SSE chunk contains usage.
- Reads data: lines from the SSE stream:
- Prints incremental content chunks (.choices[0].delta.content) to stdout as they arrive (no newline until done).
- When a chunk contains a .usage object, it prints a newline to stdout, then the usage line to stderr.
- Ensures output ends with a newline even if usage wasn't received for some reason.
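The SSE loop above can be sketched as follows (a simplified stand-alone function; the real script feeds it curl's output rather than a fixture):

```shell
# Parse "data: ..." lines from an SSE stream, printing content deltas as
# they arrive and reporting usage (when present) to stderr.
parse_stream() {
  while IFS= read -r line; do
    case "$line" in
      "data: [DONE]") break ;;
      "data: "*)
        chunk="${line#data: }"
        # incremental token text, printed without a trailing newline (-j)
        printf '%s' "$chunk" | jq -j '.choices[0].delta.content // empty'
        # the final chunk (with include_usage) carries a .usage object
        printf '%s' "$chunk" | jq -r 'select(.usage != null)
          | "tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)"' >&2
        ;;
    esac
  done
  echo   # make sure stdout ends with a newline
}
```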
6. Exit & errors
- Clear error messages for missing deps, missing key, missing file, or empty prompt.
- Returns nonzero on those errors due to set -e.
---
## Usage examples
#### Simple call:
```bash
OPENAI_API_KEY=... ./askChatGPT.sh "Explain WebSockets to a 10-year-old"
```
#### With prepend (e.g., a system-ish instruction):
```bash
./askChatGPT.sh -prepend "You are concise." "Summarize Bitcoin halvings."
```
#### From file:
```bash
./askChatGPT.sh -content-from-file prompt.txt
```
#### From stdin:
```bash
echo "Translate this to Finnish:" | ./askChatGPT.sh
```
#### Streaming (prints tokens live, usage at end):
```bash
./askChatGPT.sh --stream "Write a limerick about Helsinki"
```
#### List models:
```bash
./askChatGPT.sh --list-models
```
## Notes & gotchas
- API family: It uses the /v1/chat/completions endpoint. That's fine, but OpenAI also has a newer Responses API; feature parity and latencies may differ.
- Usage-to-stderr: handy for logging; e.g., capture the reply with `>reply.txt` while the usage line stays separate on stderr.
- Model names: Ensure gpt-4o-mini is available to your account; otherwise use --list-models to check and pass -model ....
- SSE parsing: The script filters lines starting with data: and stops at [DONE]. It's tolerant of lines without content.
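The usage-to-stderr note can be demonstrated with a stand-in command (the real invocation would be `./askChatGPT.sh "..." >reply.txt 2>usage.log`):

```shell
# stdout (the reply) and stderr (the usage line) can be captured separately:
{ echo "the reply"; echo "tokens: prompt=1 completion=1 total=2" >&2; } \
  >reply.txt 2>usage.log
cat reply.txt    # the reply
cat usage.log    # tokens: prompt=1 completion=1 total=2
```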