diff --git a/README.md b/README.md
index 1b16bc4..58f7be9 100644
--- a/README.md
+++ b/README.md
@@ -1,83 +1,81 @@
-What this is
+# What this is
 A tiny Bash client for OpenAI's Chat Completions API. You give it a prompt (via args, a file, or stdin), it calls the API with a chosen model, prints the assistant's reply to stdout, and prints token usage to stderr. It can also stream tokens as they arrive, still reporting usage at the end.
-Key behavior, step by step
- 1. Safety & env
- • set -euo pipefail → exit on errors, unset vars, or failed pipes.
- • Requires curl and jq.
- • Requires OPENAI_API_KEY in the environment; exits if missing.
- 2. Arguments
- • -model <model>: pick the model (default gpt-4o-mini).
- • --list-models: list available models (via GET /v1/models) and exit.
- • -prepend <text>: prefix text added before the main prompt (with a blank line separation).
- • -content-from-file <file>: read the prompt from a file.
- • --stream: stream the response tokens (Server-Sent Events).
- • Prompt text can also be passed as plain arguments, or piped via stdin if no args/file.
- 3. Prompt building
- • Collects the main content from (priority):
- 1. -content-from-file, else 2) remaining command-line words, else 3) stdin (if piped).
- • If -prepend is set, it's added above the main content with two newlines.
- • The combined text is JSON-escaped via jq -Rs . and used as the user message.
- 4. Non‑streaming mode (default)
- • POSTs to https://api.openai.com/v1/chat/completions with:
-
-{ "model": "...", "messages": [{"role": "user", "content": "..."}] }
-
- • Prints the assistant's reply (.choices[0].message.content) to stdout.
- • If present, prints token usage (prompt/completion/total) to stderr as:
-
-tokens: prompt=... completion=... total=...
-
- 5. Streaming mode (--stream)
- • Sends the same request but with "stream": true and
-"stream_options": {"include_usage": true} so the final SSE chunk contains usage.
- • Reads data: lines from the SSE stream:
- • Prints incremental content chunks (.choices[0].delta.content) to stdout as they arrive (no newline until done).
- • When a chunk contains a .usage object, it prints a newline to stdout, then the usage line to stderr.
- • Ensures output ends with a newline even if usage wasn't received for some reason.
- 6. Exit & errors
- • Clear error messages for missing deps, missing key, missing file, or empty prompt.
- • Returns non‑zero on those errors due to set -e.
-
-Practical examples
- • Simple call:
+## Key behavior, step by step
+
+1. Safety & env
+   - set -euo pipefail → exit on errors, unset vars, or failed pipes.
+   - Requires curl and jq.
+   - Requires OPENAI_API_KEY in the environment; exits if missing.
+2. Arguments
+   - -model <model>: pick the model (default gpt-4o-mini).
+   - --list-models: list available models (via GET /v1/models) and exit.
+   - -prepend <text>: prefix text added before the main prompt (separated by a blank line).
+   - -content-from-file <file>: read the prompt from a file.
+   - --stream: stream the response tokens (Server-Sent Events).
+   - Prompt text can also be passed as plain arguments, or piped via stdin if no args/file.
+3. Prompt building
+   - Collects the main content by priority: 1) -content-from-file, else 2) remaining command-line words, else 3) stdin (if piped).
+   - If -prepend is set, it's added above the main content with two newlines.
+   - The combined text is JSON-escaped via jq -Rs . and used as the user message.
+4. Non-streaming mode (default)
+   - POSTs to https://api.openai.com/v1/chat/completions with a body like `{ "model": "...", "messages": [{"role": "user", "content": "..."}] }` (see the first sketch after this list).
+   - Prints the assistant's reply (.choices[0].message.content) to stdout.
+   - If present, prints token usage (prompt/completion/total) to stderr as `tokens: prompt=... completion=... total=...`.
+5. Streaming mode (--stream)
+   - Sends the same request but with "stream": true and "stream_options": {"include_usage": true}, so the final SSE chunk contains usage (see the second sketch after this list).
+   - Reads data: lines from the SSE stream:
+     - Prints incremental content chunks (.choices[0].delta.content) to stdout as they arrive (no newline until done).
+     - When a chunk contains a .usage object, prints a newline to stdout, then the usage line to stderr.
+   - Ensures output ends with a newline even if usage wasn't received for some reason.
+6. Exit & errors
+   - Clear error messages for missing deps, missing key, missing file, or empty prompt.
+   - Returns non-zero on those errors due to set -e.
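+
+For reference, the non-streaming call boils down to a curl + jq pipeline along these lines. This is a sketch, not the script's exact code: it builds the JSON body with jq -n --arg rather than the script's jq -Rs escaping, hard-codes the default model, and assumes OPENAI_API_KEY is exported.
+
+```bash
+# Sketch of the non-streaming path (hypothetical standalone equivalent).
+# Assumes OPENAI_API_KEY is exported and curl/jq are installed.
+prompt="Explain WebSockets to a 10-year-old"
+body=$(jq -n --arg model "gpt-4o-mini" --arg content "$prompt" \
+  '{model: $model, messages: [{role: "user", content: $content}]}')
+resp=$(curl -sS https://api.openai.com/v1/chat/completions \
+  -H "Authorization: Bearer $OPENAI_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d "$body")
+# Reply to stdout, usage to stderr: the same split the script uses.
+jq -r '.choices[0].message.content' <<<"$resp"
+jq -r '"tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)"' <<<"$resp" >&2
+```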
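+
+Streaming adds "stream": true and "stream_options": {"include_usage": true}, then consumes the SSE stream. Again a sketch under the same assumptions, reusing $body from above; the real script's newline/usage bookkeeping is as described in step 5.
+
+```bash
+# Sketch of the SSE loop; reuses $body from the previous sketch.
+curl -sSN https://api.openai.com/v1/chat/completions \
+  -H "Authorization: Bearer $OPENAI_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d "$(jq '. + {stream: true, stream_options: {include_usage: true}}' <<<"$body")" |
+while IFS= read -r line; do
+  [[ $line == data:* ]] || continue       # SSE payload lines start with "data: "
+  data=${line#data: }
+  [[ $data == "[DONE]" ]] && break        # end-of-stream sentinel
+  jq -j '.choices[0].delta.content // empty' <<<"$data"   # token text, no newline
+  if jq -e '.usage' <<<"$data" >/dev/null; then           # final chunk carries usage
+    echo    # terminate the streamed line on stdout
+    jq -r '"tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)"' <<<"$data" >&2
+  fi
+done
+```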
+---
+## Usage examples
+#### Simple call:
+```bash
 OPENAI_API_KEY=... ./askChatGPT.sh "Explain WebSockets to a 10-year-old"
+```
-
- • With prepend (e.g., a system-ish instruction):
-
+#### With prepend (e.g., a system-ish instruction):
+```bash
 ./askChatGPT.sh -prepend "You are concise." "Summarize Bitcoin halvings."
+```
-
- • From file:
-
+#### From file:
+```bash
 ./askChatGPT.sh -content-from-file prompt.txt
+```
-
- • From stdin:
-
+#### From stdin:
+```bash
 echo "Translate this to Finnish:" | ./askChatGPT.sh
+```
-
- • Streaming (prints tokens live, usage at end):
-
+#### Streaming (prints tokens live, usage at end):
+```bash
 ./askChatGPT.sh --stream "Write a limerick about Helsinki"
+```
-
- • List models:
-
+#### List models:
+```bash
 ./askChatGPT.sh --list-models
+```
-
-Notes & gotchas
- • API family: It uses the /v1/chat/completions endpoint. That's fine, but OpenAI also has a newer Responses API; feature parity and latencies may differ.
- • Usage-to-stderr: Handy for logging—e.g., you can capture text with >reply.txt while keeping usage separate.
- • Model names: Ensure gpt-4o-mini is available to your account; otherwise use --list-models to check and pass -model ....
- • SSE parsing: The script filters lines starting with data: and stops at [DONE]. It's tolerant of lines without content.
+## Notes & gotchas
+- API family: It uses the /v1/chat/completions endpoint. That's fine, but OpenAI also has a newer Responses API; feature parity and latencies may differ.
+- Usage-to-stderr: Handy for logging; e.g., you can capture the reply with >reply.txt while keeping usage separate.
+- Model names: Ensure gpt-4o-mini is available to your account; otherwise use --list-models to check and pass -model ...
+- SSE parsing: The script filters lines starting with data: and stops at [DONE]. It's tolerant of lines without content.
 If you want, I can add a -system flag to let you send a separate system message, or add support for temperature / max_tokens.
\ No newline at end of file