# What this is

A tiny Bash client for OpenAI's Chat Completions API. You give it a prompt (via args, a file, or stdin), it calls the API with a chosen model, prints the assistant's reply to stdout, and prints token usage to stderr. It can also stream tokens as they arrive, still reporting usage at the end.

## Key behavior, step by step

1. Safety & env

- set -euo pipefail → exit on errors, unset vars, or failed pipes.
- Requires curl and jq.
- Requires OPENAI_API_KEY in the environment; exits if missing.
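
A minimal sketch of that preamble, assuming typical wording (askChatGPT.sh's exact error messages may differ):

```bash
#!/usr/bin/env bash
set -euo pipefail   # exit on errors, unset vars, or failed pipes

# Fail fast if a required tool is missing.
for dep in curl jq; do
  command -v "$dep" >/dev/null 2>&1 || { echo "error: $dep is required" >&2; exit 1; }
done

# Fail fast if the API key is not set.
: "${OPENAI_API_KEY:?must be set}"
```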

2. Arguments

- -model <name>: pick the model (default gpt-4o-mini).
- --list-models: list available models (via GET /v1/models) and exit.
- -prepend <text>: prefix text added before the main prompt (separated by a blank line).
- -content-from-file <file>: read the prompt from a file.
- --stream: stream the response tokens (Server-Sent Events).
- Prompt text can also be passed as plain arguments, or piped via stdin when no args or file are given.
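
A sketch of how these flags can be parsed; the variable names here are illustrative, not necessarily those used in askChatGPT.sh:

```bash
MODEL="gpt-4o-mini"
PREPEND="" CONTENT_FILE="" STREAM=0 LIST_MODELS=0
ARGS=()

while [ $# -gt 0 ]; do
  case "$1" in
    -model)             MODEL="$2"; shift 2 ;;
    --list-models)      LIST_MODELS=1; shift ;;
    -prepend)           PREPEND="$2"; shift 2 ;;
    -content-from-file) CONTENT_FILE="$2"; shift 2 ;;
    --stream)           STREAM=1; shift ;;
    *)                  ARGS+=("$1"); shift ;;   # plain words become prompt text
  esac
done
```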

3. Prompt building

- Collects the main content by priority: 1) -content-from-file, else 2) remaining command-line words, else 3) stdin (if piped).
- If -prepend is set, it's added above the main content with two newlines.
- The combined text is JSON-escaped via jq -Rs . and used as the user message.
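
Schematically, the priority order and the jq -Rs escape look like this (a sketch reusing the illustrative variables from above):

```bash
if [ -n "$CONTENT_FILE" ]; then
  CONTENT=$(cat "$CONTENT_FILE")            # 1) file wins
elif [ ${#ARGS[@]} -gt 0 ]; then
  CONTENT="${ARGS[*]}"                      # 2) remaining command-line words
elif [ ! -t 0 ]; then
  CONTENT=$(cat)                            # 3) piped stdin
else
  echo "error: empty prompt" >&2; exit 1
fi

if [ -n "$PREPEND" ]; then
  CONTENT="$PREPEND"$'\n\n'"$CONTENT"       # prefix plus a blank line
fi

# jq -Rs reads raw input as one string and prints it JSON-quoted,
# so the text can be spliced into the request body safely.
CONTENT_JSON=$(printf '%s' "$CONTENT" | jq -Rs .)
```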

4. Non-streaming mode (default)

- POSTs to https://api.openai.com/v1/chat/completions with:

  ```
  { "model": "...", "messages": [{"role": "user", "content": "..."}] }
  ```

- Prints the assistant's reply (.choices[0].message.content) to stdout.
- If present, prints token usage (prompt/completion/total) to stderr as:

  ```
  tokens: prompt=... completion=... total=...
  ```
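
Roughly, the non-streaming call boils down to something like this (a sketch under the same assumed variable names, not the script verbatim):

```bash
RESPONSE=$(curl -sS https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$MODEL\", \"messages\": [{\"role\": \"user\", \"content\": $CONTENT_JSON}]}")

# Reply text to stdout, usage line to stderr.
jq -r '.choices[0].message.content' <<<"$RESPONSE"
jq -r '"tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)"' <<<"$RESPONSE" >&2
```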

5. Streaming mode (--stream)

- Sends the same request but with "stream": true and "stream_options": {"include_usage": true}, so the final SSE chunk contains usage.
- Reads data: lines from the SSE stream:
  - Prints incremental content chunks (.choices[0].delta.content) to stdout as they arrive (no newline until done).
  - When a chunk contains a .usage object, prints a newline to stdout, then the usage line to stderr.
  - Ensures output ends with a newline even if usage wasn't received for some reason.
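
The SSE read loop can be sketched like this (assumed details; the real script's parsing may differ slightly):

```bash
BODY="{\"model\": \"$MODEL\", \"stream\": true, \"stream_options\": {\"include_usage\": true}, \"messages\": [{\"role\": \"user\", \"content\": $CONTENT_JSON}]}"

curl -sSN https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY" \
| while IFS= read -r line; do
    case "$line" in
      "data: [DONE]") break ;;
      data:*)
        chunk="${line#data: }"
        # Incremental token, printed without a trailing newline.
        jq -j '.choices[0].delta.content // empty' <<<"$chunk"
        # The final chunk (present because of include_usage) carries the usage object.
        if jq -e '.usage != null' <<<"$chunk" >/dev/null; then
          echo   # finish the stdout line
          jq -r '"tokens: prompt=\(.usage.prompt_tokens) completion=\(.usage.completion_tokens) total=\(.usage.total_tokens)"' <<<"$chunk" >&2
        fi
        ;;
    esac
  done
```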

6. Exit & errors

- Clear error messages for missing deps, missing key, missing file, or empty prompt.
- Returns non-zero on those errors due to set -e.

---

## Usage examples

#### Simple call:

```bash
OPENAI_API_KEY=... ./askChatGPT.sh "Explain WebSockets to a 10-year-old"
```

#### With prepend (e.g., a system-ish instruction):

```bash
./askChatGPT.sh -prepend "You are concise." "Summarize Bitcoin halvings."
```

#### From file:

```bash
./askChatGPT.sh -content-from-file prompt.txt
```

#### From stdin:

```bash
echo "Translate this to Finnish:" | ./askChatGPT.sh
```

#### Streaming (prints tokens live, usage at end):

```bash
./askChatGPT.sh --stream "Write a limerick about Helsinki"
```

#### List models:

```bash
./askChatGPT.sh --list-models
```

## Notes & gotchas

- API family: It uses the /v1/chat/completions endpoint. That's fine, but OpenAI also has a newer Responses API; feature parity and latencies may differ.
- Usage-to-stderr: Handy for logging; you can capture the reply text with >reply.txt while keeping the usage line separate (see the example after this list).
- Model names: Ensure gpt-4o-mini is available to your account; otherwise use --list-models to check and pass -model <name>.
- SSE parsing: The script filters lines starting with data: and stops at [DONE]. It's tolerant of lines without content.
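
For example, to keep the reply and the usage line in separate places:

```bash
./askChatGPT.sh "Explain DNS briefly" > reply.txt                # usage still shows on the terminal
./askChatGPT.sh "Explain DNS briefly" > reply.txt 2> usage.log   # capture both separately
```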

If you want, I can add a -system flag to let you send a separate system message, or add support for temperature / max_tokens.