# LLaMA Chat Runner
This project provides a simple way to build and run conversational prompts with [llama.cpp](https://github.com/ggerganov/llama.cpp).
It uses PHP templates to format system and user messages into the correct structure for inference, and a Makefile to glue everything together.
## How it Works
- **`llm_chat.php`**: Collects the system message and user/assistant messages, then renders them with `llama_chat_template.php`.
- **`llama_chat_template.php`**: Outputs a structured chat template for LLaMA models.
- **`Makefile`**:
  - Combines `.sys` files into `consolidated_system_message.txt`
  - Pipes messages into `llama.cpp` (`llama-cli`) for inference
  - Saves the model's reply in `agent_message.txt`
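
The flow above can be sketched as a minimal `Makefile`. This is an illustration only: the target names, the `prompt.txt` intermediate, and the `llama-cli` invocation are assumptions, not copied from this repo's actual `Makefile`.

```makefile
# Hypothetical sketch of the build flow described above.
MODEL ?= model.gguf

# Concatenate all .sys fragments into one system message.
consolidated_system_message.txt: $(wildcard *.sys)
	cat $^ > $@

# Render the full chat prompt with the PHP template.
prompt.txt: consolidated_system_message.txt user_message.txt
	php llm_chat.php > $@

# Run inference and capture the reply.
agent_message.txt: prompt.txt
	llama-cli -m $(MODEL) -f prompt.txt > $@

clean:
	rm -f consolidated_system_message.txt prompt.txt agent_message.txt
```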
## Usage
1. Place your `.sys` files (system instructions) in the repo folder.
2. Create `user_message.txt` with your prompt.
3. Run:
```bash
make agent_message.txt
```
4. The model's response is saved in `agent_message.txt`.
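
For illustration, the system-message consolidation step can be reproduced by hand. The `.sys` file names below are invented examples, not files shipped with this repo:

```shell
# Reproduce the Makefile's consolidation step manually,
# in a throwaway directory (example file names).
cd "$(mktemp -d)"
printf 'Be concise.\n'        > 01-style.sys
printf 'Answer in English.\n' > 02-lang.sys
cat *.sys > consolidated_system_message.txt
cat consolidated_system_message.txt
```

Because the shell expands `*.sys` in sorted order, prefixing file names with numbers gives you control over the order of the combined instructions.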
## Requirements
- `llama.cpp` built with `llama-cli`
- PHP CLI
- A compatible LLaMA model (set in the `Makefile`)
## Cleaning Up
To remove generated files:
```bash
make clean
```