# LLaMA Chat Runner
This project provides a simple way to build and run conversational prompts with `llama.cpp`. It uses PHP templates to format system and user messages into the correct structure for inference, and a Makefile to glue everything together.
## How it Works
- `llm_chat.php`: Collects the system message and user/assistant messages, then renders them with `llama_chat_template.php`.
- `llama_chat_template.php`: Outputs a structured chat template for LLaMA models.
- `Makefile` (sketched below):
  - Combines `.sys` files into `consolidated_system_message.txt`
  - Pipes messages into `llama.cpp` (`llama-cli`) for inference
  - Saves the model's reply in `agent_message.txt`
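A minimal sketch of what such a Makefile could look like. The targets match the files described above, but the `MODEL` variable, the intermediate `prompt.txt`, and the exact `llama-cli` flags are assumptions, not the project's actual code:

```make
# Hypothetical sketch -- the real Makefile may differ.
# Example model path (an assumption; adjust to your setup).
MODEL ?= models/llama-2-7b-chat.Q4_K_M.gguf

# Merge every .sys file into one consolidated system message.
consolidated_system_message.txt: $(wildcard *.sys)
	cat $^ > $@

# Render the full prompt with PHP, run inference, capture the reply.
# (How llm_chat.php locates the message files is project-specific.)
agent_message.txt: consolidated_system_message.txt user_message.txt
	php llm_chat.php > prompt.txt
	llama-cli -m $(MODEL) -f prompt.txt > $@
```

With rules like these, `make` rebuilds `agent_message.txt` only when a system or user message actually changes.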
## Usage
- Place your `.sys` files (system instructions) in the repo folder.
- Create `user_message.txt` with your prompt.
- Run:

  ```
  make agent_message.txt
  ```

- The model's response will be saved in `agent_message.txt`.

## Requirements

- `llama.cpp` built with `llama-cli`
- PHP CLI
- A compatible LLaMA model (set in the Makefile)
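If your Makefile follows the sketch above, the model can also be overridden per invocation (the `MODEL` variable is an assumption from that sketch, not a documented option):

```
make agent_message.txt MODEL=path/to/your-model.gguf
```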
## Cleaning Up
To remove generated files:
```
make clean
```
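A plausible shape for that rule, assuming it removes the generated files named in this README plus the intermediate `prompt.txt` from the sketch above:

```make
# Hypothetical clean rule; the actual target may remove other files too.
clean:
	rm -f consolidated_system_message.txt prompt.txt agent_message.txt
```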