LLaMA Chat Runner

This project provides a simple way to build and run conversational prompts with llama.cpp.
It uses PHP templates to format system and user messages into the correct structure for inference, and a Makefile to glue everything together.

How it Works

  • llm_chat.php: Collects the system message and user/assistant messages, then renders them with llama_chat_template.php.
  • llama_chat_template.php: Outputs a structured chat template for LLaMA models.
  • Makefile:
    • Combines .sys files into consolidated_system_message.txt
    • Pipes messages into llama.cpp (llama-cli) for inference
    • Saves the model's reply in agent_message.txt
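The Makefile steps above can be sketched as a small shell session. The file names come from this README; the recipe shape, the model path, and the llama-cli flags (-m, -f) are assumptions — the real ones live in the repo's Makefile.

```shell
# Minimal sketch of the pipeline the Makefile drives (assumed shape).
tmp=$(mktemp -d) && cd "$tmp"
printf 'You are a helpful assistant.\n' > persona.sys
printf 'Answer briefly.\n'              > style.sys

# Step 1: merge every .sys file into the consolidated system message
# (glob order, i.e. lexical order, decides how the files are combined).
cat *.sys > consolidated_system_message.txt

# Steps 2-3 (not run here): render the chat prompt with PHP, then feed it
# to llama-cli and capture the reply. model.gguf is a placeholder path.
#   php llm_chat.php > prompt.txt
#   ./llama-cli -m model.gguf -f prompt.txt > agent_message.txt

cat consolidated_system_message.txt
```

Keeping each stage as a separate Make target means re-running inference only rebuilds agent_message.txt when one of its inputs changed.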

Usage

  1. Place your .sys files (system instructions) in the repo folder.
  2. Create user_message.txt with your prompt.
  3. Run:

   make agent_message.txt

  4. The model's response will be saved in agent_message.txt.

Requirements

  • llama.cpp built with llama-cli
  • PHP CLI
  • A compatible LLaMA model (set in the Makefile)
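The prompt that actually reaches llama-cli is whatever llama_chat_template.php renders. The exact layout depends on the target model; for a Llama-2-style chat model it plausibly looks like the following (a hypothetical sketch, not the template shipped in this repo):

```shell
# Hypothetical rendered prompt for a Llama-2-style chat model; the real
# layout is whatever llama_chat_template.php emits for your model.
cat <<'EOF'
<s>[INST] <<SYS>>
(contents of consolidated_system_message.txt)
<</SYS>>

(contents of user_message.txt) [/INST]
EOF
```

If your model expects a different chat format (e.g. Llama 3 header tokens), only llama_chat_template.php should need to change.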

Cleaning Up

To remove generated files:

make clean