# llama.any server

This is a server for the llama.any protocol.

## Clients

- **llama.any:** (CLI) https://gitlab.com/niansa/llama_any
- **llama.nds:** (GUI) https://gitlab.com/niansa/llama_nds

## Building and use

1. Install a recent version of Boost.Asio and CMake, as well as a recent C++ compiler
2. Obtain the gpt4all unfiltered llama model
3. Use CMake to configure and build the project (a typical invocation is sketched after this list)
4. Enjoy!
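
For step 3, a minimal CMake invocation might look like the following. This is a sketch, not an exact recipe: the build directory name and build type are assumptions, and the project may define additional options.

```bash
# Configure the project into a separate build directory
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release

# Build using all available cores
cmake --build build --parallel
```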