*Mirror of https://gitlab.com/niansa/libjustlm.git, synced 2025-03-06 20:49:17 +01:00*

# JustLM

Super easy to use library for doing LLaMA/GPT-J stuff!

## Overview

This library implements an easy-to-use interface to both LLaMA and GPT-J, with optional Python bindings.

Context scrolling is automatic and supports a pinned top window bar.

Additionally, "pooling" is implemented: a configurable number of inference instances are kept in RAM, and the least recently used ones are automatically moved to disk, ready for later retrieval.

## Documentation

Literally just read the two header files in include/! The interface couldn't be simpler.

## Credits

Thanks to Georgi Gerganov (ggerganov) for writing the ggml and llama.cpp C libraries, both of which are extremely important parts of this project! Thanks also to Nomic AI for heavily helping to drive this project forward.