Mirror of https://gitlab.com/niansa/libjustlm.git, synced 2025-03-06 20:49:17 +01:00

JustLM

Super easy to use library for doing LLaMA/GPT-J/MPT stuff!

Overview

This library implements an easy-to-use interface to LLaMA, GPT-J, and MPT, with optional Python bindings.

Context scrolling is automatic and supports a top window bar (a fixed portion at the start of the context that is kept in place while older tokens are scrolled out).

Additionally, "pooling" is implemented: a fixed number of inference instances are kept in RAM, and the least recently used ones are automatically moved to disk, ready for later retrieval.

Documentation

Just read the two header files in include/! The interface couldn't be simpler.

Credits

Thanks to Georgi Gerganov (ggerganov) for writing the ggml and llama.cpp C libraries, both of which are extremely important parts of this project! Thanks also to Nomic AI for heavily helping to drive this project forward.