mirror of https://gitlab.com/niansa/libjustlm.git synced 2025-03-06 20:49:17 +01:00
Commit graph

152 commits

Author  SHA1        Message                                                        Date
niansa  449a25c360  Use option() in CMake                                          2024-03-25 01:19:05 +01:00
niansa  90e54d66d0  Removed cosched support                                        2024-03-25 01:18:37 +01:00
niansa  1a04a0e6d9  Updated llama.cpp-mainline                                     2023-12-25 16:55:30 +01:00
niansa  ef5df1dc31  Updated llama.cpp-mainline                                     2023-11-09 12:51:53 +01:00
niansa  fc5e4f5aa1  Updated llama.cpp-mainline                                     2023-10-04 22:13:48 +02:00
niansa  215db6b9b7  Fully implemented grammar sampling                             2023-09-05 10:22:42 +02:00
niansa  f5314a0dde  Added python bindings for grammar                              2023-09-05 09:27:45 +02:00
niansa  79cf49faae  Implemented grammar sampling and zero-temperature sampling     2023-08-31 19:37:33 +02:00
niansa  3a953ed13a  Convert tokens to text correctly in llama                      2023-08-31 18:23:55 +02:00
niansa  907cea7f9d  Fixed exception if pre_tick is nullptr                         2023-08-31 18:07:42 +02:00
niansa  7cd3899dd0  Check for correct magic value in llama                         2023-08-31 17:57:56 +02:00
niansa  cb683aa8fc  Updated llama.cpp.cmake                                        2023-08-31 17:00:50 +02:00
niansa  5d818e31aa  Call llama_backend_init()/llama_backend_free()                 2023-08-31 16:56:10 +02:00
niansa  e3d52c42b7  Updated llama-mainline and deleted old llama versions          2023-08-31 16:52:38 +02:00
niansa  d8f4efb0c9  Cut off ending from run() result properly                      2023-06-25 01:20:56 +02:00
niansa  08ff1e72e7  Update llama.cpp-mainline                                      2023-06-25 01:18:57 +02:00
niansa  01b0d059ed  Added pre_tick                                                 2023-06-15 18:14:09 +02:00
niansa  bcacfc3d54  Minor CMake fixes                                              2023-06-10 02:04:50 +02:00
niansa  0199db02b7  Added GPU support                                              2023-06-10 00:49:21 +02:00
niansa  e2f7da65e4  Fixed llama.cpp not generating symbols                         2023-06-10 00:38:38 +02:00
niansa  94953cd174  Improve some error handling macros                             2023-06-09 23:53:01 +02:00
niansa  24849804b6  Major CMake improvements                                       2023-06-09 20:01:49 +02:00
niansa  b3bd78b350  Fixups in llama.cpp.cmake                                      2023-06-09 19:43:29 +02:00
niansa  a03558ae89  Expose options                                                 2023-06-09 19:39:24 +02:00
niansa  38b229dab5  Updated to latest functional llama version                     2023-06-09 12:01:41 +02:00
niansa  09e59a9536  Fixed compile errors because of previous commit                2023-05-31 20:22:18 +02:00
niansa  0142db3f7c  Renamed operator ""_MB -> operator ""_MiB                      2023-05-31 20:20:31 +02:00
niansa  2d57ade1b8  Add MSVC support - polyfill unistd                             2023-05-31 19:56:40 +02:00
niansa  4b19bc49a5  Fixed llama.cpp.cmake                                          2023-05-26 13:44:26 +02:00
niansa  53a4623aef  Added mirostat support                                         2023-05-26 00:43:07 +02:00
niansa  ad0b7e3c71  Updated llama.cpp-mainline                                     2023-05-23 13:41:30 +02:00
niansa  24ff52919f  Renamed justlm_llama_old to justlm_llama_230511                2023-05-21 16:13:51 +02:00
niansa  fe850337df  Pass context to llama_sample_repetition_penalty                2023-05-21 15:40:49 +02:00
niansa  e69157764b  Fixed capitalization of justLM_LLAMA_OLD target                2023-05-21 15:39:27 +02:00
niansa  85eb2047cb  Improved llama.cpp version naming scheme                       2023-05-20 16:53:03 +02:00
niansa  9a3952597a  Another abort fix                                              2023-05-20 03:09:25 +02:00
niansa  30a0a77cb2  Fixed an abort()                                               2023-05-20 02:53:32 +02:00
niansa  5feca59be7  Fixed linebreaks and support latest llama.cpp                  2023-05-20 02:25:46 +02:00
niansa  c9dac7cb89  Fixed file type detection                                      2023-05-19 17:45:32 +02:00
niansa  a608135bf7  Removed new llama sampling stub                                2023-05-19 16:39:09 +02:00
niansa  ad1e8a3368  Completed mainline llama implementation                        2023-05-19 16:35:55 +02:00
niansa  b17cc6ffbd  Final fixup step #3                                            2023-05-19 16:20:51 +02:00
niansa  4974338e41  Fixup step #2                                                  2023-05-19 16:18:26 +02:00
niansa  9bf70e3f5d  Renamed llama-mainline to llama_old                            2023-05-19 15:57:17 +02:00
niansa  b5e10d1fa3  Use magic to identify llama models                             2023-05-19 02:40:57 +02:00
niansa  f279b31d5f  Minor improvements in CMake files and dlhandle                 2023-05-18 22:32:06 +02:00
niansa  e489f0f53c  Removed now-dead allocation from mpt_model                     2023-05-18 22:32:06 +02:00
niansa  8fbbf58622  Removed magic_match for llama.cpp                              2023-05-18 17:49:22 +00:00
niansa  88e35fd25d  Fixed output directory                                         2023-05-18 17:48:54 +00:00
niansa  a0ec6f8a11  Fixed . being used instead of ${DIRECTORY} in llama.cpp.cmake  2023-05-17 19:01:08 +00:00