90e54d66d0  Removed CoSched support  (2024-03-25 01:18:37 +01:00)
ef5df1dc31  Updated llama.cpp-mainline  (2023-11-09 12:51:53 +01:00)
fc5e4f5aa1  Updated llama.cpp-mainline  (niansa, 2023-10-04 22:13:48 +02:00)
215db6b9b7  Fully implemented grammar sampling  (2023-09-05 10:22:42 +02:00)
79cf49faae  Implemented grammar sampling and zero-temperature sampling  (niansa, 2023-08-31 19:37:33 +02:00)
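The two grammar commits above track the grammar API that llama.cpp gained in August 2023. Below is a minimal sketch of how grammar-constrained, zero-temperature (greedy) decoding is typically wired against the sampling API of that era; `ctx` and `grammar` are assumed to have been created elsewhere.

```cpp
#include <vector>
#include <llama.h>

// Sketch only: uses the llama.cpp sampling API as it looked in mid/late 2023.
llama_token sample_grammar_greedy(llama_context *ctx, llama_grammar *grammar) {
    float *logits = llama_get_logits(ctx);
    const int n_vocab = llama_n_vocab(ctx);

    // Build the candidate array from the raw logits of the last evaluation
    std::vector<llama_token_data> candidates;
    candidates.reserve(n_vocab);
    for (llama_token id = 0; id < n_vocab; id++) {
        candidates.push_back({id, logits[id], 0.0f});
    }
    llama_token_data_array arr = {candidates.data(), candidates.size(), false};

    // Mask out tokens the grammar forbids, then take the argmax
    // (zero temperature == greedy decoding)
    llama_sample_grammar(ctx, &arr, grammar);
    llama_token tok = llama_sample_token_greedy(ctx, &arr);

    // Advance the grammar state past the token we just emitted
    llama_grammar_accept_token(ctx, grammar, tok);
    return tok;
}
```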
3a953ed13a  Convert tokens to text correctly in llama  (niansa, 2023-08-31 18:23:55 +02:00)
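A hedged sketch of the usual detokenization pattern this kind of fix converges on; `llama_token_to_piece()` has changed signature across llama.cpp releases, so treat this as the shape of the fix rather than the exact call. A negative return value reports the required buffer size.

```cpp
#include <string>
#include <llama.h>

std::string token_to_text(llama_context *ctx, llama_token token) {
    std::string piece(8, '\0');
    int n = llama_token_to_piece(llama_get_model(ctx), token, piece.data(), (int)piece.size());
    if (n < 0) {
        // Buffer was too small; retry with the reported size
        piece.resize(-n);
        n = llama_token_to_piece(llama_get_model(ctx), token, piece.data(), (int)piece.size());
    }
    piece.resize(n);
    return piece;
}
```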
907cea7f9d  Fixed exception if pre_tick is nullptr  (niansa, 2023-08-31 18:07:42 +02:00)
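The likely shape of this fix, sketched under assumptions: `pre_tick` (introduced in commit 01b0d059ed below) is an optional user callback, so it has to be checked before being invoked. Its signature here is a guess, not the project's actual type.

```cpp
#include <functional>
#include <string>

// Assumed shape of the callback; the real pre_tick type may differ.
std::function<bool (const char *)> pre_tick;

void on_tick(const std::string &generated) {
    // The fix: invoke pre_tick only when it is actually set, instead of
    // calling an empty std::function and raising std::bad_function_call.
    if (pre_tick && !pre_tick(generated.c_str())) {
        return; // callback asked generation to stop
    }
    // ... continue with the normal generation step ...
}
```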
7cd3899dd0  Check for correct magic value in llama  (niansa, 2023-08-31 17:57:56 +02:00)
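A minimal sketch of such a file-magic sanity check before handing a model file to the loader. "GGUF" matches the llama.cpp model format current at the commit date; older files used magics such as "ggjt". Which magic this commit actually checks is an assumption.

```cpp
#include <cstdio>
#include <cstring>

// Read the first four bytes and compare them to the expected file magic.
bool has_valid_magic(const char *path) {
    std::FILE *f = std::fopen(path, "rb");
    if (!f) return false;
    char magic[4] = {};
    const size_t n = std::fread(magic, 1, sizeof(magic), f);
    std::fclose(f);
    return n == sizeof(magic) && std::memcmp(magic, "GGUF", 4) == 0;
}
```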
5d818e31aa  Call llama_backend_init()/llama_backend_free()  (niansa, 2023-08-31 16:56:10 +02:00)
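A sketch of the global setup/teardown pairing this commit introduced: the llama.cpp API of that era expected `llama_backend_init()` once per process before any model is loaded, and `llama_backend_free()` after the last one is gone.

```cpp
#include <llama.h>

int main() {
    // One-time process-wide setup; the bool enabled NUMA optimizations
    // in the llama.cpp API of that era.
    llama_backend_init(false /* numa */);

    // ... load models, create contexts, run inference ...

    // Matching teardown once the last model/context is destroyed
    llama_backend_free();
    return 0;
}
```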
e3d52c42b7  Updated llama-mainline and deleted old llama versions  (niansa, 2023-08-31 16:52:38 +02:00)
d8f4efb0c9  Cut off ending from run() result properly  (niansa, 2023-06-25 01:20:56 +02:00)
01b0d059ed  Added pre_tick  (niansa, 2023-06-15 18:14:09 +02:00)
0199db02b7  Added GPU support  (niansa, 2023-06-10 00:49:21 +02:00)
94953cd174  Improve some error handling macros  (niansa, 2023-06-09 23:53:01 +02:00)
53a4623aef  Added mirostat support  (niansa, 2023-05-26 00:43:07 +02:00)
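Mirostat support at this point in time maps onto llama.cpp's mirostat v2 sampler. A sketch under assumptions, with typical default values; `mu` must persist across sampling calls.

```cpp
#include <llama.h>

// Persisted across calls; conventionally initialized to 2 * tau.
static float mirostat_mu = 10.0f;

llama_token sample_mirostat(llama_context *ctx, llama_token_data_array *candidates) {
    const float tau = 5.0f; // target surprise
    const float eta = 0.1f; // learning rate
    llama_sample_temperature(ctx, candidates, 0.8f); // apply temperature first
    return llama_sample_token_mirostat_v2(ctx, candidates, tau, eta, &mirostat_mu);
}
```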
fe850337df  Pass context to llama_sample_repetition_penalty  (niansa, 2023-05-21 15:40:49 +02:00)
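The call this commit fixed, sketched against the llama.cpp sampling API of that time, where the context became the first argument; `last_tokens` is the window of recent output the penalty is applied over.

```cpp
#include <vector>
#include <llama.h>

void apply_repeat_penalty(llama_context *ctx, llama_token_data_array *candidates,
                          const std::vector<llama_token> &last_tokens) {
    llama_sample_repetition_penalty(ctx, candidates,
                                    last_tokens.data(), last_tokens.size(),
                                    1.1f /* typical penalty */);
}
```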
85eb2047cb  Improved llama.cpp version naming scheme  (niansa, 2023-05-20 16:53:03 +02:00)
9a3952597a  Another abort fix  (niansa, 2023-05-20 03:09:25 +02:00)
30a0a77cb2  Fixed an abort()  (niansa, 2023-05-20 02:53:32 +02:00)
5feca59be7  Fixed linebreaks and support latest llama.cpp  (niansa, 2023-05-20 02:25:46 +02:00)
a608135bf7  Removed new llama sampling stub  (niansa, 2023-05-19 16:39:09 +02:00)
ad1e8a3368  Completed mainline llama implementation  (niansa, 2023-05-19 16:35:55 +02:00)
4974338e41  Fixup step #2  (niansa, 2023-05-19 16:18:26 +02:00)
9bf70e3f5d  Renamed llama-mainline to llama_old  (niansa, 2023-05-19 15:57:17 +02:00)
abbb35c6a9  Minor improvements on EOS handling  (2023-05-17 10:51:20 +02:00)
4ec47699f0  Repeat penalty fixes  (2023-05-17 08:44:25 +02:00)
60fe6b9c55  Load implementations as shared objects  (niansa, 2023-05-16 19:10:05 +00:00)
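A hedged sketch of the mechanism this commit introduced: loading one implementation at runtime and resolving a C-linkage factory symbol from it. "construct" is an illustrative symbol name, not the project's actual export.

```cpp
#include <dlfcn.h>
#include <cstdio>

void *load_implementation(const char *path) {
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    // Resolve the plugin's factory entry point
    void *factory = dlsym(handle, "construct");
    if (!factory) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return nullptr;
    }
    return factory;
}
```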
59a6a8b1d1  Check for errors during llama evaluation properly  (niansa, 2023-05-11 18:46:41 +02:00)
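A minimal sketch of the "check the return value" fix: `llama_eval()` in the API of that era returned non-zero on failure, which callers had to propagate instead of assuming success.

```cpp
#include <vector>
#include <llama.h>

bool evaluate(llama_context *ctx, const std::vector<llama_token> &tokens,
              int n_past, int n_threads) {
    // Zero means success; anything else must reach the caller
    return llama_eval(ctx, tokens.data(), (int)tokens.size(), n_past, n_threads) == 0;
}
```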
087fe1396b  Fixed all other known compilation issues  (niansa, 2023-05-10 21:50:37 +02:00)
b61c751d33  Reverted last commit, but fixed invalid ssize_t typedef on MSVC  (niansa, 2023-05-10 16:39:44 +02:00)
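The conventional portability guard behind this pair of commits: MSVC does not provide the POSIX `ssize_t` type, so it is mapped to the platform's signed size type. A sketch of the standard fix:

```cpp
// SSIZE_T from <BaseTsd.h> is the conventional substitute on Windows.
#ifdef _MSC_VER
#include <BaseTsd.h>
typedef SSIZE_T ssize_t;
#else
#include <sys/types.h> // ssize_t on POSIX systems
#endif
```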
bdb87534e8  Eliminate use of ssize_t  (niansa, 2023-05-10 16:37:43 +02:00)
6968f3459a  Made last commit more beautiful  (niansa, 2023-05-09 22:25:04 +02:00)
a43c5e64ce  Fixed compilation error with exceptions enabled  (niansa, 2023-05-09 22:23:34 +02:00)
5e666d83db  Actually implemented error return values  (niansa, 2023-05-09 22:20:01 +02:00)
0d5cba0530  Added LM_NOEXCEPT cmake option  (niansa, 2023-05-09 21:33:49 +02:00)
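A hedged sketch of what an LM_NOEXCEPT-style build option typically selects, tying together the error-return-value and exceptions commits above: when exceptions are disabled at configure time (e.g. via a CMake option that adds `-DLM_NOEXCEPT`), errors are reported through return values instead of throws. Only the LM_NOEXCEPT name comes from the commit; the macro below is an illustration.

```cpp
// LM_THROW and its two behaviors are assumptions for illustration.
#ifdef LM_NOEXCEPT
#   define LM_THROW(msg, ret) do { return (ret); } while (0)
#else
#   include <stdexcept>
#   define LM_THROW(msg, ret) throw std::runtime_error(msg)
#endif

bool load_model(const char *path) {
    if (path == nullptr) {
        LM_THROW("no model path given", false); // returns false when built with LM_NOEXCEPT
    }
    // ... actual loading ...
    return true;
}
```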
7076f863d4  Check for task termination  (niansa, 2023-05-05 19:02:28 +02:00)
5a57db5e75  Added CoSched support  (2023-05-04 15:22:32 +02:00)
d236e36d26  Implemented proper scrolling  (niansa, 2023-04-28 18:04:07 +02:00)
493186509a  Renamed function and updated Python bindings  (2023-04-27 09:48:44 +02:00)
ca4ad5f096  Added context window scrolling with top bar  (2023-04-27 09:45:37 +02:00)
219186f4b6  Take const string reference instead of string view in append()  (2023-04-27 09:31:22 +02:00)
4e74517bb5  Changed parameter types to some that make more sense  (2023-04-27 09:27:09 +02:00)
0661b2e33d  Updated for latest llama.cpp and working gpt-j implementation  (2023-04-27 08:21:02 +02:00)
566f8227fd  Should be functional now  (2023-04-27 08:00:08 +02:00)
316e8cbf18  Initial GPT-J support  (2023-04-26 16:32:45 +02:00)
aad1bd9ae4  Made Inference class virtual  (2023-04-26 10:59:24 +02:00)
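The direction the last commit points at, sketched under assumptions: an abstract base class so each backend (llama.cpp, GPT-J, ...) can live behind one interface. The method names follow the run()/append() commits above; the exact signatures are guesses.

```cpp
#include <string>
#include <string_view>

class Inference {
public:
    virtual ~Inference() = default;

    // Feed prompt text into the model's context window
    virtual void append(const std::string &text) = 0;

    // Generate a completion, stopping at the given end string
    virtual std::string run(std::string_view end) = 0;
};
```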