
Releases: 3Simplex/llama.cpp

b4125 (18 Nov 17:14, commit 531cb1c)
Skip searching root path for cross-compile builds (#10383)

b4100 (16 Nov 18:29, commit bcdb7a2)
server: (web UI) Add samplers sequence customization (#10255)

* Samplers sequence: simplified and input field.

* Removed unused function

* Modify and use `settings-modal-short-input`

* rename "name" --> "label"

---------

Co-authored-by: Xuan Son Nguyen <[email protected]>

b4067 (12 Nov 14:48, commit 54ef9cf)
vulkan: Throttle the number of shader compiles during the build step.…

b4061 (09 Nov 16:16, commit 6423c65)
metal : reorder write loop in mul mat kernel + style (#10231)

* metal : reorder write loop

* metal : int -> short, style

ggml-ci

b4042 (07 Nov 17:15, commit 5107e8c)
DRY: Fixes clone functionality (#10192)

b4007 (01 Nov 16:31, commit d865d14)
server : fix smart selection of available slot (#10120)

* Fix smart selection of available slot

* minor fix

* replace vectors of tokens with shorthands

b3987 (28 Oct 22:35, commit 61715d5)
llama : Add IBM granite template (#10013)

* Add granite template to llama.cpp

* Add granite template to test-chat-template.cpp

* Update src/llama.cpp

Co-authored-by: Xuan Son Nguyen <[email protected]>

* Update tests/test-chat-template.cpp

Co-authored-by: Xuan Son Nguyen <[email protected]>

* Added proper template and expected output

* Small change to \n

* Add code space &

Co-authored-by: Xuan Son Nguyen <[email protected]>

* Fix spacing

* Apply suggestions from code review

* Update src/llama.cpp

---------

Co-authored-by: Xuan Son Nguyen <[email protected]>

b3959 (22 Oct 13:32, commit c421ac0)
lora : warn user if new token is added in the adapter (#9948)

b3949 (21 Oct 13:46, commit d5ebd79)
rpc : pack only RPC structs (#9959)

b3943 (20 Oct 14:00, commit cda0e4b)
llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745)

* refactor llama_batch_get_one

* adapt all examples

* fix simple.cpp

* fix llama_bench

* fix

* fix context shifting

* free batch before return

* use common_batch_add, reuse llama_batch in loop

* null terminated seq_id list

* fix save-load-state example

* fix perplexity

* correct token pos in llama_batch_allocr