infill : add new example + extend server API (ggerganov#3296)
* vvhg-code-infill (ggerganov#1)
* infill in separate example (ggerganov#2)
* reverted changes to main and added infill example
* cleanup
* naming improvement
* make : add missing blank line
* fix missing semicolon
* brought infill up to current main code
* cleanup

Co-authored-by: Cebtenzzre <[email protected]>
1 parent f5ef5cf, commit c97f01c
Showing 11 changed files with 1,067 additions and 1 deletion.
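The commit title also says the server API was extended, but none of the server-side changes are visible in this excerpt. As a hedged sketch only: assuming the extension adds an `/infill` endpoint that accepts `input_prefix` and `input_suffix` fields (an assumption, not confirmed by the diff shown here), a request against a server already running on its default port might look like this:

```bash
# Hedged sketch: assumes the server is running locally on port 8080 and that
# this commit adds an /infill endpoint taking input_prefix / input_suffix fields.
curl --request POST \
    --url http://localhost:8080/infill \
    --header "Content-Type: application/json" \
    --data '{
        "input_prefix": "def helloworld():\n    print(\"hell",
        "input_suffix": "\n    print(\"goodbye world\")\n",
        "n_predict": 20
    }'
```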
.gitignore
@@ -40,6 +40,7 @@ models-mnt
/embedding
/gguf
/gguf-llama-simple
/infill
/libllama.so
/llama-bench
/main
examples/infill/CMakeLists.txt
@@ -0,0 +1,8 @@
set(TARGET infill)
add_executable(${TARGET} infill.cpp)
install(TARGETS ${TARGET} RUNTIME)
target_link_libraries(${TARGET} PRIVATE common llama ${CMAKE_THREAD_LIBS_INIT})
target_compile_features(${TARGET} PRIVATE cxx_std_11)
if(TARGET BUILD_INFO)
  add_dependencies(${TARGET} BUILD_INFO)
endif()
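A short sketch of building the new target, assuming the standard out-of-source CMake workflow used by llama.cpp (the build directory name is just a convention):

```bash
# Hedged sketch: configure an out-of-source CMake build from the llama.cpp
# repository root, then build only the `infill` target defined above.
mkdir -p build
cd build
cmake ..
cmake --build . --config Release --target infill
```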
examples/infill/README.md
@@ -0,0 +1,41 @@
# llama.cpp/example/infill

This example shows how to use the infill mode with Code Llama models that support it. Currently the 7B and 13B models support infill mode.

Infill supports most of the options available in the main example.

For further information, have a look at the main README.md in llama.cpp/example/main/README.md.
## Common Options

In this section, we cover the most commonly used options for running the `infill` program with the LLaMA models:

- `-m FNAME, --model FNAME`: Specify the path to the LLaMA model file (e.g., `models/7B/ggml-model.bin`).
- `-i, --interactive`: Run the program in interactive mode, allowing you to provide input directly and receive real-time responses.
- `-n N, --n-predict N`: Set the number of tokens to predict when generating text. Adjusting this value can influence the length of the generated text.
- `-c N, --ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.
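As a rough illustration of combining these options, the sketch below loads a model (the filename is only a placeholder), enlarges the context window, caps generation at 64 tokens, and runs interactively:

```bash
# A minimal sketch; the model filename is a placeholder.
./infill -m models/codellama-7b.Q4_K_M.gguf -c 4096 -n 64 -i
```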
## Input Prompts

The `infill` program provides several ways to interact with the LLaMA models using input prompts:

- `--in-prefix PROMPT_BEFORE_CURSOR`: Provide the prefix directly as a command-line option.
- `--in-suffix PROMPT_AFTER_CURSOR`: Provide the suffix directly as a command-line option.
- `--interactive-first`: Run the program in interactive mode and wait for input right away. (More on this below.)
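Because prefixes and suffixes often span several lines, it can be convenient to keep them in files and splice them in with shell substitution. This is only a usage sketch; `prefix.txt` and `suffix.txt` are hypothetical files, and the flags are the ones listed above.

```bash
# A usage sketch: splice multi-line prefix/suffix text in from hypothetical files.
# Note that $(...) command substitution strips trailing newlines.
./infill -m models/codellama-13b.Q5_K_S.gguf -n 32 \
    --in-prefix "$(cat prefix.txt)" \
    --in-suffix "$(cat suffix.txt)"
```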
## Interaction

The `infill` program offers a seamless way to interact with LLaMA models, allowing users to receive real-time infill suggestions. Interactive mode can be triggered with `--interactive` or `--interactive-first`.

### Interaction Options

- `-i, --interactive`: Run the program in interactive mode, allowing users to get real-time code suggestions from the model.
- `--interactive-first`: Run the program in interactive mode and immediately wait for user input before starting the text generation.
- `--color`: Enable colorized output to visually distinguish between prompts, user input, and generated text.
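A sketch of starting an interactive session with the options above (the model path is again a placeholder):

```bash
# A sketch: start interactively, wait for user input first, and colorize output.
./infill -m models/codellama-13b.Q5_K_S.gguf -c 4096 --color --interactive-first
```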
### Example

```bash
./infill -t 10 -ngl 0 -m models/codellama-13b.Q5_K_S.gguf -c 4096 --temp 0.7 --repeat_penalty 1.1 -n 20 --in-prefix "def helloworld():\n print(\"hell" --in-suffix "\n print(\"goodbye world\")\n "
```