
Add possibility to offload inactive KV Caches to RAM #33

Open
davmacario opened this issue May 20, 2024 · 0 comments
Labels: enhancement (New feature or request) · extras (Not directly related to the thesis, low priority) · idea (A new idea - may or may not bring improvements)

Comments

@davmacario (Owner)

Since the Llama architecture introduced KV caching, each node has to cache the K and V matrices for every generated sample, which increases memory usage on the device. By storing the inactive KV caches in RAM instead of VRAM, it is possible to save GPU memory, especially when the number of samples is high.

Implementation idea: add an --offload-kv flag.

This could potentially slow down inference, as it requires transferring data between CPU and GPU memory at each local processing step.
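A minimal PyTorch sketch of what such a mechanism could look like (the class name, method layout, and per-sample bookkeeping below are hypothetical illustrations, not code from this repo): the active sample's K/V tensors live on the compute device, while all others are kept in CPU RAM.

```python
import torch


class KVCacheManager:
    """Hypothetical sketch: keep only the active sample's KV cache on the
    compute device; park every other sample's cache in CPU RAM.
    (A real implementation would likely use pinned memory and async
    copies to hide transfer latency; omitted here for clarity.)"""

    def __init__(self, device: str = "cuda" if torch.cuda.is_available() else "cpu"):
        self.device = device
        self.caches = {}       # sample_id -> (K, V) tensor pair
        self.active_id = None  # sample currently resident on the device

    def add(self, sample_id, k, v):
        # New caches start offloaded; they move to the device on activation.
        self.caches[sample_id] = (k.cpu(), v.cpu())

    def activate(self, sample_id):
        # Offload the previously active cache back to RAM...
        if self.active_id is not None and self.active_id != sample_id:
            k_old, v_old = self.caches[self.active_id]
            self.caches[self.active_id] = (k_old.cpu(), v_old.cpu())
        # ...and bring the requested sample's cache onto the compute device.
        k, v = self.caches[sample_id]
        self.caches[sample_id] = (k.to(self.device), v.to(self.device))
        self.active_id = sample_id
        return self.caches[sample_id]
```

Round-robin pipelines would call `activate(sample_id)` just before each sample's local forward pass, paying one host-to-device copy per switch, which is where the slowdown mentioned above comes from.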

@davmacario davmacario added enhancement New feature or request extras Not directly related to the thesis, low priority idea A new idea - may or may not bring improvements labels May 20, 2024
@davmacario davmacario self-assigned this May 20, 2024
Projects: None yet
Development: No branches or pull requests

1 participant