Merge pull request #2 from shane-huang/ipex-llm-doc
add ipex-llm doc
shane-huang authored Jun 12, 2024
2 parents 3b3e96a + 220c42e commit a4430c7
Showing 1 changed file with 6 additions and 0 deletions.
6 changes: 6 additions & 0 deletions fern/docs/pages/manual/llms.mdx
@@ -193,3 +193,9 @@ or

When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.

### Using IPEX-LLM

For a fully private setup on Intel GPUs (such as a local PC with an iGPU, or discrete GPUs like Arc, Flex, and Max), you can use [IPEX-LLM](https://github.com/intel-analytics/ipex-llm).

To deploy Ollama and pull models using IPEX-LLM, please refer to [this guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html). Then, follow the same steps outlined in the [Using Ollama](#using-ollama) section to create a `settings-ollama.yaml` profile and run the private-GPT server.
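Once Ollama is serving a model through IPEX-LLM, the profile it pairs with might look like the sketch below. The exact keys and the `llm_model` value depend on your PrivateGPT version and on which model you pulled, so treat this as an illustrative example rather than the canonical file:

```yaml
# settings-ollama.yaml — illustrative sketch, not a verified profile
server:
  env_name: ${APP_ENV:ollama}

llm:
  mode: ollama

embedding:
  mode: ollama

ollama:
  llm_model: mistral                 # assumption: any model pulled via `ollama pull <name>`
  api_base: http://localhost:11434   # default Ollama endpoint
```

With the profile in place, the server is typically started with that profile active (for example, `PGPT_PROFILES=ollama make run`, following PrivateGPT's profile convention).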
