Server #228 (Draft)

wants to merge 4 commits into main
Conversation

@geroldmeisinger (Contributor) commented Jun 28, 2024

This pull request adds taggui/run_server.py, which starts a FastAPI server on port 11435 and provides a command-line and an HTTP interface for making requests to the server. The command-line and API structure follows Ollama.
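
To give a rough picture of the server side, here is a minimal sketch of what such an entry point can look like. Only the endpoint path /api/generate, the port 11435 and the response shape are taken from this description; the other names (GenerateRequest, run_captioning) are illustrative placeholders, not the actual contents of run_server.py:

```python
# Minimal sketch of a FastAPI server exposing an Ollama-style /api/generate
# endpoint on port 11435. Everything except the path, port and response shape
# is an assumption, not the code in this PR.
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel


class GenerateRequest(BaseModel):
    model: str
    prompt: str
    images: list[str] = []  # base64-encoded images, Ollama-style
    stream: bool = False


app = FastAPI()


def run_captioning(request: GenerateRequest) -> str:
    # Placeholder: a real implementation would decode request.images[0]
    # and hand it, together with the prompt, to the loaded captioning model.
    return f'(caption for prompt: {request.prompt})'


@app.post('/api/generate')
def generate(request: GenerateRequest):
    return {'type': 'generate', 'response': run_captioning(request)}


if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=11435)
```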

Ollama keeps a model loaded and serves it via an HTTP server. Default settings are loaded from modelfiles. Requests are made via /api/generate, which also accepts optional settings as JSON. Settings are stateful, and any change is reflected in subsequent calls. Ollama also implements some vision models (like Llava, Moondream...), but it relies entirely on Llama.cpp as the model loader backend, which is very flexible and fast, but is also the reason CogVLM2 is not supported yet.
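
As an illustration of how such stateful settings behave, consider the two hypothetical payloads below. Only model, prompt, stream and images appear in the actual examples further down; 'temperature' is an assumed option name, not a documented field of this server:

```python
# Hypothetical request payloads illustrating Ollama-style stateful settings.
first_request = {
    'model': 'THUDM/cogvlm2-llama3-chat-19B-int4',
    'prompt': 'describe this image',
    'stream': False,
    'images': ['<base64-encoded image>'],
    'temperature': 0.7,  # optional setting, applied now and remembered
}

# A follow-up request that omits the setting: because settings are stateful,
# it would still be generated with temperature 0.7.
second_request = {
    'model': 'THUDM/cogvlm2-llama3-chat-19B-int4',
    'prompt': 'describe this image',
    'stream': False,
    'images': ['<base64-encoded image>'],
}
```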

This implementation only provides the bare minimum: serve, run and /api/generate. I duplicated CaptioningThread into CaptioningCore and stripped everything Qt, while keeping the code as unchanged as possible. Because Ollama expects an array of base64-encoded images, I also stripped any reference to img_path and img_tags (i.e. replace_template_variable). (To be precise, Ollama expects an array of base64-encoded images, of which only the first one is used for most vision models; I duplicated this behaviour.)
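
A small sketch of that "first image only" behaviour, assuming Pillow is available; load_first_image is a hypothetical helper, not a function from this PR:

```python
import base64
import io

from PIL import Image


def load_first_image(images: list[str]) -> Image.Image:
    # Mirror the Ollama behaviour described above: accept an array of
    # base64-encoded images but only decode and use the first one.
    if not images:
        raise ValueError('no image provided')
    return Image.open(io.BytesIO(base64.b64decode(images[0])))
```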

$ python taggui/run_server.py serve
INFO:     Started server process [83587]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:11435 (Press CTRL+C to quit)
$ python taggui/run_server.py run THUDM/cogvlm2-llama3-chat-19B-int4
Loading THUDM/cogvlm2-llama3-chat-19B-int4...
Send a message path/to/image.png (/? for help)
>>> describe this image images/icon.png
The image depicts a stylized illustration of a landscape scene with two mountains in the foreground, a sun in the sky, and a cloud to the right of the sun. The mountains are rendered in shades of blue, suggesting they are made of stone or are covered in vegetation. The sun is a bright yellow circle, indicating it is shining. The cloud is white and fluffy, suggesting it is a cumulus cloud. The sky is a light blue, indicating a clear day. In the bottom right corner
$ curl http://localhost:11435/api/generate -d "{ \"model\": \"THUDM/cogvlm2-llama3-chat-19B-int4\", \"stream\": false, \"prompt\": \"describe this image\", \"images\": [\"$(base64 -w 0 ./images/icon.png)\"]}"
{"type":"generate","response":"The image depicts a stylized illustration of a landscape scene with two mountains in the foreground  a sun in the sky  and a cloud to the right of the sun. The mountains are rendered in shades of blue  suggesting they are made of stone or are covered in vegetation. The sun is a bright yellow circle  indicating it is shining. The cloud is white and fluffy  suggesting it is a cumulus cloud. The sky is a light blue  indicating a clear day. In the bottom right corner"}

(If you get an "Argument list too long" error, it is probably because your image is too big and the base64 expansion exceeds the shell's maximum command-line length.)
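
One way around that limit is to send the request from Python instead of the shell, so the base64 payload never passes through the command line. A sketch using the requests library, mirroring the curl call above:

```python
import base64

import requests

# Encode the image in Python rather than expanding it on the command line.
with open('images/icon.png', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('ascii')

response = requests.post(
    'http://localhost:11435/api/generate',
    json={
        'model': 'THUDM/cogvlm2-llama3-chat-19B-int4',
        'stream': False,
        'prompt': 'describe this image',
        'images': [image_b64],
    },
)
print(response.json()['response'])
```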

The idea is to extract the model loader part from the rest and provide a minimal interface. Requests can be made entirely via HTTP, which allows more flexibility for UI applications and automation, while the server keeps the model loaded, independently of any UI application, for as long as the server is running.
