Commit a6b3e11 (1 parent: 71446ff). Showing 1 changed file with 298 additions and 0 deletions.
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -0,0 +1,298 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"id": "0a957257-5632-49e7-9480-c9c15fb3cfe7", | ||
"metadata": { | ||
"editable": true, | ||
"slideshow": { | ||
"slide_type": "" | ||
}, | ||
"tags": [] | ||
}, | ||
"source": [ | ||
"# LISA v2 Demo\n", | ||
"\n", | ||
"In this notebook, we'll dive into using LISA for your own LLM applications. LISA supports the [OpenAI specification](https://platform.openai.com/docs/api-reference), so you can use it as a drop-in replacement for any other OpenAI-compatible models that your application already uses.\n", | ||
"\n", | ||
"This demo uses the [openai-python](https://github.com/openai/openai-python) library, so you may need to install the dependency with `pip install openai` if you have not done so already." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "088d9ee7-e943-4bbd-925f-ba137a63624a", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Import libraries that will be used in this demo:\n", | ||
"\n", | ||
"## Just for printing things nicely\n", | ||
"from IPython.lib.pretty import pretty\n", | ||
"\n", | ||
"## OpenAI library for communicating with LLMs\n", | ||
"from openai import OpenAI\n", | ||
"from openai.types.chat import ChatCompletionMessage" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "c9c91651-1dc0-4430-9a1a-b7ceadb8f869", | ||
"metadata": {}, | ||
"source": [ | ||
"## Configuration and Validation\n", | ||
"There are a few differences in how LISA handles its API tokens, so we will need some configuration to set up the OpenAI client, but after that, you will be free to use it like the rest of your OpenAI clients.\n", | ||
"\n", | ||
"**Notice**: This demo is dependent on the models that are deployed in your account. There are two more fields where you would have to replace values:\n", | ||
"- YOUR-TEXTGEN-MODEL-HERE - in the \"Chatting With Your Model\" cell, replace this text with a text generation model ID\n", | ||
"- YOUR-EMBEDDING-MODEL-HERE - In the \"Embeddings\" cell, replace this text with an embedding model ID" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "8834e55f-9746-43d7-bddc-623796dba41b", | ||
"metadata": { | ||
"editable": true, | ||
"slideshow": { | ||
"slide_type": "" | ||
}, | ||
"tags": [] | ||
}, | ||
"outputs": [], | ||
"source": [ | ||
"#### REQUIRED INFO, FILL THIS OUT FIRST\n", | ||
"\n", | ||
"# If you still need an API token, follow the instructions here to get one: https://github.com/awslabs/LISA?tab=readme-ov-file#programmatic-api-tokens\n", | ||
"api_token = \"YOUR-TOKEN-HERE\"\n", | ||
"\n", | ||
"# The base URL is the LISA Serve REST API load balancer, plus the \"/v2/serve\" path, as that is where LISA starts handling the OpenAI spec.\n", | ||
"# If you set up a custom domain with a certificate, use that domain here, otherwise, you may use the LISA Serve REST API ALB name directly.\n", | ||
"lisa_serve_base_url = \"https://YOUR-ALB-HERE/v2/serve\"\n", | ||
"\n", | ||
"#### END REQUIRED INFO\n", | ||
"\n", | ||
"# initialize OpenAI client\n", | ||
"client = OpenAI(\n", | ||
" api_key=\"ignored\", # LISA ignores this field, but it must be defined # pragma: allowlist-secret not a real key\n", | ||
" base_url=lisa_serve_base_url,\n", | ||
" default_headers={\"Api-Key\": api_token},\n", | ||
")\n", | ||
"\n", | ||
"# As an example, let's list models. If this succeeds, then we configured our client correctly.\n", | ||
"model_list = client.models.list().data\n", | ||
"print(pretty(model_list))" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "f8148947-f298-45b9-8508-64e23c7dc279", | ||
"metadata": {}, | ||
"source": [ | ||
"## Chatting With Your Model\n", | ||
"\n", | ||
"Let's query our model by using the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat/create). Because we've already configured our client and have confirmed that we see a list of models, our next steps are:\n", | ||
"1. Identify a model we want to use\n", | ||
"2. Set up initial messages context for talking with the model\n", | ||
"3. Record context if we want to ask followup questions\n", | ||
"\n", | ||
"The following cell does all three of these." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "7cc1aa5a-9950-42d3-914c-50fc3a869f35", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Let's continue using one of the models found in that list. Edit the following to match one in your response.\n", | ||
"model = \"YOUR-TEXTGEN-MODEL-HERE\"\n", | ||
"\n", | ||
"# Let's start a conversation!\n", | ||
"# Not all models support the \"system\" role, so for a more general purpose demo, this example uses the \"user\" role for the first message.\n", | ||
"messages = [\n", | ||
" {\"role\": \"user\", \"content\": \"You are a helpful and friendly AI assistant who answers questions succinctly and accurately. If you do not know the answer, you truthfully admit it instead of making up an answer. All following messages are a conversation between you, the AI asstant and a user. Acknowledge that you understand.\"},\n", | ||
" {\"role\": \"assistant\", \"content\": \"Understood.\"},\n", | ||
" {\"role\": \"user\", \"content\": \"How do I translate the following into Dutch? 'I have no bananas today.'\"},\n", | ||
"]\n", | ||
"chat_response = client.chat.completions.create(\n", | ||
" model=model,\n", | ||
" messages=messages\n", | ||
")\n", | ||
"assistant_message = chat_response.choices[0].message\n", | ||
"print(assistant_message.content.strip()) # Print how our model responded\n", | ||
"\n", | ||
"# Let's append that message to the context, and keep asking questions after\n", | ||
"messages.append(assistant_message)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "65ddc3b3-c64f-4274-b86e-4ffece54007d", | ||
"metadata": {}, | ||
"source": [ | ||
"### Chatting With Context\n", | ||
"\n", | ||
"Now that we've make a call to the model and received and recorded a response, we can continue the conversation as if we were talking to another human. The following cell asks a highly context-sensitive question that would not make sense without a conversation before it. By adding the model's response to the list of messages, and by adding our next query to that same list, we send the entire conversation history in the request, and the model is now capable of answering the question. In this case, the model will correctly replace the word \"banana\" with \"orange\" and fulfill a request to translate text into another language, which is only achievable with the context from the previous messages." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "56834ce5-3ef8-4f65-94c2-62783e07be04", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Given we asked about bananas, a fruit, and the messages contain context for translation with a fruit, we should still expect a translation response.\n", | ||
"messages.append(\n", | ||
" {\"role\": \"user\", \"content\": \"What about oranges instead?\"}\n", | ||
")\n", | ||
"chat_response = client.chat.completions.create(\n", | ||
" model=model,\n", | ||
" messages=messages\n", | ||
")\n", | ||
"assistant_message = chat_response.choices[0].message\n", | ||
"print(assistant_message.content.strip())\n", | ||
"# And add to context\n", | ||
"messages.append(assistant_message)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "1f5dfda1-ac30-434e-a686-f69573cd4272", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Let's add that most recent message to the context, and print out what we have so far:\n", | ||
"\n", | ||
"print(pretty(messages))" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "29b27311-1b48-43c0-96fb-a79e92f5643a", | ||
"metadata": {}, | ||
"source": [ | ||
"## Completions\n", | ||
"\n", | ||
"In case your application still has requirements to use the [legacy Completions API](https://platform.openai.com/docs/api-reference/completions/create), this can still be supported. Using the same client, we can use the API to generate text, like so:" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "73981c0d-341e-4e51-b351-df7d9bbbf7d4", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"completions_response = client.completions.create(\n", | ||
" model=model,\n", | ||
" prompt=\"Generate a fully commented Python code block that can print the phrase 'Hello, World!' 4 times. This code block should be embedded in a markdown block so that it can be rendered in a Jupyter notebook\"\n", | ||
")\n", | ||
"print(completions_response.choices[0].text.strip())" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "0c551eb4-69fb-42cc-91d5-dd80aa9cd09d", | ||
"metadata": {}, | ||
"source": [ | ||
"## Streaming\n", | ||
"\n", | ||
"Because querying an LLM does not produce instantaneous output, you may want to consider streaming so that you get tokens as they become available. If your model supports it, then we can handle streaming with the LISA endpoint too.\n", | ||
"Using our same client, let's ask for a lot of text that we can stream instead of waiting for the entire response. Using this method, we should see text much sooner than we would for the previous requests, and we will also see the text\n", | ||
"scroll into view as it becomes available." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "ad23b276-0def-464c-bdf7-1ad34f935dac", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"chat_streaming_response = client.chat.completions.create(\n", | ||
" model=model,\n", | ||
" messages=[\n", | ||
" {\"role\": \"user\", \"content\": \"In as many words and as much detail as possible, describe how to create a peanut butter and jelly sandwich. Assume that the user is starting with fresh peanuts and fresh grapes.\"},\n", | ||
" ],\n", | ||
" stream=True\n", | ||
")\n", | ||
"for chunk in chat_streaming_response:\n", | ||
" print(chunk.choices[0].delta.content or \"\", end=\"\")" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "dba9a9e1-4f86-4dfc-a945-dfaaf5abb163", | ||
"metadata": {}, | ||
"source": [ | ||
"## Embeddings\n", | ||
"\n", | ||
"If your application integrates with a vector datastore, then you are almost certainly going to need some form of vector generation. If LISA is serving an embedding model for you, then the [Embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) is what you need.\n", | ||
"\n", | ||
"The following will show how to call an embedding model so that you can handle the vectors directly in your application." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "cfd14e09-00f1-47f5-a2ba-851b885355d7", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Change this model so that it matches your embedding model as listed in the models at the beginning of this demo\n", | ||
"embedding_model = \"YOUR-EMBEDDING-MODEL-HERE\"\n", | ||
"\n", | ||
"embeddings_response = client.embeddings.create(\n", | ||
" model=embedding_model,\n", | ||
" input=\"Hello, world!\",\n", | ||
")\n", | ||
"vector = embeddings_response.data[0].embedding\n", | ||
"\n", | ||
"# Print out some vector stats instead of showing a huge number of numbers\n", | ||
"print(f\"Vector length: {len(vector)}, Vector min: {min(vector)}, Vector max: {max(vector)}\")" | ||
] | ||
}, | ||
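{ | ||
"cell_type": "markdown", | ||
"id": "embeddings-similarity-note", | ||
"metadata": {}, | ||
"source": [ | ||
"The next cell is a minimal sketch of how an application might compare two of these vectors. It assumes cosine similarity is an appropriate metric for your embedding model and reuses the `client` and `embedding_model` defined above; a real vector datastore would typically handle this comparison for you." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "embeddings-similarity-demo", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Minimal sketch (assumes cosine similarity fits your embedding model): embed two sentences and compare them.\n", | ||
"import math\n", | ||
"\n", | ||
"def cosine_similarity(a, b):\n", | ||
"    # Dot product of the two vectors divided by the product of their magnitudes\n", | ||
"    dot = sum(x * y for x, y in zip(a, b))\n", | ||
"    norm_a = math.sqrt(sum(x * x for x in a))\n", | ||
"    norm_b = math.sqrt(sum(x * x for x in b))\n", | ||
"    return dot / (norm_a * norm_b)\n", | ||
"\n", | ||
"texts = [\"The cat sat on the mat.\", \"A cat is sitting on a rug.\"]\n", | ||
"pair_response = client.embeddings.create(model=embedding_model, input=texts)\n", | ||
"vec_a, vec_b = (item.embedding for item in pair_response.data)\n", | ||
"\n", | ||
"print(f\"Cosine similarity: {cosine_similarity(vec_a, vec_b):.4f}\")" | ||
] | ||
}, | ||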
{ | ||
"cell_type": "markdown", | ||
"id": "e39d0f51-54ab-4636-943a-fabd34bd35eb", | ||
"metadata": {}, | ||
"source": [ | ||
"## Conclusion\n", | ||
"\n", | ||
"From this demo, we have used the OpenAI client to perform the following tasks:\n", | ||
"- Chat Completions\n", | ||
"- Chat Completions with context\n", | ||
"- Chat Completions with streaming\n", | ||
"- Legacy Completions\n", | ||
"- Embeddings\n", | ||
"\n", | ||
"All of these operations are natively supported through the OpenAI library, and we have performed all of these operations by making a single client that accesses the LISA Serve endpoint. This demo heavily relied on the `openai-python` library, but LISA will work *anywhere* that an OpenAI client can be instantiated." | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "Python 3 (ipykernel)", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.10.5" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |