Enable passing secrets to action server. #11
Conversation
toolkit = ActionServerToolkit(url=kwargs["url"], api_key=kwargs["api_key"])

def _build_context(secrets: dict) -> str:
    ctx = json.dumps({"secrets": secrets})
    return base64.b64encode(ctx.encode("utf-8")).decode("ascii")
What would be actually stored in the database? It would be maybe easier to assume that secrets is already in the right format and do the base64 encoding etc before storing the data.
This PR takes the approach of the config looking like this:

class ActionServerConfig(ToolConfig):
    url: str
    api_key: str
    secrets: Optional[dict]
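A self-contained sketch of that shape, with a dataclass standing in for `ToolConfig` (assumed here to be a Pydantic-style model with typed fields). Making `secrets` optional means configs stored before this PR still load:

```python
from dataclasses import dataclass
from typing import Optional

# Dataclass stand-in for the Pydantic-style ToolConfig base class (assumption).
@dataclass
class ActionServerConfig:
    url: str
    api_key: str
    secrets: Optional[dict] = None  # optional, so configs saved before this change still parse

cfg = ActionServerConfig(url="https://example.invalid", api_key="dummy-key")
print(cfg.secrets)  # None
```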
I followed these docs for context encoding.
Here's an example assistant config in the database:
{
  "configurable": {
    "type": "agent",
    "type==agent/agent_type": "GPT 3.5 Turbo",
    "type==agent/interrupt_before_action": false,
    "type==agent/retrieval_description": "Can be used to look up information that was uploaded to this assistant.\nIf the user is referencing particular files, that is often a good hint that information may be here.\nIf the user asks a vague question, they are likely meaning to look up info from this retriever, and you should call it!",
    "type==agent/system_message": "You are a helpful assistant.",
    "type==agent/tools": [
      {
        "id": "f7015ef0-5a84-4da0-b2f1-f37ba09c548d",
        "type": "action_server_by_sema4ai",
        "name": "Action Server by Sema4.ai",
        "description": "Run AI actions with [Sema4.ai Action Server](https://github.com/Sema4AI/actions).",
        "config": {
          "url": "https://sixty-five-chatty-rats.robocorp.link",
          "api_key": "cbHz3L1yx8_0SMBsuqjcEtDvY4CZEm2HuJSMmT-U8oY",
          "secrets": {
            "mypwd": "green"
          }
        }
      }
    ],
    "type==chat_retrieval/llm_type": "GPT 3.5 Turbo",
    "type==chat_retrieval/system_message": "You are a helpful assistant.",
    "type==chatbot/llm_type": "GPT 3.5 Turbo",
    "type==chatbot/system_message": "You are a helpful assistant."
  }
}
@bakar-io How about if we add an "additionalHeaders" (or similar) key to the config, with key:value pairs as its value? The value could then be stored in the db exactly as it is sent and handled on the calling side. This might also make it easier to later add testing capabilities for Action Server calls that are not directly tied to the OpenGPTs backend.
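For illustration, the alternative could make the stored tool config look roughly like this. The "additionalHeaders" key name and the header name/value are assumptions for the sketch, not something from this PR, and the api_key is elided:

```json
"config": {
  "url": "https://sixty-five-chatty-rats.robocorp.link",
  "api_key": "...",
  "additionalHeaders": {
    "x-action-context": "<base64-encoded context, produced by the caller>"
  }
}
```

The backend would then forward these headers verbatim, without encoding or interpreting them itself.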
That was one of the options considered. Spoke with Kari this morning and he was conflicted between these two options.
Re: "The value could then be directly stored as it is sent in the db and handled on the calling side." That is my preference as well: I would like as little logic in the OpenGPTs backend as possible.
Related to Sema4AI/langchain#1