Feat/llm responses #376
base: dev
Conversation
It appears this PR is a release PR (change its base from master if that is not the case).
Here's a release checklist:
- Update package version
- Update poetry.lock
- Change PR merge option
- Update template repo
- Search for objects to be deprecated
… type annotations
…alog_flow_framework into feat/llm_responses
I got an idea for more complex prompts: we can allow passing responses as prompts instead of just strings. Then it would be possible to incorporate slots into a prompt:

    model = LLM_API(
        prompt=rsp.slots.FilledTemplate(
            "You are an experienced barista in a local coffeeshop. "
            "Answer your customers' questions about coffee and barista work.\n"
            "Customer data:\nAge: {person.age}\nGender: {person.gender}\nFavorite drink: {person.habits.drink}"
        )
    )
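For illustration, here is roughly what such a slot-filled prompt would render to. This is only a sketch using plain `str.format()` and a stand-in `person` object; the actual `FilledTemplate`/slot machinery proposed above may substitute values differently.

```python
from types import SimpleNamespace

# Stand-in for extracted slot values; not the actual chatsky slot objects.
person = SimpleNamespace(age=30, gender="male", habits=SimpleNamespace(drink="latte"))

template = (
    "You are an experienced barista in a local coffeeshop. "
    "Answer your customers' questions about coffee and barista work.\n"
    "Customer data:\nAge: {person.age}\nGender: {person.gender}\n"
    "Favorite drink: {person.habits.drink}"
)

# str.format supports attribute access, so nested slot paths resolve directly.
print(template.format(person=person))
```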
I've marked all resolved conversations as resolved (don't forget to put the correct commit hash!).
There are still 25 unresolved conversations.
I've edited 9 of them with PROMPT REWORK or POSTPONED prefixes: the former are for me to resolve, the latter are to be resolved in a later PR.
Please respond to the other 16 comments (plus the ones from this review), either with the hash of the commit that resolves it or with your comments regarding the suggestion.
chatsky/llm/filters.py (Outdated)
        raise NotImplemented

    def __call__(self, ctx, request, response, model_name):
        return self.call(ctx, request, model_name) + self.call(ctx, response, model_name)
Needs to be bitwise or:

-    return self.call(ctx, request, model_name) + self.call(ctx, response, model_name)
+    return self.call(ctx, request, model_name) | self.call(ctx, response, model_name)

Add tests. They did not catch this.
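For example, a test along these lines would have caught it. This is only a sketch: it assumes `Return` is an `IntFlag`-style enum in which `Turn == Request | Response`; the names below are illustrative rather than the actual chatsky classes.

```python
from enum import IntFlag


class Return(IntFlag):
    # Assumed layout: Turn is the union of Request and Response.
    Request = 1
    Response = 2
    Turn = 3


def test_combining_two_turn_results():
    # If both the request check and the response check yield Turn,
    # `+` produces 6 (not a valid member), while `|` stays at Turn.
    assert (Return.Turn | Return.Turn) == Return.Turn
    assert (Return.Turn + Return.Turn) != Return.Turn


test_combining_two_turn_results()
```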
chatsky/llm/filters.py (Outdated)
        if request is not None and request.misc is not None and request.misc.get("important", None):
            return self.Return.Request
        if response is not None and response.misc is not None and response.misc.get("important", None):
            return self.Return.Response
If both contain "important", this will return Request instead of Turn.
Implement this as MessageFilter.
Same for FromModel.
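A rough sketch of the per-message approach, assuming the same flag-style `Return` as above. The class and method names here are illustrative, not the actual chatsky `MessageFilter` interface:

```python
from enum import IntFlag
from types import SimpleNamespace


class Return(IntFlag):
    NoReturn = 0
    Request = 1
    Response = 2
    Turn = 3  # Request | Response


class IsImportant:
    """Illustrative filter that checks one message at a time."""

    def message_is_important(self, message):
        return bool(message is not None and (message.misc or {}).get("important"))

    def __call__(self, ctx, request, response, model_name):
        result = Return.NoReturn
        if self.message_is_important(request):
            result |= Return.Request
        if self.message_is_important(response):
            result |= Return.Response
        return result  # both important -> Return.Turn


message = SimpleNamespace(misc={"important": True})
assert IsImportant()(None, message, message, "model") == Return.Turn
```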
chatsky/slots/llm.py (Outdated)
        }
        return ExtractedGroupSlot(**res)

    def __flatten_llm_group_slot(self, slot, parent_key=""):
You missed at least one in:

    if isinstance(value, LLMGroupSlot):
        items.update(self.__flatten_llm_group_slot(value, new_key))

Add tests that use nested LLMGroupSlots.
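A sketch of the behavior such a test could pin down, using plain dicts instead of the actual `LLMGroupSlot` class (the helper below is illustrative, not the PR's `__flatten_llm_group_slot`):

```python
def flatten_nested_slots(slot: dict, parent_key: str = "") -> dict:
    """Recursively flatten nested group slots into dot-separated keys."""
    items = {}
    for key, value in slot.items():
        new_key = f"{parent_key}.{key}" if parent_key else key
        if isinstance(value, dict):  # nested group slot
            items.update(flatten_nested_slots(value, new_key))
        else:
            items[new_key] = value
    return items


def test_nested_group_slot_flattening():
    nested = {"person": {"name": "str", "habits": {"drink": "str"}}}
    assert flatten_nested_slots(nested) == {
        "person.name": "str",
        "person.habits.drink": "str",
    }


test_nested_group_slot_flattening()
```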
chatsky/llm/llm_api.py (Outdated)
        raise ValueError

    async def condition(
        self, ctx: Context, prompt: str, method: BaseMethod, return_schema: Optional[BaseModel] = None
Why does condition not support context history?
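For context, a minimal sketch of what "supporting history" could mean here: assembling the prompt together with the last N dialogue turns before calling the model. The turn structure and message format below are assumptions for illustration, not the PR's API:

```python
def build_condition_messages(prompt, turns, history=5):
    """Combine the condition prompt with the last `history` dialogue turns."""
    selected = turns[-history:] if history > 0 else []
    messages = [("system", prompt)]
    for request_text, response_text in selected:
        messages.append(("human", request_text))
        messages.append(("ai", response_text))
    return messages


turns = [("hi", "hello!"), ("do you like coffee?", "I do.")]
print(build_condition_messages("Answer yes or no.", turns, history=1))
```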
chatsky/llm/llm_api.py (Outdated)
        result.annotations = {"__generated_by_model__": self.name}
        return result

    async def condition(self, prompt: str, method: BaseMethod, return_schema=None):
Is it not possible to use message schema with log probs?
Some lines are clearly not covered by the tests; slots are not tested at all.
Run poe quick_test_coverage to generate HTML reports in the htmlcov directory.
You can then view them (by opening htmlcov/index.html) to see which lines are not covered.
tutorials/llm/1_basics.py (Outdated)
it will be reused across all the nodes and therefore it will store all dialogue history.
This is not advised if you are short on tokens or if you do not need to store all dialogue history.
Alternatively, you can instantiate the model object inside of the RESPONSE field in the nodes you need.
Via the `history` parameter you can set the number of dialogue _turns_ that the model will use as the history. The default value is `5`.
This is out of place.
This should be in filtering_history or as a comment near the line where LLMResponse is initialized with history=0.
tutorials/llm/5_llm_slots.py (Outdated)
        },
        "tell": {
            RESPONSE: rsp.FilledTemplate(
                "So you are {person.username} and your occupation is {person.job}, right?"
The person group slot does not allow partial extraction.
Add the flag, mention it, and link to the partial extraction tutorial.
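To illustrate why the flag matters for this template (a sketch only; the flag name `allow_partial_extraction` is assumed to mirror the regular group slot option, and the dict below stands in for the extracted `person` slot):

```python
from typing import Optional


def fill_template(template: str, extracted: dict) -> Optional[str]:
    # With partial extraction, some values may still be missing; in that
    # case filling the response should be skipped or use a fallback.
    if any(value is None for value in extracted.values()):
        return None
    return template.format(**extracted)


person = {"username": "Alice", "job": None}  # `job` was not extracted yet
print(fill_template(
    "So you are {username} and your occupation is {job}, right?", person
))
```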
…ited from MessageFilter
…ter initialization
…gue pairs and reduce code redundancy
Description
Added functionality for calling LLMs via the langchain API to use them in responses and conditions.
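A rough sketch of the pattern this adds, with a stand-in instead of a real langchain model (the wiring and names below are assumptions, not the final API; see the PR tutorials for actual usage):

```python
import asyncio


class FakeChatModel:
    """Stand-in for a langchain chat model; only mimics an async invoke call."""

    async def ainvoke(self, messages):
        return "Espresso is a concentrated coffee brewed under pressure."


async def llm_response(dialogue, model):
    # The real LLMResponse would also apply prompts and history filters
    # before forwarding the dialogue to the model.
    return await model.ainvoke(dialogue)


print(asyncio.run(llm_response([("human", "What is espresso?")], FakeChatModel())))
```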
Checklist
List here tasks to complete in order to mark this PR as ready for review.
To Consider