Is your feature request related to a problem? Please describe.
From the memory processing result, extract the category or topics to search relevant links for.
If a query can't be built from the memory processing result, update the memory processing to include a "search" parameter (or similar) that generates a query or list of queries from the memory. Use https://serper.dev/ for the search.
To generate the JSON instructions for the memory processing params for the LLM, use the langchain parser.
Example:
from datetime import datetime
from typing import List, Optional, Tuple

from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Model(BaseModel):
    requires_context: bool = Field(
        description='Based on the conversation, this tells if context is needed to answer',
        default=False)
    topics: List[str] = Field(
        description='If context is required, the topics to retrieve context from',
        default=[])
    dates_range: Optional[Tuple[datetime, datetime]] = Field(
        description='The dates range to retrieve context from',
        default=None)

parser = PydanticOutputParser(pydantic_object=Model)
print(parser.get_format_instructions())
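The "search" parameter proposed above could simply be another field on the same model, so one structured-output pass also yields the queries. A minimal sketch, assuming a hypothetical `search_queries` field (the field name and the `MemoryParams` model name are illustrative, not from the issue):

```python
from typing import List

from pydantic import BaseModel, Field

# Hypothetical extension of the model above: the same structured output the
# LLM fills in also carries ready-to-use search queries.
class MemoryParams(BaseModel):
    requires_context: bool = Field(
        default=False,
        description='Based on the conversation, this tells if context is needed to answer')
    topics: List[str] = Field(
        default=[],
        description='If context is required, the topics to retrieve context from')
    search_queries: List[str] = Field(
        default=[],
        description='Optimized search queries derived from the memory, one per topic')

# Wrapping this model in PydanticOutputParser, as above, would add the new
# field to the format instructions automatically; here we just build a value
# the parser might return.
params = MemoryParams(
    requires_context=True,
    topics=['travel'],
    search_queries=['Top 10 Colombia tourist destinations'])
```

The advantage of a dedicated field over free-form prompt text is that the parser validates the queries come back as a proper list of strings.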
Some help on the query generation LLM instructions I was using in a previous project:

def generate_google_search_query(user_input: str):
    # simple_prompt_request is a helper from that project that sends a single
    # prompt to the LLM and returns the completion text.
    return simple_prompt_request(f'''You are a Google Search expert. Your task is to convert unstructured user inputs into optimized Google search queries. Example: USER INPUT: 'Best places to visit in Colombia?' OPTIMIZED Google Search Query: 'Top 10 Colombia tourist destinations'.
Convert the following user query into an optimized Google Search query: "{user_input}"''')
As suggested, what if we just include a queries field in the summarizeMemory prompt itself, modifying the prompt a bit to generate possible search terms, and then pass them to https://serper.dev?
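Passing the generated queries to serper.dev could look like the sketch below. The endpoint and headers follow serper.dev's documented search API (POST to `/search` with the query in a JSON body and the key in an `X-API-KEY` header); the function names are illustrative. A real API key is needed for the network call, so the request construction is kept separate:

```python
import json
import urllib.request
from typing import List

SERPER_ENDPOINT = 'https://google.serper.dev/search'

def build_serper_request(query: str, api_key: str) -> urllib.request.Request:
    # serper.dev expects a POST with the query in a JSON body and the
    # API key in an X-API-KEY header.
    body = json.dumps({'q': query}).encode('utf-8')
    return urllib.request.Request(
        SERPER_ENDPOINT,
        data=body,
        headers={'X-API-KEY': api_key, 'Content-Type': 'application/json'},
        method='POST')

def search_queries(queries: List[str], api_key: str) -> List[dict]:
    # Run each generated query against serper.dev and collect the JSON results.
    results = []
    for q in queries:
        with urllib.request.urlopen(build_serper_request(q, api_key)) as resp:
            results.append(json.load(resp))
    return results
```

The search results for each query could then be attached to the memory as relevant links.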