Hi there,
I noticed that the data references are no longer included in the query response; this has been the case since commit 8bc7522.
I was wondering if you could explain the reason for removing these references. I think the context-specific evidence is useful for the user to review when evaluating the response from the LLM.
Yeah, I implemented this feature at first.
But I found that GraphRAG attaches the reference ID, which is a unique identifier inside the storage. For most users that identifier is pretty meaningless, because you have to look into the internals of nano-graphrag's storage to get the corresponding data.
I think the default version of nano-graphrag should output something basic, so I removed the data references. But I'm thinking of adding the referred data as an output of the query, so advanced users can build their own answer styles.
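Roughly what I have in mind is something like the sketch below: get the plain answer as usual, and separately expose the retrieved context so callers can attach references in whatever style they want. This is just a sketch, assuming the current `QueryParam` with its `only_need_context` flag; the working directory and question are placeholders.

```python
from nano_graphrag import GraphRAG, QueryParam

# Sketch: run the same local query twice, once for the plain answer and once
# with only_need_context=True to get the raw retrieved data (entities,
# relations, community reports, text chunks) that the LLM saw.
rag = GraphRAG(working_dir="./nano_graphrag_cache")

question = "What are the main themes of the story?"

# Plain answer, as the default prompt produces it today.
answer = rag.query(question, param=QueryParam(mode="local"))

# The retrieved context, so advanced users can build their own answer style.
context = rag.query(question, param=QueryParam(mode="local", only_need_context=True))

print(answer)
print("---- context used ----")
print(context)
```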
I naively reverted the local_rag_response prompt to return the data references (i.e. "This is an example sentence supported by multiple data references [Data: (record ids); (record ids)]." ...), and if I also return the response together with the context data, I can at least link from the response to the top-level data references that were used to form it. A minimal sketch of that linking step is below.
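For reference, here is a minimal sketch of how I pull the reference markers back out of the response so they can be matched against the returned context data. The marker format and regex are assumptions based on the GraphRAG-style prompt; adjust them if your reverted prompt formats references differently.

```python
import re

# Hypothetical helper: extract the "[Data: Reports (2, 7); Entities (12)]"
# style markers from the LLM response so they can be looked up in the
# context data returned alongside the answer.
DATA_REF_PATTERN = re.compile(r"\[Data:\s*([^\]]+)\]")

def extract_data_references(response: str) -> list[str]:
    """Return the raw reference groups, e.g. ['Reports (2, 7); Entities (12)']."""
    return DATA_REF_PATTERN.findall(response)

response = (
    "The story centres on resilience "
    "[Data: Reports (2, 7); Entities (12)]."
)
print(extract_data_references(response))
# ['Reports (2, 7); Entities (12)']
```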
It might be useful to be able to dig into the graph further, for example to see sources that were used to form a community, but I imagine this could get confusing quite quickly. Maybe navigating the context data is more of a graph visualisation problem.