
Interactive supervised research (like pair programming) #1011

Open
kripper opened this issue Dec 9, 2024 · 4 comments

kripper commented Dec 9, 2024

Is your feature request related to a problem? Please describe.
GPT Researcher may focus on irrelevant topics.

Describe the solution you'd like
An interactive mode where the user is asked to provide feedback so that GPT Researcher avoids focusing on irrelevant topics.

Describe alternatives you've considered
Control the research process interactively (pause, resume, and cancel irrelevant research tasks before they are executed).

Additional context
For example, when asking for "a solution that uses methods (like A or B) to solve C", GPT Researcher may waste effort comparing A with B, which is irrelevant because A and B were only mentioned as references, not as solutions to be compared.
In this case, the user should be able to somehow express that the comparison between A and B is irrelevant and that what matters is finding a method to solve C.
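
To make the request concrete, here is a minimal sketch of what such a supervision gate could look like. Everything in it is hypothetical: plan_subqueries and run_subquery are illustrative stubs, not existing GPT Researcher APIs.

```python
# Hypothetical sketch of the proposed interactive supervision loop.
# plan_subqueries() and run_subquery() are stand-in stubs for GPT
# Researcher's planning and retrieval steps.

def plan_subqueries(query: str) -> list[str]:
    # Stub: a real planner would ask an LLM to decompose the query.
    return ["compare A with B", f"find methods that solve {query}"]

def run_subquery(subquery: str) -> str:
    # Stub: a real implementation would search and summarize sources.
    return f"findings for: {subquery}"

def supervised_research(query: str) -> list[str]:
    results = []
    for sq in plan_subqueries(query):
        choice = input(f"Run subquery '{sq}'? [y/n/edit] ").strip().lower()
        if choice == "n":
            continue  # cancel an irrelevant task before it runs
        if choice == "edit":
            sq = input("Rewritten subquery: ").strip()
        results.append(run_subquery(sq))
    return results

if __name__ == "__main__":
    for finding in supervised_research("C"):
        print(finding)
```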

assafelovic (Owner) commented

@kripper this is super interesting! Would you consider adding this as a PR? It might fit best in the multi-agents solution.


kripper commented Dec 9, 2024

> @kripper this is super interesting! Would you consider adding this as a PR? It might fit best in the multi-agents solution.

I wish I could, but I'm very busy hacking on many other projects at the moment. That was also the reason I was trying to automate my research tasks :-).

kfeeeeee commented

I'd really like to second this. It would be absolutely helpful to have the ability to further refine or narrow down the research, perhaps resulting in multiple revisions of the research result. Maybe this could also be an action invoked by chatting with an LLM after the initial research is finished.

On a side note: I really like gpt-researcher - one of the most useful tools I've come across in the open-source LLM community - thank you!

ElishaKay (Collaborator) commented Dec 15, 2024

Welcome @kripper,

In case it helps:

Human feedback feature in multi-agents

We experimented with this idea very briefly a while back when we added a field to the multi_agents task.json file, titled:

  • include_human_feedback: true/false

In short, setting this value to true will enable you to provide feedback to the researcher after it generates the subqueries it will run the research on.
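
For example, a minimal multi_agents/task.json with the flag enabled might look like this; include_human_feedback is the field in question, and the other keys are illustrative placeholders rather than a complete or authoritative schema:

```json
{
  "query": "a solution that uses methods (like A or B) to solve C",
  "model": "gpt-4o",
  "verbose": true,
  "include_human_feedback": true
}
```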

The feedback you provide should enable the researcher to narrow in on what you actually care about.

You can explore this piece deeper in the Editor Agent - search for feedback_instruction.
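
As a rough sketch of the general pattern (hypothetical code, not the actual Editor Agent implementation), the collected feedback might be folded into the planning prompt along these lines:

```python
# Hypothetical illustration of the feedback pattern; the real logic
# lives in the multi_agents Editor Agent (search for feedback_instruction).

def build_planner_prompt(query: str, human_feedback: str | None) -> str:
    prompt = f"Plan research subqueries for: {query}"
    if human_feedback:
        # Fold the user's feedback into the instructions so the next
        # round of subqueries narrows in on what the user cares about.
        prompt += f"\nHuman feedback on the previous plan: {human_feedback}"
    return prompt
```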

This should be supported by the CLI and by the full-stack Next.js app on localhost:3000 when you run with Docker.

Possibility of adding a voice interface

This is an interesting area to explore.

We're open to ideas around this topic.

I think integrating with something like a Hume.ai Voice Interface could be very interesting.

This Voice Agent can also integrate with Tools. A tool in this context is a custom function that the Voice Agent can run at will; in other words, we could give the Voice Agent the ability to run a GPT-Researcher report after it gathers a deeper set of specs on your area of interest.
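
As a sketch of what such a tool function might look like, assuming GPT-Researcher's documented Python API (the Hume-side tool registration is omitted, since it depends on their SDK):

```python
# Sketch of a tool the Voice Agent could invoke once it has gathered
# the user's specs. Uses gpt-researcher's Python API; wiring this
# coroutine into Hume's tool-calling interface is not shown here.

from gpt_researcher import GPTResearcher

async def run_research_report(query: str) -> str:
    researcher = GPTResearcher(query=query, report_type="research_report")
    await researcher.conduct_research()      # gather and curate sources
    return await researcher.write_report()   # produce the final report
```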

Jah bless 🛹
