Prompt expansion feature #92
Starting point: reading this recent NAACL student workshop paper: https://aclanthology.org/2024.naacl-srw.2.pdf
To start, an early version of a rephrasal pipeline could look something like the one that creates and runs an LLM-as-a-judge evaluation:
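As a very rough standalone sketch of what that rephrasal step could do (this is not prompto's actual API; the JSONL layout with a "prompt" field per line, the script interface, the model name, and the rephrasal template are all assumptions here):

```python
import argparse
import json

from openai import OpenAI  # assumption: an OpenAI-compatible model does the rephrasing

REPHRASE_TEMPLATE = (
    "Rephrase the following prompt while keeping its meaning and intent. "
    "Return only the rephrased prompt.\n\nPrompt: {prompt}"
)


def rephrase(client: OpenAI, prompt: str, model: str = "gpt-4o-mini") -> str:
    """Ask the rephrasal model for a single variant of `prompt`."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": REPHRASE_TEMPLATE.format(prompt=prompt)}],
    )
    return response.choices[0].message.content.strip()


def main() -> None:
    parser = argparse.ArgumentParser(description="Write a rephrased copy of an experiment file.")
    parser.add_argument("--input-file", required=True, help="original experiment .jsonl file")
    parser.add_argument("--output-file", required=True, help="where to write the rephrased prompts")
    args = parser.parse_args()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(args.input_file) as f_in, open(args.output_file, "w") as f_out:
        for line in f_in:
            entry = json.loads(line)
            # assumption: each line is a JSON dict with a "prompt" field
            entry["prompt"] = rephrase(client, entry["prompt"])
            f_out.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    main()
```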
To implement:
We can try this out for a bit before implementing this into `prompto` itself.
Are we expecting the rephrasal script to generate a new file where each line contains only a rephrased prompt, or one where each original prompt is followed on the next line by its rephrased version? I think the second would be more useful for our work, but maybe the first is better for general applications?
Is the only difference whether or not the original prompt is included in the new file? We can make that a flag on the command controlling whether the original prompt is duplicated into the new file, with the default being that it is included.
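A sketch of that flag idea (the flag, key names, and defaults here are placeholders, not an agreed interface): the writer emits either just the rephrased line, or the original line followed immediately by its rephrased variant.

```python
import json


def write_variant(entry: dict, rephrased_prompt: str, f_out, include_original: bool = True) -> None:
    """Write one or two JSONL lines for a single input prompt.

    With include_original=True (the proposed default) the original prompt is written
    first and the rephrased prompt follows on the next line; otherwise only the
    rephrased prompt is written.
    """
    if include_original:
        f_out.write(json.dumps(entry) + "\n")
    # tag the rephrased line so downstream analysis can tell the two apart
    f_out.write(json.dumps({**entry, "prompt": rephrased_prompt, "rephrased": True}) + "\n")
```

On the command line this could then surface as something like an `--include-original` / `--no-include-original` pair (names purely illustrative).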
Add functionality to obtain variants of prompts in order to better explore the input space.
Can potentially be linked to #82 for "chaining" prompto runs, but for this idea, rather than creating a new experiment after an initial run, we would instead do an initial "expansion" run to obtain a larger number of prompts to send.
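Purely to illustrate how an "expansion" run could sit in front of the main run (function and key names are hypothetical, not an existing prompto interface): the expansion step reads the original experiment file and writes a larger one, which the normal run then consumes, much like the chaining idea in #82.

```python
import json
from typing import Callable, Iterable


def expand_experiment_file(
    input_path: str,
    output_path: str,
    generate_variants: Callable[[str, int], Iterable[str]],
    n_variants: int = 3,
    keep_original: bool = True,
) -> None:
    """Turn each prompt in a .jsonl experiment file into several variant prompts.

    `generate_variants` is any callable returning `n_variants` rephrasings of a prompt
    (e.g. an LLM-based rephrase call); the output file then holds the enlarged set of
    prompts for the main run to send.
    """
    with open(input_path) as f_in, open(output_path, "w") as f_out:
        for line in f_in:
            entry = json.loads(line)  # assumption: each line is a JSON dict with a "prompt" field
            if keep_original:
                f_out.write(json.dumps(entry) + "\n")
            for variant in generate_variants(entry["prompt"], n_variants):
                # keep a pointer back to the original prompt for later analysis
                f_out.write(json.dumps({**entry, "prompt": variant, "variant_of": entry["prompt"]}) + "\n")
```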