Test / promptfoo Issues #872
Comments
I suspect we leak some logging into `process.stdout`, which breaks the JSON output. I will investigate.
(The formatting is unfortunate in promptfoo, but there is no way to customize how it is rendered in the CLI.)
Troubleshooting: Experiment with Ollama

I tried another experiment this AM. The script runs and succeeds with 🦙 locally serving the default model. The test-running approaches, though, now show an error message. I experimented with setting my environment variables and adding placeholders, without success, but my understanding is those aren't needed for Ollama.
The issue is that rubric or facts assertions are LLM-as-judge style assertions, and they require further configuration of the judge LLM. This is not well supported by genaiscript yet (oops). The promptfoo docs at https://www.promptfoo.dev/docs/configuration/expected-outputs/model-graded/llm-rubric/#overriding-the-llm-grader have more info. To debug the promptfoo issues further, you can take a look at the .yml files GenAIScript drops on disk.
I need to add an escape hatch to allow configuring the promptfoo grader.
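For reference, the grader override described in the promptfoo docs linked above takes roughly this shape in a promptfoo YAML config. This is a sketch: the `azureopenai` provider id, the deployment name, and the host below are illustrative assumptions, not values from this thread.

```yaml
# Override the model used for llm-rubric / factuality grading
# instead of promptfoo's default grader.
defaultTest:
  options:
    provider:
      id: azureopenai:chat:gpt-4o            # hypothetical Azure deployment name
      config:
        apiHost: my-resource.openai.azure.com # hypothetical endpoint
```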
@pl-shernandez 1.76.2 fixes the JSON parsing issue, which was caused by "pollution" of `process.stdout` when we pipe the JSON out. The issue of properly configuring promptfoo for rubric testing is still somewhat relevant. For example, I haven't seen how promptfoo handles running Azure with Microsoft Entra yet. Maybe it's time to remove this dependency and run it all in genaiscript.
@pelikhan Spent some time working with this today to better understand it. I was able to get the call working by adding these environment variables to my `.env`. I also put in the cache flag, and modified the command that runs when right-clicking to run tests or when the Test Explorer is used.
Thank you for the investigation. I'll be looking into the mods.
Please try 1.77.3 or above. The cache is gone and the configuration in the generated .yml file is more precise. You'll still need the .env variables; I'll see about supporting them in GenAIScript as well so you don't have to duplicate them.
Issue

I'm having some issues getting the `test` functionality to succeed, so I stripped it down to a very simple script. The issue occurs when using any of `npx genaiscript test add`, right-clicking the script and choosing "Run Tests", or the VS Code Test Explorer. My teammate is experiencing the same issue.

Script
I made this barebones script, which works just fine when run, and I know that the model is deployed.
add.genaiscript.mjs
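(The script body did not survive the copy. For context, a barebones genaiscript test script generally looks something like the sketch below, using the `script()` metadata API; the prompt text and rubric wording are illustrative, not the reporter's actual file, and this only runs inside the genaiscript runtime.)

```js
script({
  title: "add",
  model: "azure:gpt-4o",
  tests: {
    // hypothetical rubric; graded by promptfoo's LLM judge
    rubrics: "output contains the correct sum of the two numbers",
  },
})

$`Add 1 and 3. Respond with just the number.`
```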
Errors
The errors I receive in the terminal when running via right-clicking the script or via the Test Explorer:
and in the PromptFoo UI
Similarly, if I run `npx genaiscript test add`.
Troubleshooting
I tried adding additional variables for the default models in my `.env`, but that didn't help:

AZURE_OPENAI_API_KEY=hidden
AZURE_OPENAI_API_ENDPOINT=hidden
GENAISCRIPT_DEFAULT_MODEL="azure:gpt-4o"
GENAISCRIPT_DEFAULT_SMALL_MODEL="azure:gpt-4o"
GENAISCRIPT_DEFAULT_VISION_MODEL="azure:gpt-4o"
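One quick sanity check for this kind of setup is to confirm the variables are actually visible to the process that launches the tests (assuming a POSIX shell; the value below is just the one from this thread):

```shell
# Fails fast with "not set" if the variable is missing from the environment.
export GENAISCRIPT_DEFAULT_MODEL="azure:gpt-4o"
echo "${GENAISCRIPT_DEFAULT_MODEL:?GENAISCRIPT_DEFAULT_MODEL not set}"
```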
Tried adding `promptfoo` as a dependency with `npm install -D promptfoo@latest` (capital `-D` saves it as a dev dependency; lowercase `-d` only sets npm's log level).
Tried uninstalling and reinstalling the extension.
Setup
genaiscript: ^1.76.0
VS Code: 1.95.3 (Universal)
macOS: 14.6.1
Going Further
Prior to this, I tried running some more complex prompts with tests, unsuccessfully, using an augmented version of the poem example; I received unterminated JSON errors even with 4096 tokens set. I felt I should ask for help on this simpler case first.