This directory contains an experimental script for running prompt evaluation tests on extensions and prompts under //agents. It currently only works locally and will make temporary changes to your Chromium repo.
Existing tests can be run via the //agents/testing/eval_prompts.py script. It should handle everything automatically, although it is advised to commit any changes before running this script. It will automatically retrieve a temporary copy of promptfoo, perform repo setup, run configured tests, and perform teardown.
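For example, before a run you can snapshot any in-progress work (a minimal sketch; the commit message is arbitrary):

```shell
# Check for uncommitted changes the script could clobber.
git status --short
# Commit tracked changes so they can be recovered afterwards.
git add -u && git commit -m "WIP before prompt evals"
```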
By default, the script builds promptfoo from ToT, but this behavior can be configured via command line arguments, including using stable releases from npm, which will likely make setup faster.
If you are running eval_prompts.py on a system without a container runtime such as Docker or Podman, you will need to pass the --no-sandbox flag, since the script sandboxes the test environment in a container by default.
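A run without container sandboxing might then look like the following (a sketch: the checkout path is an assumption, and whether the script is invoked directly or through python3 depends on your setup):

```shell
cd ~/chromium/src  # your Chromium checkout (path assumed)
python3 agents/testing/eval_prompts.py --no-sandbox
```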
The prompt evals are intended to be run against a Chromium checkout on a btrfs file system. The tests should still run in a normal checkout, but they will be significantly slower and take up significantly more disk space. The following commands fetch a new Chromium solution into a virtual btrfs file system mounted in your home directory:
```shell
# Ensure btrfs is installed
sudo apt install btrfs-progs

# Create the virtual image file
truncate -s 500G ~/btrfs_virtual_disk.img

# Format the image with btrfs
mkfs.btrfs ~/btrfs_virtual_disk.img

# Mount the image
mkdir ~/btrfs
sudo mount -o loop ~/btrfs_virtual_disk.img ~/btrfs

# Update owner
sudo chown $(whoami):$(id -ng) ~/btrfs

# Create a btrfs subvolume for the checkout
btrfs subvolume create ~/btrfs/chromium

# Fetch a new Chromium checkout into the subvolume.
# This will place the 'src' directory inside '~/btrfs/chromium/'.
cd ~/btrfs/chromium
fetch chromium

# For an existing checkout, you would instead move the contents, e.g.:
# mv ~/your_old_chromium/* ~/btrfs/chromium/

# (Optional) To make the mount permanent, add it to /etc/fstab.
# It's wise to back up this critical file first.
cp /etc/fstab ~/fstab.bak
echo "$HOME/btrfs_virtual_disk.img $HOME/btrfs btrfs loop,defaults 0 0" | sudo tee -a /etc/fstab
```
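Before fetching, you can sanity-check that the loop mount and the subvolume are in place with standard tooling:

```shell
# The mount point should show up with fstype btrfs.
findmnt ~/btrfs
# The 'chromium' subvolume should be listed.
sudo btrfs subvolume list ~/btrfs
```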
After Chromium is checked out, agents/testing/eval_prompts.py can then be run from ~/btrfs/chromium/src/.
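A run from the btrfs checkout then looks like this (again a sketch; substitute your preferred way of invoking the script):

```shell
cd ~/btrfs/chromium/src
python3 agents/testing/eval_prompts.py
```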
The script only installs the extensions in the EXTENSIONS_TO_INSTALL list at the top of the file. If an extension should be present for testing, add the extension name to this list.
Each independent test case should have its own promptfoo yaml config file. See the promptfoo documentation for more information on this. If multiple prompts are expected to result in the same behavior, and thus can be tested in the same way, the config file can contain multiple prompts. promptfoo will automatically test each prompt individually.
Config files should be placed in a tests/promptfoo/ subdirectory of the relevant prompt or extension directory. Once they exist on disk, new yaml files also need to be added to the PROMPTFOO_CONFIG_COMPONENTS list at the top of the script for the tests to actually run, as sketched below.
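As a sketch of that layout (the extension directory and file names here are made up for illustration):

```shell
# Create the expected subdirectory next to the component being tested.
mkdir -p agents/extensions/my_extension/tests/promptfoo
# Add the new config there...
$EDITOR agents/extensions/my_extension/tests/promptfoo/my_test.promptfoo.yaml
# ...then register it in PROMPTFOO_CONFIG_COMPONENTS at the top of
# agents/testing/eval_prompts.py, or it will not be picked up.
```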
gemini_provider.py supports several custom options for advanced testing scenarios, such as applying file changes or loading specific templates. Below is an example promptfoo.yaml file that demonstrates how to use the changes option to patch and stage files before a test prompt is run.
This example can be used as a template for writing tests that require a specific file state.
`custom_options.promptfoo.yaml`:

```yaml
prompts:
  - "What is the staged content of the file `path/to/dummy.txt`?"

providers:
  - id: "python:../../../testing/gemini_provider.py"
    config:
      extensions:
        - depot_tools
      changes:
        - apply: "path/to/add_dummy_content.patch"
        - stage: "path/to/dummy.txt"

tests:
  - description: "Test with custom options"
    assert:
      # Check that the agent ran git diff and found the new content.
      - type: icontains
        value: "dummy content"
```
The apply entries in the changes field point to standard .patch files, which the test runner applies before the prompt is run; stage entries name files to be staged.
`add_dummy_content.patch`:

```
diff --git a/path/to/dummy.txt b/path/to/dummy.txt
index e69de29..27332d3 100644
--- a/path/to/dummy.txt
+++ b/path/to/dummy.txt
@@ -0,0 +1 @@
+dummy content
```
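To sanity-check a patch like this by hand, mirroring the apply and stage steps from the example config, you can use plain git (paths are the illustrative ones from above):

```shell
# Apply the patch to the working tree.
git apply path/to/add_dummy_content.patch
# Stage the touched file, as the 'stage' entry does.
git add path/to/dummy.txt
# Confirm the staged content the test prompt asks about.
git diff --cached -- path/to/dummy.txt
# Undo when done.
git restore --staged path/to/dummy.txt
git restore path/to/dummy.txt
```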