Add Claude Sonnet 4 on Bedrock to Evaluation Runner & run evaluations
Many features on GitLab.com are moving toward using Claude Sonnet 4 as the default model. Because we want to bring Sonnet 4 support to self-hosted Duo as well, the first step is to add the model to the Evaluation Runner and establish its evaluation scores.
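As a rough illustration of the integration step, the sketch below invokes Claude Sonnet 4 on Bedrock through the boto3 Converse API before any Evaluation Runner wiring. The model ID and region here are assumptions and should be confirmed against the Bedrock model catalog:

```python
import boto3

# Hypothetical smoke test: call Claude Sonnet 4 on Bedrock via the
# Converse API. The model ID below is an assumption; confirm the exact
# identifier (or cross-region inference profile) in the Bedrock console.
MODEL_ID = "us.anthropic.claude-sonnet-4-20250514-v1:0"

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Say hello."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.0},
)

print(response["output"]["message"]["content"][0]["text"])
```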
Definition of Done
- Each model can be used to support the feature on all supported platforms.
- Examine individual inputs and outputs that scored poorly (scores of 1-2); look for and document any patterns of poor feature performance or poor LLM judge calibration, then iterate on the model prompt to eliminate patterns of poor performance (see the triage sketch after this list).
- Achieve less than 20% poor answers (defined as 1s and 2s from an LLM judge, or a cosine similarity below 0.8) with each supported model in the areas where we have supporting validation datasets.
- Quality results, based on LLM judge scores (1-4) and/or cosine similarity, are recorded in this issue's comments as distributions: for LLM judges, buckets of 1s, 2s, 3s, and 4s; for cosine similarity, buckets of 0.9 and above, 0.8-0.89, 0.7-0.79, and so on (see the bucketing sketch after this list).
- The traffic light system for self-hosted models has been updated to include the scores, and the documentation has been updated to reflect any changes.
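For the triage and threshold items above, a minimal sketch of what the analysis could look like, assuming the Evaluation Runner results are exported to a CSV. The file name and column names (`judge_score`, `cosine_similarity`) are hypothetical, not the runner's actual schema:

```python
import pandas as pd

# Hypothetical per-case results export from the Evaluation Runner.
results = pd.read_csv("evaluation_results.csv")

# Cases counted as "poor": judge scores of 1-2, or cosine similarity < 0.8.
poor = results[
    (results["judge_score"] <= 2) | (results["cosine_similarity"] < 0.8)
]

poor_rate = len(poor) / len(results)
print(f"Poor-answer rate: {poor_rate:.1%} (target: < 20%)")

# Dump the worst cases for manual pattern review (feature performance
# vs. judge calibration issues).
poor.sort_values("judge_score").to_csv("poor_cases_for_review.csv", index=False)
```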
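And for recording distributions in the comments, a sketch of the bucketing under the same hypothetical export format:

```python
import pandas as pd

results = pd.read_csv("evaluation_results.csv")

# LLM judge distribution: counts of 1s, 2s, 3s, and 4s.
judge_dist = (
    results["judge_score"].value_counts().reindex([1, 2, 3, 4], fill_value=0)
)
print(judge_dist.to_string())

# Cosine-similarity buckets: 0.9 and above, 0.8-0.89, 0.7-0.79, and so on.
bins = [0.0, 0.5, 0.6, 0.7, 0.8, 0.9, 1.01]
labels = ["<0.5", "0.5-0.59", "0.6-0.69", "0.7-0.79", "0.8-0.89", "0.9+"]
cos_dist = pd.cut(
    results["cosine_similarity"], bins=bins, labels=labels, right=False
)
print(cos_dist.value_counts().sort_index().to_string())
```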