One of the major issues with AI programming agents is the context window: agents perform better with 128k+ context models than with 4k or 8k ones. Does ACR have the same limitation? Gemma-2-27b-it is great and performs on par with GPT-4 variants, but it has only an 8k context window. The 30% score on SWE-bench Lite was achieved with GPT-4o, which is a 128k-context model. Is it possible to reach roughly 25% with Gemma, an 8k model?
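For anyone experimenting with this, a quick way to estimate how often ACR's prompts would overflow an 8k window is to count tokens with the target model's tokenizer before sending anything. Below is a minimal sketch (not part of ACR) using the Hugging Face tokenizer; the model id, budget constant, and `fits_in_context` helper are assumptions for illustration, and the Gemma checkpoint may require accepting the license on the Hub first:

```python
# Minimal sketch: check whether a prompt fits in an 8k context budget.
# Assumptions: model id "google/gemma-2-27b-it", an 8192-token window,
# and a reserve of 1024 tokens for the model's reply.
from transformers import AutoTokenizer

CONTEXT_BUDGET = 8192      # Gemma-2-27b-it context window (tokens)
RESPONSE_RESERVE = 1024    # tokens kept free for the generated answer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")

def fits_in_context(prompt: str) -> bool:
    """Return True if the prompt plus the reply reserve fits in 8k tokens."""
    n_tokens = len(tokenizer.encode(prompt))
    return n_tokens + RESPONSE_RESERVE <= CONTEXT_BUDGET
```

Running this over the prompts ACR actually builds (retrieved code context, issue text, agent instructions) would show how much truncation or summarization an 8k model would force, which is presumably where the score gap would come from.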