Really wonderful work!
I notice that SWE-bench evaluation requires files including:

- `eval.sh`: the evaluation script
- `patch.diff`: the model's generated prediction
- `report.json`: a summary of the evaluation outcomes for the instance
- `run_instance.log`: a log of the SWE-bench evaluation steps
- `test_output.txt`: the output of running `eval.sh` on `patch.diff`
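For context, my understanding is that these per-instance files are produced by the SWE-bench evaluation harness (`swebench.harness.run_evaluation`). A minimal sketch of invoking it on a predictions file, where the dataset name, predictions path, and run id are placeholders:

```shell
# Sketch, assuming swebench is installed and Docker is available;
# predictions.json and my-eval-run are placeholders.
# The harness applies each model_patch, runs eval.sh, and writes the
# per-instance artifacts (run_instance.log, test_output.txt, report.json).
python -m swebench.harness.run_evaluation \
    --dataset_name princeton-nlp/SWE-bench_Lite \
    --predictions_path predictions.json \
    --max_workers 4 \
    --run_id my-eval-run
```

Each entry in the predictions file would need the standard `instance_id`, `model_name_or_path`, and `model_patch` fields.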
However, in AutoCodeRover we only get the result JSON and `patch.diff`. How can we get `test_output.txt`?
Thanks a lot!