Just wanted to say: amazing work here! ACR is one of the few projects I've seen that leverages test cases, which I think is incredible and crucial to truly autonomous code generation and tooling. I did have some questions/thoughts and wanted to know what the future looks like for ACR.
Questions
- What languages are supported for the AST? Is it just Python, or is support for other languages planned? For reference, aider builds its repo maps from ASTs (https://aider.chat/2023/10/22/repomap.html) using tree-sitter, which supports many languages (see the first sketch after this list).
- Any plans for discriminators/MCTS? (A minimal generate-then-rerank sketch follows this list.)
- https://github.com/zhentingqi/rStar
- Actor Critic Regenerator pattern https://huggingface.co/LoneStriker/HelixNet-critic-4.0bpw-h6-exl2
- Tree of thought patterns via Chain of Thought https://x.com/Arcturus_f/status/1739762147859525835/photo/1 combined with Actor-Generator-Discriminators for Graph of Thought?
- From 10 months ago, but "reflecting" between two models improved quality: https://www.reddit.com/r/LocalLLaMA/comments/180uz42/today_is_the_first_day_im_getting_results/
- Any plans for selecting a model based on language? It appears that some models are better than others for certain languages (Gemma is quite good at TypeScript despite being much smaller than other models); see the routing sketch after this list.
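To make the tree-sitter idea concrete, here's a minimal sketch of the kind of extraction a repo map needs, assuming the third-party `tree_sitter_languages` package (`pip install tree-sitter-languages`). This is my own illustration, not ACR code:

```python
# Hypothetical sketch: extract top-level function names from any
# tree-sitter-supported language. Assumes tree-sitter-languages is installed.
from tree_sitter_languages import get_parser

SOURCE = b"""
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
"""

def list_functions(source: bytes, language: str = "python") -> list[str]:
    parser = get_parser(language)          # one call per supported language
    tree = parser.parse(source)
    names = []
    for node in tree.root_node.children:   # walk only top-level nodes
        if node.type == "function_definition":
            ident = node.child_by_field_name("name")
            if ident is not None:
                names.append(source[ident.start_byte:ident.end_byte].decode())
    return names

print(list_functions(SOURCE))  # ['add', 'sub']
```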
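On the discriminator question, the simplest version of the idea is generate-then-rerank. Here's a hedged sketch where `generate_patch` and `score_patch` are hypothetical stand-ins for an actor model and a critic model, not real ACR functions (a full MCTS would apply the same scoring to partial patches in a tree; best-of-n is the degenerate single-level case):

```python
# Hypothetical actor/critic reranking loop; generate_patch and score_patch
# are stand-ins for LLM calls, not ACR APIs.
import random

def generate_patch(issue: str, seed: int) -> str:
    """Actor: stub that would call a generator model with a sample seed."""
    random.seed(seed)
    return f"patch-candidate-{random.randint(0, 999)} for: {issue}"

def score_patch(patch: str) -> float:
    """Critic: stub that would ask a discriminator model to rate the patch."""
    return random.random()

def best_of_n(issue: str, n: int = 8) -> str:
    candidates = [generate_patch(issue, seed) for seed in range(n)]
    return max(candidates, key=score_patch)   # keep the highest-scored patch

print(best_of_n("fix off-by-one in pagination"))
```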
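And per-language model selection is mechanically trivial; the hard part is the evaluation data behind the table. A sketch, with placeholder model names that are illustrative only:

```python
# Hypothetical extension -> model routing table; entries are illustrative,
# not benchmarked recommendations.
from pathlib import Path

MODEL_BY_SUFFIX = {
    ".py": "model-tuned-for-python",
    ".ts": "gemma",          # the point above: small but strong on TypeScript
    ".tsx": "gemma",
    ".rs": "model-tuned-for-rust",
}
DEFAULT_MODEL = "general-purpose-model"

def pick_model(path: str) -> str:
    return MODEL_BY_SUFFIX.get(Path(path).suffix, DEFAULT_MODEL)

print(pick_model("src/app.tsx"))  # gemma
```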
Project Organization
- I had a few questions/ideas and was wondering whether they are on the roadmap, and whether it would be possible to publish the roadmap on GitHub if there is one (this can be done via GitHub Projects: https://docs.github.com/en/issues/planning-and-tracking-with-projects/customizing-views-in-your-project/customizing-the-roadmap-layout).
Requests
- Can options be added to ensure that any code added (feature or fix) includes corresponding generated test cases? (Maybe generate tests from the spec first, then generate code against the spec and check it against those tests? A sketch of this loop follows this list.)
- There are tools, in the case of JavaScript, where running `jest test --coverage` (Python's coverage does the same: https://coverage.readthedocs.io/en/7.6.1/) produces a coverage report in which the lines not covered by tests are surfaced automatically. Can this be used as an input to increase test coverage of the codebase? (i.e. the system sees all uncovered lines of code and then generates test cases for them, ensuring that any new code added cannot introduce unknown bugs; see the coverage sketch below)
- In the case of Python, `mypy` can be used to add types, and running mypy checks whether the code adheres to the provided type spec. The same exists in TypeScript. Can the output of running mypy (for Python), or eventually TypeScript/Biome errors, be used as an input alongside the tests to improve the quality of returned code? (see the mypy sketch below)
- Could build/compile-time errors be used as inputs as well, whether for Swift, Rust, etc.? For anyone not familiar: in many languages such as Swift or Rust, build errors from the language server/Xcode/compiler tell you why something won't work, alongside suggested changes to fix it.
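To sketch what I mean by "tests first, then code": `generate_tests` and `generate_impl` below are hypothetical model calls; the only real tool invoked is pytest. This is an illustration of the loop, not a proposal for ACR's actual API:

```python
# Hypothetical spec -> tests -> code loop. generate_tests/generate_impl are
# stand-ins for LLM calls; pytest is invoked for real via subprocess.
import subprocess
import sys
import tempfile
from pathlib import Path

def generate_tests(spec: str) -> str:
    return "def test_add():\n    from impl import add\n    assert add(2, 3) == 5\n"

def generate_impl(spec: str, attempt: int) -> str:
    return "def add(a, b):\n    return a + b\n"

def run_until_green(spec: str, max_attempts: int = 3) -> bool:
    workdir = Path(tempfile.mkdtemp())
    (workdir / "test_spec.py").write_text(generate_tests(spec))  # tests are fixed first
    for attempt in range(max_attempts):
        (workdir / "impl.py").write_text(generate_impl(spec, attempt))
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q", str(workdir)],
            capture_output=True, text=True,
        )
        if result.returncode == 0:   # tests pass: accept this patch
            return True
        # In a real loop, result.stdout would be fed back to the model here.
    return False

print(run_until_green("add(a, b) returns the sum of a and b"))
```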
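For the coverage request, coverage.py already exposes uncovered lines programmatically, so the plumbing is small. A sketch assuming `pip install coverage` and a prior `coverage run -m pytest`; `propose_test_for` is a hypothetical model call:

```python
# Sketch: read an existing .coverage data file and surface uncovered lines
# as prompts for a (hypothetical) test generator.
from coverage import Coverage

def uncovered_lines(source_file: str) -> list[int]:
    cov = Coverage()
    cov.load()                                   # reads the .coverage data file
    _, _, _, missing, _ = cov.analysis2(source_file)
    return missing                               # line numbers with no coverage

def propose_test_for(source_file: str, line: int) -> str:
    # Hypothetical: would prompt a model with the function containing `line`.
    return f"# TODO: generated test targeting {source_file}:{line}"

for line in uncovered_lines("mypkg/core.py"):
    print(propose_test_for("mypkg/core.py", line))
```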
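For type-checker output as a repair signal, mypy ships a programmatic API, so capturing diagnostics is a one-liner; the repair step below is hypothetical. The same shape would work for `tsc --noEmit`, `cargo check`, or swiftc diagnostics captured via subprocess:

```python
# Sketch: run mypy programmatically and hand its diagnostics to a
# (hypothetical) repair step. Assumes `pip install mypy`.
from mypy import api as mypy_api

def type_errors(path: str) -> list[str]:
    stdout, _stderr, exit_status = mypy_api.run([path])
    if exit_status == 0:
        return []                       # clean: nothing to feed back
    return [line for line in stdout.splitlines() if ": error:" in line]

def repair_with_feedback(path: str, errors: list[str]) -> None:
    # Hypothetical: would re-prompt the model with the diagnostics attached.
    for err in errors:
        print("feed back to model:", err)

repair_with_feedback("mypkg/core.py", type_errors("mypkg/core.py"))
```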
Really appreciate all the work done here, and excited to see if there is any way to contribute. If and when AutoCodeRover advances enough... it can be used to improve AutoCodeRover itself :)