AI in Software Testing
We're exploring an idea to tackle inefficiencies in our testing workflow and would love the HN community's thoughts.
Currently, our process involves writing test documentation and plans upfront, implementing the tests, and then manually reporting failures back to the dev team for the backlog. This feels slow and adds overhead.
Our proposed solution is a tool, likely accessed via a Slack bot, that would:
- Connect to a code repository.
- Analyze dependencies.
- Use tree-sitter to parse the code, understand function signatures, and map relationships.
- Automatically generate initial test cases (e.g., boilerplate, basic path coverage) for the discovered functions.
The goal is to cut down the manual documentation step and streamline failure reporting, creating a tighter feedback loop.

Is this a worthwhile problem to solve? What obvious pitfalls or challenges might we be missing? Are there existing tools that already do this well? Could this work as a standalone product?