GTCHO commented on Actory AI – Autonomous QA for Fast-Moving Dev Teams   actory.ai/... · Posted by u/GTCHO
GTCHO · 3 months ago
Hey HN,

We just launched the V1 homepage for Actory AI, where we’re building autonomous software quality assurance agents for startups and scaleups that need to ship fast without sacrificing stability.

Our platform automatically analyzes builds, flags bugs, and provides intelligent test coverage recommendations—no need to write endless test cases or rely on fragile scripts. We’re focused on making QA invisible, intelligent, and integrated into your dev flow.

Check out our new site: https://actory.ai
LinkedIn: https://www.linkedin.com/company/actory-ai

We’re in the early stages and would love your feedback:
• What matters most to you when it comes to automated QA?
• How do you currently handle QA in fast-release cycles?
• What would make this valuable enough for you to adopt?

Thanks in advance for taking a look. Excited to share this journey with you.

— Daryl, CTO @ Actory AI

#ShowHN #QA #DevTools #AutonomousAgents #Startup #AI #SoftwareTesting


GTCHO commented on What if your QA engineer never slept?    · Posted by u/GTCHO
ThrowawayR2 · 3 months ago
If your QA staff are no better than an "AI" agent, dump them and hire better QA staff.
GTCHO · 3 months ago
I hear you, and to be clear: this isn’t about replacing talented QA teams. It’s about offloading the repetitive, pattern-based parts of QA so human testers can focus on more strategic, exploratory, and usability-driven work.

In the case I saw, the agent handled things like regression patterns, diff analysis, and known-risk detection across thousands of past issues. The QA team actually became more valuable because they weren’t stuck rerunning the same test plan for the fifth time that week. It was augmentation, not replacement.

That said, I totally agree: if a team is just rubber-stamping PRs, the issue isn’t automation; it’s expectations and leadership.


GTCHO commented on What if your QA engineer never slept?    · Posted by u/GTCHO
jakedlu · 3 months ago
I think it's an interesting idea, especially if it's just running on production or staging and constantly just trying new flows/testing edge cases. I would be curious about (1) the quality of testing compared to an actual human and (2) the cost involved. Obviously compared to a human salary the cost could get quite high before it became an impediment (also depending on quality). But running an agent 24/7 seems like costs could certainly pile up.
GTCHO · 3 months ago
Really good points. On quality: it’s not replacing human insight, but it is exceptional at pattern recognition and coverage at scale. It catches edge cases that tend to get missed and never forgets past regressions. The best results I’ve seen come from pairing the agent with human QA: the agent does ambient monitoring and flags suspicious behavior, and humans then dig deeper.

Cost-wise, it’s surprisingly reasonable. The version I saw ran in containers that spun up based on commit activity or deploy frequency. So if no one is pushing code, it's idle. But during launches or busy dev cycles, it ramps up. Much cheaper than staffing a full team to maintain 24/7 vigilance.
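The scale-with-activity policy described above can be sketched in a few lines. This is purely illustrative (the function, window, and limits are my own assumptions, not Actory’s actual implementation): count commits in a recent window and size the worker pool accordingly, so an idle repo costs nothing.

```python
from datetime import datetime, timedelta

def desired_workers(commit_times, now, window_minutes=30, max_workers=5):
    """Hypothetical scaling policy: one QA worker per commit seen in the
    last `window_minutes`, capped at `max_workers`; zero when idle."""
    cutoff = now - timedelta(minutes=window_minutes)
    recent = sum(1 for t in commit_times if t >= cutoff)
    return min(recent, max_workers)

now = datetime(2025, 1, 1, 12, 0)
pushes = [now - timedelta(minutes=m) for m in (5, 12, 47)]
print(desired_workers(pushes, now))  # 2 commits inside the 30-min window -> 2
print(desired_workers([], now))      # no activity -> 0, containers stay idle
```

In practice the same shape works whether the backend is Kubernetes, ECS, or plain containers: the trigger is commit/deploy events, and the steady-state cost is zero.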

GTCHO commented on What if your QA engineer never slept?    · Posted by u/GTCHO
turtleyacht · 3 months ago
QA receives whatever gets merged and (what they decide gets) deployed (to test); they cannot block PRs. It would be nice though to make some checks block merge, i.e. Required workflows.

Learning from bugs is amazing. Connect to production support tickets to link code changes to real incidents. When done manually by on-call, there is no other historical context.

Automate estimation with "this story reminds me of stories A, B, C, which were estimated to be X points and took Y days." A link lets folks drill down to code metrics, artifact version, etc.

A QA agent would be remarkable in that it has a complete and total timeline for everything, and can be queried in chat.

GTCHO · 3 months ago
Completely agree. Linking incidents back to code changes is one of the most valuable things a team can do, but it’s rarely done well. In this case, the agent actually learns from that full timeline: production incidents, support tickets, commit diffs. It surfaces patterns you’d never catch manually, like an issue that only appears under high concurrency.
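At its simplest, linking an incident back to candidate commits is an intersection of touched files plus time ordering. A toy sketch (the data shapes and field names are made up for illustration; a real system would also weight by recency and blame data):

```python
# Hypothetical incident-to-commit linking: a commit is a suspect if it
# landed before the incident and touched at least one of the same files.
commits = [
    {"sha": "a1b2c3", "files": {"auth.py"}, "ts": 10},
    {"sha": "d4e5f6", "files": {"search.py"}, "ts": 20},
]
incident = {"id": "INC-7", "files": {"search.py"}, "ts": 25}

def suspects(incident, commits):
    return [c["sha"] for c in commits
            if c["ts"] <= incident["ts"] and c["files"] & incident["files"]]

print(suspects(incident, commits))  # ['d4e5f6']
```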

Also yes on chat querying. One of the most useful parts was letting PMs ask questions like “Has this bug happened since April?” and getting a full trace across releases. The idea of automating grooming using historical story similarity is spot on too. This could easily save teams hours per sprint.
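The "this story reminds me of A, B, C" estimation could be prototyped with nothing fancier than the standard library. A real system would use embeddings rather than string similarity, but the shape is roughly this (stories and point values are invented for the example):

```python
from difflib import SequenceMatcher

# Hypothetical backlog of past stories with their estimated points.
past = [
    ("Add OAuth login flow", 5),
    ("Fix pagination on search results", 2),
    ("Add SAML login flow", 5),
]

def estimate(story, history, k=2):
    """Average the points of the k most similar past stories."""
    scored = sorted(history,
                    key=lambda s: SequenceMatcher(None, story, s[0]).ratio(),
                    reverse=True)
    top = scored[:k]
    return sum(points for _, points in top) / len(top)

print(estimate("Add Google login flow", past))  # nearest are the two login stories -> 5.0
```

From there, a drill-down link per matched story (code metrics, artifact version) is just metadata attached to each history entry.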
