Amazon Is Using Specialized AI Agents for Deep Bug Hunting | EUROtoday


As generative AI accelerates the pace of software development, it is also amplifying the ability of digital attackers to carry out financially motivated or state-backed hacks. This means that security teams at tech companies have more code than ever to review while facing ever more pressure from bad actors. On Monday, Amazon will publish details for the first time of an internal system known as Autonomous Threat Analysis (ATA), which the company has been using to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly search for other, similar flaws, and then develop remediations and detection capabilities to plug holes before attackers find them.

ATA was born out of an internal Amazon hackathon in August 2024, and security team members say that it has grown into an essential tool since then. The key idea underlying ATA is that it is not a single AI agent developed to comprehensively conduct security testing and threat analysis. Instead, Amazon developed multiple specialized AI agents that compete against one another in two teams to rapidly investigate real attack techniques and the different ways they could be used against Amazon's systems, and then propose security controls for human review.

“The initial concept was aimed to address a critical limitation in security testing—limited coverage and the challenge of keeping detection capabilities current in a rapidly evolving threat landscape,” Steve Schmidt, Amazon’s chief security officer, tells WIRED. “Limited coverage means you can’t get through all of the software or you can’t get to all of the applications because you just don’t have enough humans. And then it’s great to do an analysis of a set of software, but if you don’t keep the detection systems themselves up to date with the changes in the threat landscape, you’re missing half of the picture.”

As part of scaling its use of ATA, Amazon developed special “high-fidelity” testing environments that are deeply realistic reflections of Amazon’s production systems, so ATA can both ingest and produce real telemetry for analysis.

The company’s security teams also made a point to design ATA so every technique it employs, and detection capability it produces, is validated with real, automatic testing and system data. Red team agents that are working on finding attacks that could be used against Amazon’s systems execute actual commands in ATA’s special test environments that produce verifiable logs. Blue team, or defense-focused agents, use real telemetry to confirm whether the protections they are proposing are effective. And anytime an agent develops a novel technique, it also pulls time-stamped logs to prove that its claims are accurate.
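The validation loop described above can be sketched in miniature. This is a hypothetical illustration, not Amazon's actual ATA code: the class names (`SandboxEnv`, `RedAgent`, `BlueAgent`) and the example technique strings are invented for this sketch, which only shows the general pattern of requiring time-stamped telemetry before any claim or detection is accepted.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SandboxEnv:
    """Stand-in for a high-fidelity test environment that records telemetry."""
    telemetry: list = field(default_factory=list)

    def execute(self, technique: str) -> dict:
        # Executing a technique produces a time-stamped, verifiable log entry.
        entry = {"technique": technique, "timestamp": time.time()}
        self.telemetry.append(entry)
        return entry

class RedAgent:
    """Attack-focused agent: must back every claim with an execution log."""
    def attempt(self, env: SandboxEnv, technique: str) -> dict:
        log = env.execute(technique)
        return {"claim": technique, "evidence": log}

class BlueAgent:
    """Defense-focused agent: validates proposed detections against telemetry."""
    def propose_detection(self, env: SandboxEnv, technique: str) -> bool:
        # A detection is accepted only if the telemetry contains an
        # observable trace of the claimed technique.
        return any(e["technique"] == technique for e in env.telemetry)

def validate(red_finding: dict, env: SandboxEnv) -> bool:
    """Reject any claim that lacks a matching entry in the recorded logs."""
    return red_finding["evidence"] in env.telemetry

env = SandboxEnv()
red, blue = RedAgent(), BlueAgent()

finding = red.attempt(env, "suspicious-credential-read")  # hypothetical technique name
assert validate(finding, env)              # claim is backed by a real log entry
assert blue.propose_detection(env, "suspicious-credential-read")
assert not blue.propose_detection(env, "made-up-technique")  # unverified claim rejected
```

The design choice the sketch mirrors is that verification is structural, not optional: an agent's output is only accepted if it carries evidence that can be checked against the environment's own records.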

This verifiability reduces false positives, Schmidt says, and acts as “hallucination management.” Because the system is constructed to demand sure requirements of observable proof, Schmidt claims that “hallucinations are architecturally impossible.”

https://www.wired.com/story/amazon-autonomous-threat-analysis/