Amazon uses specialized AI agents to hunt for deep bugs
As generative artificial intelligence increases the speed of software development, it also increases the ability of digital attackers to carry out financially motivated or state-sponsored hacks. That means security teams at tech companies have more code to review than ever before, while also facing more pressure from bad actors. On Monday, Amazon released details for the first time about an internal system called Autonomous Threat Analysis (ATA), which the company built to help its security teams proactively identify weaknesses in its platforms, perform variant analysis to quickly look for other, similar flaws, and then develop fixes and detection capabilities to close holes before attackers find them.
ATA was born out of an internal Amazon hackathon in August 2024, and members of the security team say it has since become a critical tool. The key concept underlying ATA is that it is not a single AI agent that comprehensively performs security testing and threat analysis. Instead, Amazon developed multiple specialized AI agents that compete against one another in opposing teams to quickly investigate real-world attack techniques and the different ways they could be used against Amazon’s systems — and then recommend security controls for human review.
“The initial concept was aimed at addressing a critical limitation in security testing — limited coverage and the challenge of keeping detection capabilities current in a rapidly evolving threat landscape,” Steve Schmidt, Amazon’s chief security officer, told WIRED. “Limited coverage means you can’t use all the software or you can’t access all the programs because you just don’t have enough humans. And then analyzing a suite of software is great, but if you don’t keep your detection systems up to date with changes in the threat landscape, you’re missing half the picture.”
To deploy ATA at scale, Amazon developed special “high-fidelity” test environments that closely mirror its production systems, so ATA can both consume and produce real-world telemetry for analysis.
The company’s security teams also emphasized that ATA is designed so that every technique it uses, and every detection capability it produces, is validated with real, automated tests and system data. Red-team agents, which work to find attacks that could be used against Amazon’s systems, run real commands in ATA-specific test environments that generate verifiable reports. Blue-team, or defense-focused, agents use real-time telemetry to verify whether the protections they propose are effective. And whenever an agent develops a new technique, it also pulls time-stamped logs to prove the validity of its claims.
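The evidence-gated loop described above can be sketched in miniature. This is a hypothetical illustration, not Amazon’s implementation: all names (`TestEnvironment`, `validate_detection`, the sample commands) are assumptions made up for the example. The core idea it shows is that a red-team action must leave a time-stamped log entry, and a blue-team detection is only accepted if it actually fires on that recorded telemetry.

```python
# Hypothetical sketch of an evidence-gated red-team/blue-team loop.
# Names and structure are illustrative assumptions, not Amazon's actual system.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class TelemetryEvent:
    timestamp: str
    command: str

@dataclass
class TestEnvironment:
    """Stands in for an isolated, high-fidelity test environment."""
    log: list = field(default_factory=list)

    def run(self, command: str) -> TelemetryEvent:
        # Every red-team action produces a time-stamped log entry,
        # so later claims can be checked against observable evidence.
        event = TelemetryEvent(datetime.now(timezone.utc).isoformat(), command)
        self.log.append(event)
        return event

def validate_detection(env: TestEnvironment,
                       detection: Callable[[TelemetryEvent], bool]) -> bool:
    """Accept a proposed detection only if it matches recorded telemetry."""
    return any(detection(event) for event in env.log)

# Red-team agent runs a simulated attack step in the test environment.
env = TestEnvironment()
env.run("curl http://169.254.169.254/latest/meta-data/")

# Blue-team candidate detection: flag requests to the metadata endpoint.
detects_metadata_probe = lambda e: "169.254.169.254" in e.command
print(validate_detection(env, detects_metadata_probe))          # True: backed by logs
print(validate_detection(env, lambda e: "mimikatz" in e.command))  # False: no evidence
```

The design point the sketch captures is that a claim with no matching telemetry is rejected outright, which is the mechanism the next paragraph describes as keeping hallucinations out of the results.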
This verification capability reduces false positives and acts as “hallucination management,” says Schmidt. Because the system is built to demand certain standards of observable evidence, Schmidt claims that “hallucinations are architecturally impossible.”