AWS re:Inforce 2023 in Anaheim in June focused significantly on how generative AI changes the security landscape: both the new attack surfaces AI creates and the AI-augmented security tools AWS is building.

AI-generated phishing and its limits

One of the threat categories discussed most at re:Inforce was AI-enhanced spear phishing. LLMs can produce personalised, grammatically correct phishing emails at scale without the manual effort that previously limited spear phishing campaigns. The practical mitigation is not catching bad grammar (AI phishing will not have it) but validating the request through a second channel, regardless of how legitimate the email looks.

Amazon GuardDuty and AI anomaly detection

AWS announced GuardDuty improvements that use machine learning to detect anomalous API call patterns. The premise is that legitimate AWS usage follows patterns, and that deviations from those patterns (lateral movement, unusual IAM role assumption, unexpected data exfiltration) are detectable by models trained on normal behaviour. The false positive rate is the engineering challenge: too many alerts and the security team ignores them; too few and genuine threats slip through.
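GuardDuty's detection models are not public, but the general idea of baselining API call behaviour can be sketched. The function below is a hypothetical illustration, not AWS's method: it scores an observed daily call count for one (principal, API) pair against its historical baseline using a z-score, and the alert threshold is exactly the false-positive knob the talk described.

```python
from statistics import mean, stdev

def anomaly_score(baseline_counts, observed):
    """Z-score of an observed daily API call count against a baseline.

    baseline_counts: historical daily counts for one (principal, API) pair,
    e.g. how often a given role calls s3:GetObject per day.
    """
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # guard against a flat baseline
    return (observed - mu) / sigma

# Hypothetical role that normally makes around five GetObject calls a day.
baseline = [4, 5, 6, 5, 4, 5, 6]
print(anomaly_score(baseline, 5))    # near zero: normal behaviour
print(anomaly_score(baseline, 500))  # very large: plausible exfiltration
```

A real system models far richer features (call sequences, geolocation, time of day), but the trade-off is the same: where you set the score threshold decides whether analysts drown in alerts or miss the genuine deviation.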

The shared responsibility model under AI

When AI services process your data on cloud infrastructure, the shared responsibility model becomes more complex. AWS is responsible for the security of the cloud infrastructure running the AI model. You are responsible for what data you send to the model, who has access to the API, and what the model does with the output. For regulated industries handling PII or PHI, the question of whether AI-processed data retains the same classification as the source data is being worked out by legal counsel, not just cloud security teams.
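The customer-side control point is the payload itself: what leaves your boundary before the AI service sees it. As a minimal sketch (not a substitute for a real DLP or classification service such as Amazon Macie), the hypothetical scrubber below masks two obvious PII patterns before a prompt is sent to a model API:

```python
import re

# Hypothetical pre-send scrub. Regexes catch only the crudest PII patterns;
# real data classification needs dedicated tooling, but the control point --
# transform data before it leaves your account -- is the part you own.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text):
    """Replace recognised PII patterns with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub("Contact jane@example.com, SSN 123-45-6789"))
# → Contact [EMAIL], SSN [SSN]
```

Whether the masked payload then carries a lower classification than the source record is precisely the question the legal teams are still resolving.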

Supply chain attacks targeting AI

AI model weights and fine-tuning pipelines are a new attack surface. Malicious code embedded in a publicly downloaded model's serialised weights (pickle-based formats are the classic vector) can execute during model loading in your inference environment. Hugging Face's malware scanning, reproducible model checkpoints, and hash verification of model files are the emerging mitigations. This is analogous to supply chain security for open source packages, applied to model files.
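Hash verification is the simplest of those mitigations to adopt. A minimal sketch, assuming the model provider publishes a SHA-256 digest for each file: compute the digest of what you actually downloaded and refuse to load on mismatch.

```python
import hashlib

def verify_model(path, expected_sha256):
    """Return True if the file at `path` matches the pinned SHA-256 digest.

    Stream the file in 1 MiB chunks so multi-gigabyte weight files do not
    need to fit in memory. The caller should refuse to load on False.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Hypothetical usage: pin the digest in your deployment config, not in code
# fetched alongside the model itself.
# if not verify_model("model.bin", PINNED_DIGEST):
#     raise RuntimeError("model file failed integrity check; refusing to load")
```

Pairing this with a weights format that cannot embed executable code (such as safetensors instead of pickle) closes the loading-time execution path as well as the tampering one.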