Series: Browser-Safe AI Systems
Series: Browser-Safe AI Systems
Browser-safe AI systems are becoming part of the modern security control plane because the browser is where users authenticate, open SaaS, move files, follow links, scan QR codes, and make trust decisions.
This series treats browser-safe AI as a controlled security pipeline, not as a magic model. The central position is that hostile browser content should be treated as adversarial input, AI verdicts should be constrained, policy should remain outside the model, and every important decision should produce evidence that analysts, red teams, developers, and stakeholders can review.
The series is written for four audiences:
- Security analysts who need evidence-rich alerts.
- Red team members who need repeatable validation methods.
- Developers who need secure input, output, and policy boundaries.
- Technical stakeholders who need measurable risk reduction.
Main Series
- Part 01: Executive Summary
- Part 02: Why Browser-Safe AI Systems Matter Now
- Part 03: From Browser Isolation to AI-Assisted Browser Defense
- Part 04: What the SafeBreach Gemini Calendar Research Demonstrates
- Part 05: Why This Research Applies to Browser-Safe AI Systems
- Part 06: The Core Risk: Untrusted Web Content Entering an AI Context
- Part 07: Defining Poison Packets for Browser AI
- Part 08: Practical Attack Classes Against AI-Backed Browser Security
- Part 09: Indirect Prompt Injection Through Web Pages
- Part 10: Hostile DOM, Hidden Text, and Metadata Manipulation
- Part 11: Screenshot-Based Prompt Injection and Visual Deception
- Part 12: DOM Versus Rendered Page Mismatch
- Part 13: QR Phishing, Brand Impersonation, and Multistage Lures
- Part 14: Unicode, Homograph, and Visual Spoofing Attacks
- Part 15: Delayed Content, Region-Gated Pages, and Evasive Phishing
- Part 16: AI Verdict Manipulation and False Negative Risk
- Part 17: False Positives, Alert Fatigue, and Trust Erosion
- Part 18: Data Handling Risks: Screenshots, DOM, URLs, and User Context
- Part 19: Privacy, Retention, Redaction, and Tenant Isolation
- Part 20: Model Output Handling: Why AI Verdicts Must Be Constrained
- Part 21: Fail-Open Versus Fail-Closed Security Decisions
- Part 22: Feedback-Loop Poisoning and Exception Abuse
- Part 23: Secure Architecture Principles for Browser-Safe AI
- Part 24: Red-Team Testing Methodology for AI Browser Controls
- Part 25: Building a Practical Python Test Harness
- Part 26: Evidence Collection: What Must Be Logged and Verified
- Part 27: SOC Usefulness: Turning AI Decisions Into Actionable Evidence
- Part 28: Governance Questions for Vendors and Customers
- Part 29: Practical Recommendations for Security Teams
- Part 30: Practical Recommendations for Vendors and Developers
- Part 31: How This Research Changes Browser Security Validation
- Part 32: Conclusion: Treat AI as an Untrusted Classifier Inside a Controlled Security Pipeline
Supporting Documents
- Appendix B: Vendor Due-Diligence Questionnaire
- Appendix C: Rules of Engagement Template
- Appendix D: Glossary
Series Principle
Treat AI as an untrusted classifier inside a controlled security pipeline.