Browser-Safe AI Systems, Part 28: Governance Questions for Vendors and Customers
Series: Browser-Safe AI Systems, Part 28 of 32.
This post continues the Browser-Safe AI Systems series by focusing on governance questions for vendors and customers. The goal is to keep the discussion useful for analysts who investigate alerts, red teams who validate controls, developers who build the pipeline, and technical stakeholders who own risk decisions.
| Series navigation: Previous: Part 27 | Series index | Next: Part 29 |
28. Governance Questions for Vendors and Customers
Browser-safe AI systems require governance. They inspect sensitive browser activity, may send data to models, may influence policy enforcement, may retain screenshots and DOM artifacts, may produce SOC alerts, may support exceptions, and may use feedback to improve future detection. Governance is not paperwork; it defines the security boundary.
28.1 Data Collection Questions
Customers should ask:
- What browser artifacts are collected?
- Are screenshots collected?
- Are DOM snapshots collected?
- Is OCR performed?
- Are URLs collected?
- Are query strings retained?
- Are QR-code targets decoded?
- Are redirect chains captured?
- Are iframe trees captured?
- Are form fields captured?
- Are file names captured?
- Is user context collected?
- Is device context collected?
- Is policy context collected?
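A good vendor answer to these questions is an explicit, default-deny collection configuration rather than prose. A minimal sketch of what that could look like (the artifact-class names and policy shape are hypothetical, not any vendor's actual config):

```python
# Hypothetical collection policy: every artifact class is either
# explicitly enabled or not collected at all.
COLLECTION_POLICY = {
    "screenshots": False,
    "dom_snapshots": True,
    "ocr": False,
    "urls": True,
    "query_strings": False,  # dropped at capture time, not downstream
    "form_fields": False,
    "file_names": True,
}


def may_collect(artifact_class: str) -> bool:
    """Default-deny: artifact classes absent from the policy are never collected."""
    return COLLECTION_POLICY.get(artifact_class, False)
```

The default-deny lookup matters: a new artifact type added by a product update should require an explicit policy entry before anything is captured.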
28.2 AI Processing Questions
Customers should ask:
- What data is sent to AI models?
- Where does inference occur?
- Which model providers are used?
- Are prompts stored?
- Are model responses stored?
- Are prompts and responses used for training?
- Are prompts and responses used for tuning?
- Is zero-retention available?
- Is customer data used to improve shared detection?
- Can customers opt out of shared learning?
- Is model output schema-constrained?
- Is model output validated?
- Can model output directly affect enforcement?
28.3 Redaction, Minimization, and Retention Questions
Customers should ask:
- What data is redacted before model submission?
- What data is redacted before storage?
- What data is redacted before analyst display?
- What data is redacted before SIEM export?
- Are tokens detected?
- Are credentials detected?
- Are reset links detected?
- Are OAuth codes detected?
- Are screenshots masked?
- Are DOM fields masked?
- Are query parameters stripped?
- Can redaction policies be configured?
- How long are screenshots retained?
- How long are DOM snapshots retained?
- How long are prompts and responses retained?
- Can raw evidence be deleted?
- How is deletion verified?
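For the redaction questions, ask where masking happens in the pipeline, ideally before model submission and storage, not only at display time. A sketch of the shape such redaction could take (the secret patterns here are illustrative stand-ins, not a vetted detector set):

```python
import re
from urllib.parse import urlsplit, urlunsplit

# Hypothetical detectors; a real deployment would use vetted,
# regularly updated patterns for tokens, credentials, and codes.
SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),       # AWS-style access key id
    re.compile(r"\beyJ[A-Za-z0-9_-]{10,}\b"),  # JWT-like token
]


def redact_text(text: str) -> str:
    """Mask token-like strings before storage, display, or SIEM export."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text


def strip_query(url: str) -> str:
    """Drop query strings and fragments, which often carry reset links
    and OAuth authorization codes."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
```

Configurable redaction then becomes a question of who owns the pattern list and the URL-stripping policy, and whether changes to them are audited.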
28.4 Tenant Isolation Questions
Customers should ask:
- How is tenant data isolated?
- Are tenant artifacts encrypted separately?
- Can one tenant’s feedback affect another tenant?
- Can one tenant’s exceptions influence shared rules?
- Are prompts tenant-scoped?
- Are model responses tenant-scoped?
- Is support access tenant-restricted?
- Are debug logs tenant-isolated?
- Are SIEM exports tenant-scoped?
- Are training or tuning datasets tenant-isolated?
Tenant isolation must include artifacts, policies, feedback, and support workflows.
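One way to make the encryption question concrete is to ask whether key material is per-tenant, so artifacts from one tenant cannot be decrypted with another tenant's key. A toy derivation sketch (HMAC-based for illustration only; a production system would use a managed KMS with per-tenant keys):

```python
import hashlib
import hmac


def tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive a per-tenant encryption key from a master secret.

    Illustrative only: shows the isolation property (distinct tenants
    yield distinct keys), not a production key-management design.
    """
    return hmac.new(master_key, tenant_id.encode(), hashlib.sha256).digest()
```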
28.5 Policy and Enforcement Questions
Customers should ask:
- What can the AI verdict influence?
- Is policy enforcement deterministic?
- Can free-form model output affect enforcement?
- What happens on low confidence?
- What happens on model timeout?
- What happens on invalid model output?
- What happens when evidence conflicts?
- Does the system fail open or fail closed?
- Can customers configure fallback behavior?
- Are high-risk workflows handled differently?
- Are policy changes audited?
28.6 Evidence and SOC Questions
Customers should ask:
- What evidence appears in alerts?
- Are reason codes included?
- Are screenshots available?
- Are DOM artifacts available?
- Are artifacts redacted?
- Can analysts replay events?
- Are model verdicts visible?
- Is confidence visible?
- Are policy decisions visible?
- Are exceptions visible?
- Is SIEM integration structured?
- Are raw artifacts copied into tickets?
- Can analysts access raw evidence with approval?
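Structured SIEM integration and "no raw artifacts in tickets" can be checked together: the alert should carry reason codes and a reference to evidence, not the evidence itself. A sketch of such an alert payload (field names are hypothetical):

```python
import json


def build_siem_alert(event_id: str, reason_codes: list[str], verdict: str,
                     confidence: float, evidence_ref: str) -> str:
    """Build a structured alert for SIEM export.

    Carries machine-readable reason codes and a *pointer* to redacted
    evidence; raw artifacts stay in the evidence store, where access
    requires approval.
    """
    return json.dumps({
        "event_id": event_id,
        "reason_codes": reason_codes,
        "verdict": verdict,
        "confidence": confidence,
        "evidence_ref": evidence_ref,  # reference only, never inline artifacts
    }, sort_keys=True)
```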
28.7 Exception and Feedback Questions
Customers should ask:
- Who can approve exceptions?
- Are exceptions scoped?
- Do exceptions expire?
- Are exceptions audited?
- Can exceptions bypass AI inspection?
- Can exceptions suppress alerts?
- Is feedback reviewed?
- Can user feedback tune detection?
- Can analyst feedback tune detection?
- Are feedback changes regression-tested?
- Can tuning be rolled back?
- Are stale exceptions reported?
28.8 Vendor Operational Questions
Customers should ask vendors:
- Who can access customer evidence?
- Is support access audited?
- Is access just-in-time?
- Is customer approval required?
- Are support bundles redacted?
- Are subprocessors documented?
- Is data residency documented?
- Are incident notification timelines defined?
- Are model changes communicated?
- Are major detection changes communicated?
- Are customer audit logs available?
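The access questions in this list combine into one flow: support access should be just-in-time, customer-approved, time-bounded, and logged whether or not it is granted. A sketch of that flow under those assumptions (function and field names are hypothetical):

```python
from datetime import datetime, timedelta, timezone


def grant_support_access(audit_log: list, engineer: str, tenant: str,
                         customer_approved: bool, ttl_minutes: int = 60):
    """Grant short-lived, audited support access (hypothetical flow).

    Access is denied without explicit customer approval, and every
    attempt, granted or not, lands in the audit log.
    """
    if not customer_approved:
        audit_log.append({"event": "access_denied",
                          "engineer": engineer, "tenant": tenant})
        return None
    expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    audit_log.append({"event": "access_granted",
                      "engineer": engineer, "tenant": tenant})
    return {"engineer": engineer, "tenant": tenant, "expires_at": expires_at}
```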
28.9 Defensive Principle
Browser-safe AI governance is about making trust explicit.
The safest rule is:
Do not deploy AI browser security as a black box. Require clear answers on data flow, model use, retention, redaction, tenant isolation, enforcement, exceptions, and evidence.