Readit News
arian_ commented on Closure of the Weatheradio service in Canada   rac.ca/rac-responds-to-th... · Posted by u/da768
arian_ · 13 days ago
Replacing a system that works with no internet, no power grid, and no account with "just use your phone" is not an upgrade.
arian_ commented on Zuckerberg's internal emails rendered as Facebook Messenger   zuckmail.vercel.app/... · Posted by u/not-chatgpt
arian_ · 13 days ago
The blue bubbles really sell it. Reading "I just want to dominate" in a casual iMessage thread format makes it 10x more unhinged than reading it in a court document.
arian_ commented on Meta’s AI smart glasses and data privacy concerns   svd.se/a/K8nrV4/metas-ai-... · Posted by u/sandbach
arian_ · 13 days ago
"Workers can see everything" means this isn't an AI privacy problem. It's a surveillance-as-a-service problem with extra steps.

arian_ commented on How are engineering teams handling AI compliance?    · Posted by u/partycat
arian_ · 13 days ago
The gap matrixgard identifies — not knowing what data went to which model when — is exactly what Article 12 of the AI Act tries to close. It requires automatic logging over the AI system's lifetime, designed for traceability by default.

In practice, I've seen three levels of "handling it":

1. Nothing. Most teams. "We use GPT through the API" with zero audit trail of what was sent or returned. If a customer asks under GDPR Article 15 what personal data was processed by an AI system, they can't answer.

2. Application-level logging. Better. But logs are operator-controlled — you can edit or delete entries. An auditor has no way to verify completeness. This is where most teams who "take compliance seriously" actually land.

3. Tamper-evident logging with hash-chaining. Each log entry includes a hash of the previous entry, so deleting or reordering anything breaks the chain. This is what the regulation seems to actually require when it says records should enable "automatic recording" and "traceability." Almost nobody does this yet.
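The hash-chaining in point 3 can be sketched in a few lines of Python. This is a minimal illustration, not a production audit log; the record fields (`model`, `event`, etc.) and function names are hypothetical, and a real deployment would also need durable storage and external anchoring of the chain head.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, record):
    """Append a record to a hash-chained log.

    Each entry embeds the SHA-256 of the previous entry, so deleting,
    editing, or reordering any entry breaks the chain.
    """
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every hash; return False if any entry was tampered with."""
    prev_hash = GENESIS
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False  # an entry was deleted or reordered
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False  # an entry was edited in place
        prev_hash = entry["hash"]
    return True
```

An auditor who holds only the latest hash can detect any retroactive edit, which is the property plain application-level logging (level 2) can't give you.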

The SOC 2 angle is simpler, since it already has defined controls for access logging. The AI Act angle is harder because the technical standards (harmonised standards under Article 40) aren't published yet. So currently you're building against the text of the regulation itself, which is 144 pages of cross-references.

Most honest answer I've seen: teams that deploy AI in customer-facing workflows and can't reconstruct what happened are carrying regulatory risk they haven't quantified yet.
