In practice, I've seen three levels of "handling it":
1. Nothing. Most teams. "We use GPT through the API" with zero audit trail of what was sent or returned. If a customer asks under GDPR Article 15 what personal data was processed by an AI system, they can't answer.
2. Application-level logging. Better. But logs are operator-controlled — you can edit or delete entries. An auditor has no way to verify completeness. This is where most teams who "take compliance seriously" actually land.
3. Tamper-evident logging with hash-chaining. Each log entry includes a hash of the previous entry, so deleting or reordering anything breaks the chain. This is what the regulation seems to actually require when it says records should enable "automatic recording" and "traceability." Almost nobody does this yet.
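The hash-chaining in (3) is only a few lines of code. A minimal Python sketch — the entry structure and field names here are my own illustration, not from any standard or regulation:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log, payload):
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    log.append({
        "payload": payload,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify_chain(log):
    """Recompute every hash; any edit, deletion, or reorder breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        body = json.dumps(
            {"payload": entry["payload"], "prev": entry["prev"]},
            sort_keys=True,
        )
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"prompt": "redacted", "model": "some-model"})
append_entry(log, {"prompt": "redacted", "model": "some-model"})
assert verify_chain(log)

log[0]["payload"]["prompt"] = "edited"  # tamper with an earlier entry
assert not verify_chain(log)
```

In production you'd also want to periodically anchor the latest hash somewhere outside operator control (a WORM bucket, a third-party timestamping service) — otherwise the operator can just rebuild the whole chain after editing it.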
The SOC 2 angle is simpler because SOC 2 already has defined controls for access logging. The AI Act angle is harder because the technical standards (harmonised standards under Article 40) aren't published yet, so for now you're building against the text of the regulation itself, which is 144 pages of cross-references.
Most honest answer I've seen: teams deploying AI in customer-facing workflows who can't reconstruct what happened are carrying regulatory risk they haven't quantified yet.