It feels like an AI cousin to the Python error steamroller (https://github.com/ajalt/fuckitpy).
Whenever I see this sort of thing I think that there might be a non-evil application for it. But then I think ... where's the fun in that?
CORS is implemented by browsers based on standardized HTTP headers. It’s a web-standard browser-level mechanism.
CSRF protection is implemented server-side (plus parts of the client-side code) based on tokens and/or custom headers. It’s an application-specific mechanism that the browser is agnostic about.
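To make the contrast concrete: CORS amounts to the server attaching advisory response headers that only the browser interprets. A minimal sketch (the handler name here is hypothetical, not from any specific framework):

```python
def add_cors_headers(headers: dict, allowed_origin: str) -> dict:
    """Attach the standard CORS response headers.

    These headers are a request to the *browser* to restrict
    cross-origin reads; the server cannot enforce anything with them.
    """
    headers["Access-Control-Allow-Origin"] = allowed_origin
    headers["Access-Control-Allow-Methods"] = "GET, POST"
    return headers
```

A non-browser client (curl, a script) simply ignores these headers, which is the whole point of the distinction drawn above.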
CORS today is just an annoying artifact of a poorly conceived idea about domain names somehow being a meaningful security boundary. It never amounted to anything more than a server asking the client not to do something, with no mechanism to force the client to comply and no direct way for the server to tell whether the client is complying. It has never offered any security value; workarounds were developed before it even became a settled standard. It's so much more likely to prevent legitimate use than to protect against illegitimate use that browsers typically include a way to turn it off.
With CSRF the idea is that the server wants to be able to verify that a request from a client is one it invited (most commonly that a POST comes from a form that it served in an earlier GET). It's entirely up to the server to design the mechanism for that; the client typically has no idea it's happening (it's just feeding back to the server, on a later request, something it got from the server on a previous request). Also notable: despite the "cross-site" part of the name, it doesn't really have any direct relationship to "sites" or domains. Servers can and do use the exact same mechanisms to detect or prevent issues like accidentally submitting the same form twice.
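The serve-then-feed-back loop described above can be sketched with an HMAC-based token. This is one common design, not *the* CSRF mechanism (there isn't one); the function names and the per-session derivation are illustrative assumptions:

```python
import hmac
import secrets

# Hypothetical server-side secret; in a real deployment this lives in
# server configuration and survives restarts.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive the token the server embeds in the form it serves (the GET)."""
    return hmac.new(SECRET_KEY, session_id.encode(), "sha256").hexdigest()

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """On the later POST, check the token the client fed back."""
    expected = issue_csrf_token(session_id)
    # Constant-time comparison to avoid leaking the token via timing.
    return hmac.compare_digest(expected, submitted)
```

Note the browser plays no active role here: it just echoes a hidden form field back, which is why the same trick also catches duplicate form submissions.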
This is whiny and just wrong. Best behavior by default is always the right choice for an SDK. Libraries/tools/clients/SDKs break backwards compatibility all the time. That's exactly what semver version pinning is for, and that's a fundamental feature of every dependency management system.
AWS handled this exactly right IMO. The change was introduced in Python SDK version 1.36.0, which clearly indicates breaking API changes, and their changelog also explicitly mentions this new default:
api-change:``s3``: [``botocore``] This change enhances integrity protections for new SDK requests to S3. S3 SDKs now support the CRC64NVME checksum algorithm, full object checksums for multipart S3 objects, and new default integrity protections for S3 requests.
https://github.com/boto/boto3/blob/2e2eac05ba9c67f0ab285efe5...

> The photography sessions for patients with ASD took place in a space dedicated to their needs, distinct from a general ophthalmology examination room. This space was designed to be warm and welcoming, thus creating a familiar environment for patients. Retinal photographs of typically developing (TD) individuals were obtained in a general ophthalmology examination room. Each eye required an average of 10–30 s for photography, although some cases involved longer periods to help the patient calm down, sometimes exceeding 5–10 min. All images were captured in a dark room to optimize their quality. Retinal photographs of both patients with ASD and TD were obtained using non-mydriatic fundus cameras, including EIDON (iCare), Nonmyd 7 (Kowa), TRC-NW8 (Topcon), and Visucam NM/FA (Carl Zeiss Meditec).
So three questions:
1. Are we positive that the difference in rooms does not affect these images?
2. If we are in a dark room, and ASD patients are in it for 5–10 minutes longer, are we sure this doesn't affect the retina?
3. Were all cameras used for both ASD and TD images?
Want to make sure the AI is being trained to detect autism, and wasn't accidentally trained to identify camera models, length-in-dark-room or room-welcomingness.
Hopefully not, but I assume you have to be so careful with this sort of thing when the model is entirely black-box and you can't actually validate what it's doing inside.
Yes, 100% this. That's why I said that this lawsuit has a major remedy problem. Even if the court agrees this is anticompetitive, how do you fix it?
It looks like it's an afterthought, but at least it's on their mind now, which is very fair with respect to Bloomberg's wants/needs. It'd be nice if they had a bit of a warning about using this until it has some basic auth(n) and TLS, since they're releasing it to the public. I think it is, relatively speaking, rude to release insecure networked software without giving users notice as to what sorts of insecurity are at least known/expected.