Readit News
DoingIsLearning · a year ago
I am no expert, but there seems to be an overlap in the article between 'AI' and, well... just software, or signal processing:

- AI that collects “real time” biometric data in public places for the purposes of law enforcement.

- AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.

- AI that uses biometrics to infer a person’s characteristics

All of the above can be achieved with just software, statistics, old ML techniques, i.e. the 'non-hype' kind of AI software.

I am not familiar with the details of the EU AI Act, but it seems like the article is simplifying important details.

I assume the ban is on the purpose/usage rather than whatever technology is used under the hood, right?

spacemanspiff01 · a year ago
From the laws text:

For the purposes of this Regulation, the following definitions apply:

(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; Related: Recital 12

https://artificialintelligenceact.eu/article/3/

https://artificialintelligenceact.eu/recital/12/

So, it seems like yes, software would qualify if it is non-deterministic enough. My impression is that software that simply says "if your income is below this threshold, we deny you a credit card" would be fine, but somewhere along the line, when your decision tree grows large enough, that probably changes.
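The distinction can be sketched in a few lines of Python (a toy illustration of the idea, not a legal test; the names, the 30k cutoff, and the "learning" rule here are all invented for this sketch):

```python
def human_rule(income: float) -> str:
    """A cutoff a person wrote down: rules 'defined solely by natural
    persons', which Recital 12 suggests fall outside the definition."""
    return "deny" if income < 30_000 else "approve"


def inferred_cutoff(history: list[tuple[float, bool]]) -> float:
    """A cutoff *inferred from past decisions* (here, crudely, the midpoint
    of the two class means) -- much closer to the Act's 'infers, from the
    input it receives, how to generate outputs' wording."""
    approved = [inc for inc, ok in history if ok]
    denied = [inc for inc, ok in history if not ok]
    return (sum(approved) / len(approved) + sum(denied) / len(denied)) / 2
```

Both produce the same kind of output; what differs is the provenance of the rule, which seems to be exactly where the legal line sits.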

btown · a year ago
Notably, Recital 12 says the definition "should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations."

https://uk.practicallaw.thomsonreuters.com/Glossary/UKPracti... describes a bit of how recitals interact with the operating law; they're explicitly used for disambiguation.

So your hip new AI startup that's actually just hand-written regexes under the hood is likely safe for now!
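For a concrete (and entirely hypothetical) picture of what "rules defined solely by natural persons" looks like in code, here is a pretend "support triage AI" where every rule was authored by a human and nothing is inferred from data:

```python
import re

# Hypothetical rule list: each keyword was chosen by a person,
# so per Recital 12 this plausibly isn't an "AI system" at all.
ESCALATE = re.compile(r"\b(refund|broken|lawsuit|worst)\b", re.IGNORECASE)

def triage(ticket: str) -> str:
    """Route a support ticket using only hand-written rules."""
    return "escalate" if ESCALATE.search(ticket) else "routine"
```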

(Not a lawyer, this is neither legal advice nor startup advice.)

abdullahkhalids · a year ago
Seems very reasonable. Not all software has the same risk profile, and autonomous, adaptive software certainly has a more dangerous profile than simpler software and should be regulated differently.
cabalamat · a year ago
> and that may exhibit adaptiveness after deployment

So if an AI can't change its weights after deployment, it's not really an AI? That doesn't make sense.

As for the other criteria, they're so vague I think a thermostat might apply.

zelphirkalt · a year ago
Hm, not too bad a definition. Seems like it was written by people who know what machine learning is.
surfingdino · a year ago
Good. You cannot have a functioning society where decisions are made in a non-deterministic way, especially when those decisions deviate from agreed protocols (laws, bylaws, contracts, etc.).
gamedever · a year ago
So no more predicting the weather with sensors?
uniqueuid · a year ago
Unfortunately yes, the article is a simplification, in part because the AI Act delegates some regulation to other existing acts. So to see the full picture of AI regulation, one needs to look at the combination of multiple texts.

The precise language on high risk is here [1], but some enumerations are placed in the annex, which (!!!) can be amended by the commission, if I am not completely mistaken. So this is very much a dynamic regulation.

[1] https://artificialintelligenceact.eu/article/6/

zelphirkalt · a year ago
Is the regulation itself AI, due to being adaptive after deployment?

Just joking, but I think it is a funny parallel. Also because it probably consists solely of human-made rules.

dathinab · a year ago
> just software, statistics, old ML techniques

yes, and with the same problems if applied to the same use cases in the same way

in turn they get regulated, too

it would be strange to limit a law to some specific technical implementation. This isn't some let's-fight-the-hype regulation but a serious long-term effort to regulate automated decision-making and classification processes which pose an increased or high risk to society

impossiblefork · a year ago
I wouldn't be surprised if it does cover all software. After all, chess solvers are AI.
oneeyedpigeon · a year ago
Chess solvers are more AI than 90% of the things currently being touted as AI!
Muromec · a year ago
that's what DORA the explora of your unit tests Act is
teekert · a year ago
Have been having a lot of laughs about all the things we call AI nowadays. Now it’s becoming less funny.

To me it’s just generative AI, LLMs, media generation. But I see the CNN folks suddenly getting “AI” attention. Anything deep learning really. It’s pretty weird. Even our old batch processing, SLURM based clusters with GPU nodes are now “AI Factories”.

xdennis · a year ago
> To me it’s just generative AI, LLMs, media generation.

That's not what AI is.

Artificial Intelligence has decades of use in academia. Even a script which plays Tic Tac Toe is AI. LLMs have advanced the field profoundly and gained widespread use. But that doesn't mean that a Tic Tac Toe bot is no longer AI.

When a term passes to the mainstream people manufacture their own idea of what it means. This has happened to the term "hacker". But that doesn't mean decades of AI papers are wrong because the public uses a different definition.

It's similar to the professional vs the public understanding of the term "prop" in movie making. People were criticizing Alec Baldwin for using a real gun on the set of Rust instead of a "prop" gun. But as movie professionals explained, a real gun is a prop gun. Prop in theater/movies just means property. It's anything that's used in the production. Prop guns can be plastic replicas, real guns which have been disabled, or actually firing guns. Just because the public thinks "prop" means "fake", doesn't mean movie makers have to change their terms.

sethd · a year ago
Even the A* search algorithm is technically AI.
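A* is a good reminder of how mundane textbook "AI" can be: a fully deterministic, human-authored search procedure. A minimal sketch on a 4-connected grid (my own toy implementation, with 0 = free cell and 1 = wall):

```python
import heapq

def astar(grid, start, goal):
    """Return the shortest path length between two cells, or None.

    Classic 'good old-fashioned AI': no learning, no adaptation,
    just best-first search guided by an admissible heuristic.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a unit-cost grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries: (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a better path was found already
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```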
pmontra · a year ago
As somebody told me recently, now AI means any program that does something that people think is AI, even if programs doing that thing have been with us for ten years or more with the same degree of accuracy.
layer8 · a year ago
Yes, you are better off reading the actual act, like the linked article 5: https://artificialintelligenceact.eu/article/5/

This is not about data collection (GDPR already takes care of that), but about AI-based categorization and identification.

"AI system" and other terms are defined in article 3: https://artificialintelligenceact.eu/article/3/

belter · a year ago
Yes, it's simplifying. There are more details here: https://news.ycombinator.com/item?id=42916414
frantzmiccoli · a year ago
Statistics and old ML are AI in the sense of that regulation.
PeterStuer · a year ago
Delineating "AI" from other software is one of the tricky parts of the act, ultimately left as an exercise to the courts.

Trying to define it for scope was IMHO a mistake.

amelius · a year ago
You can even replace AI by humans. For example, it is not legal for e.g. police officers to engage in racial profiling.
uniqueuid · a year ago
No you cannot, see Article 2: Scope
HenryBemis · a year ago
I've worked with the bureaucrats in Brussels on tech/privacy topics.

Their underlying concern is "we don't want machines to make decisions". A key point for them has always been "explainability".

GDPR has a provision about "profiling" and "automated decision making" for key aspects of life. E.g. if you ask for a mortgage (a pretty important, life-changing decision) and the bank rejects it, you a) can ask them "why" and they MUST explain, in writing, and b) if the decision was made by a system that was fed your data (demographic & financial), you can request that a human repeat the 'calculations'.

Good luck having ChatGPT explain.

They are trying to avoid the dystopian nightmare of (apologies, I don't mean to disrespect the dead, I mean to disrespect the industry) insurance & healthcare in the US, where a system gets to decide "your claim is denied" over the (sometimes imperfect) consultations of humans (doctors, in this case), because one parameter says "make X amount of profit above all else" (perhaps not coded as this precise parameter, but somehow else).

Now, considering the (personal) data collected and sent to companies in the US (or other countries) that don't fall under the Adequacy Decisions [0], and combining that with the aforementioned (decision-making) risks, using LLMs in production is 'very risky'.

Using Copilot for writing code is very different, because there the control of converting the code to binaries and moving said binaries to the prod environment stays with people (they used to call them Librarians back in the day...), so human intervention is required to do code review, code testing, etc. (just in case SkyNet wrote code to export the data 'back home' to OpenAI, xAI, or whichever other AI company it came from).

I haven't read the regulation lately/in its final text (I contributed and commented some when it was still being drafted), and/but I remember the discussions on the matter.

[0]: https://commission.europa.eu/law/law-topic/data-protection/i...

EDIT: ultimately we want humans to have the final word, not machines.

Deleted Comment

nobodywillobsrv · a year ago
The EU and other organizations will be using these to ban data collection and anything to do with protection of the EU.

They will interpret "predict" as merely "report" or "act on".

This is terrible.

frereubu · a year ago
I always sigh when I see these threads on HN because many of the comments (although not all, thankfully) devolve into US / EU name-calling and wild overgeneralisations.

I would really love to see a Q&A thread like https://news.ycombinator.com/item?id=42770125 from someone who's actually read the documents, practices law in the area, and also understands the difference between US and EU law.

frantzmiccoli · a year ago
Not a lawyer, not versed in US and EU Law, but ... I read (part) of the regulation.

https://outofthecomfortzone.frantzmiccoli.com/thoughts/2024/... and here is my shameless plug.

2rsf · a year ago
Not a lawyer, only an engineer starting to assess our AI models.

Your comparison to GDPR seems correct in a way; both are quite vague and wide. The implementation of GDPR is still unclear in certain situations, and it was even worse when it launched. The EU AI Act has very few references to work with, and except for very obvious areas it still involves a lot of guesswork.

Deleted Comment

riedel · a year ago
My take is that nobody actually practices law in this area at this time. Tons of stuff will again need to go to court before you can be sure whether these regulations actually apply to you, and many of the cases relevant for smaller enterprises will never go to court, as with GDPR, leaving uncertainty for years. Having said this, the good thing about the AI Act is that it force-injects some principles for evaluation into existing standards.

Disclaimer: i am advising a company that sells AI act related compliance tooling

theptip · a year ago
Seems like a mostly reasonable list of things to not let AI do without better safety evals.

> AI that tries to infer people’s emotions at work or school

I wonder how broadly this will be construed. For example, if an agent uses CoT and needs emotional state as part of that, can it be used in a work or school setting at all?

layer8 · a year ago
This quote is inaccurate. The actual wording is: "the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;" and it links to https://artificialintelligenceact.eu/recital/44/ for rationale.

So, this targets the use case of a third party using AI to detect the emotional state of a person.

unification_fan · a year ago
We need to profile your every thought and emotion. Don't worry though, it's for medical or safety reasons only. Same for your internet history... You know, terrorism and all. Can't have that.
dmix · a year ago
Is this just based on hypothetical scenario they sat in a room coming up with or has such a thing been tried and harmed people?
sofixa · a year ago
> Seems like a mostly reasonable list of things to not let AI do without better safety evals.

Yes. This is how you know that all the people screaming about the EU overregulating and how the EU will miss all that AI innovation haven't even bothered to Google or ask their preferred LLM about the legislation. It's mostly just common sense to avoid EU citizens having their rights or lives decided by blackbox algorithms nobody can explain, be it in a Post Office (UK) scandal style, or US healthcare style.

stavros · a year ago
The EU generally (so far) has passed reasonable legislation about these things. I'd be surprised if it was taken more broadly than the point where a reasonable person would feel comfortable with it.
Zenst · a year ago
I would imagine that such a tool to infer emotional states would be most useful for autistic people, who, as I can attest, are somewhat handicapped on that front. Maybe that will get challenged as disability discrimination by some autistic group, which would be interesting. As with most things, there are rules and exceptions to those rules; no shoe fits everyone, though forcing people to wear the wrong shoe size can do more harm than good.
danielheath · a year ago
> I would imagine that such a tool to infer emotional states would be most useful for autistic people who are as I can attest, somewhat handicapped upon that front.

It might well be a useful tool to point at yourself.

It's an entirely inappropriate one to point at someone else. If you've never had someone estimate your emotional state (usually incorrectly) and use that as a basis to disregard your opinion, you've lived a very different life to mine. Don't let them hide behind "the AI agreed with my assessment".

cwillu · a year ago
On the other hand, as someone whose emotional state is routinely incorrectly assessed by people, I can't imagine a worse hell than having that misassessment codified into an AI that I am required to interact with.
Mordisquitos · a year ago
> I would imagine that such a tool to infer emotional states would be most useful for autistic people who are as I can attest, somewhat handicapped upon that front.

The regulation explicitly provides an exception for medical reasons:

    Article 5:
    
    1. The following AI practices shall be prohibited: 
    [...]
    (f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

pjc50 · a year ago
I can definitely find you autistic people who would hate having such a device pointed at them, because they don't mask the ""correct"" emotional state well enough.
__MatrixMan__ · a year ago
A difficulty I have with customer service folk is that usually I'm just trying to report a bug. I'm not upset. Please stop trying to give me coupons. I'm not trying to cancel my account I just want to help your engineers fix this bug (and later on, I want to see that it has actually gone away).

If I must interact with an AI for this, I'd prefer that it infer my emotions correctly.

Deleted Comment

hcfman · a year ago
Laws that are open to interpretation, with drastic consequences if they are interpreted against you, pose unacceptable risk to business investors and stifle innovation.
kakadu · a year ago
Me and my euro mates are not interested in this kind of "innovation".

The "business investors" and "innovators" can take this kind of business elsewhere.

This kind of talk where regulators are assaulted by free marketeers and freedom fighters is unacceptable here.

Let us not misinterpret business people as "innovators", if what they do is not net positive for the society, they do not belong here.

cmenge · a year ago
I'm not sure where "here" is and who you think you speak for, but as a European, I am strictly against regulation, in particular vague regulation made by non-elected EU bureaucrats. And no, freedom of speech and a discussion about the pros and cons is also not "unacceptable". It is part of the democratic process.
clarionbell · a year ago
You seem to be very strict about the kind of political discourse you would allow. And I'm not even going to elaborate on how problematic your "net positive for society" is, or who would possibly be in charge of determining that.
jeffdotdev · a year ago
There is no law that isn't open to interpretation. There is a reason for the judicial branch of government.
caseyy · a year ago
Well, the laws in civil law countries that practice legal literalism are not open to interpretation. Eastern Europe, much of which is a part of the EU, is quite literalist.

The understanding is that interpreting laws leads to bias, partiality, and injustice; while following the letter of the law equally in each situation is the most just approach.

BrenBarn · a year ago
A heck of a lot of what passes for "innovation" these days is stuff I absolutely want to stifle.
dns_snek · a year ago
People said that about GDPR. Laws that don't leave any room for interpretation are bound to have loopholes that pose unacceptable risk to the population.
daedrdev · a year ago
I think it's quite clear GDPR has indeed led to lower investment and delayed or cancelled products in Europe.
hcfman · a year ago
I would like to see a new law that puts any member of government found obstructing justice in jail.

Except that the person responsible for the travesty of justice of framing 9 innocent people in this Dutch series is currently the president of the court of Maastricht.

https://npo.nl/start/serie/de-villamoord

Remember: the courts have the say as to who wins and loses under these new vague laws. The people running the courts must not be corrupt, but the case above shows that this is, in fact, not always so.

_bin_ · a year ago
surely EU courts will not unfairly penalize US-developed models...
bmicraft · a year ago
Yes, sometimes stuff like this happens. Still, I'd like to think the EU is a prime example of how "reasonable" legislation has benefits over extremely specific legislation. Reasonable wins almost every time in how it fares under changing circumstances and how it's pretty much loophole-proof by design.

Deleted Comment

cactusplant7374 · a year ago
Is there somewhere I can read more about this?
_heimdall · a year ago
What I don't see here is how the EU is actually defining what is and is not considered AI.

> AI that manipulates a person’s decisions subliminally or deceptively.

That can be a hugely broad category that covers any algorithmic feed or advertising platform.

Or is this limited specifically to LLMs, now that OpenAI has so successfully convinced us that LLMs really are AI and previous ML tools weren't?

dijksterhuis · a year ago
the actual text in the ~act~ guidance states:

> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques

techcrunch simplified it.

from my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.

edit: here's the actual text from the act, which makes it clearer that it's about whether the deception is purposefully intended for malicious reasons

> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm

Bjartr · a year ago
Seems like even a rudimentary ML model powering ad placements would run afoul of this.
dist-epoch · a year ago
so "sex sells" kind of ads are now illegal?
vitehozonage · a year ago
Exactly what i thought too.

Right now, and for at least 10 years, with targeted advertising, it has been completely normalised and typical to use machine learning to intentionally, subliminally manipulate people. I was taught less than 10 years ago at a top university that machine learning was classified as AI.

It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask, why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable but it's too inconvenient to try to deal with that problem?

Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.

troupo · a year ago
> What I don't see here is how the EU is actually defining what is and is not considered AI.

Because instead of reading the source, you're reading a sensationalist article.

> That can be a hugely broad category that covers any algorithmic feed or advertising platform.

Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.

----

We're going to get a repeat of GDPR aren't we? Where 8 years in people arguing about it have never read anything beyond twitter hot takes and sensationalist articles?

_heimdall · a year ago
Sure, I get that reading the act is more important than the article.

And in reading the act, I didn't see any clear definitions. They have broad references to what reads much like any ML algorithm, with carve outs for areas where manipulating or influencing is expected (like advertising).

Where in the act does it actually define the bar for a technology to be considered AI? A link or a quote would be really helpful here, I didn't see such a description but it is easy to miss in legal texts.

robertlagrant · a year ago
The briefing on the Act talks about the risk of overly broad definitions. Why don't you just engage in good faith? What's the point of all this performative "oh this is making me so tired"?
scarface_74 · a year ago
Maybe if the GDPR was a simple law instead of 11 chapters and 99 sections and all anyone got as a benefit from it is cookie banners it would be different.
pessimizer · a year ago
> Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.

You could point out a specific section or page number, instead of wasting everyone's time. The vast majority of people who have an interest in this subject do not have a strong enough interest to do what you have claim to have done.

You could have shared, right here, the knowledge that came from that reading. At least a hundred interested people who would have come across the pointing out of this clear definition within the act in your comment will now instead continue ignorantly making decisions you disagree with. Victory?

octacat · a year ago
Some of the unacceptable activities include:

    AI used for social scoring (e.g., building risk profiles based on a person’s behavior) - Oh, so insurance, and credit score is banned now? And background checks.

    AI that manipulates a person’s decisions subliminally or deceptively. - Oh, so no more ads?

    AI that exploits vulnerabilities like age, disability, or socioeconomic status. - Oh, are we banning facebook now?

    AI that attempts to predict people committing crimes based on their appearance. - pretty sure that exists somewhere too.

    AI that uses biometrics to infer a person’s characteristics, like their sexual orientation. - oh, my, tiktok does not even need biometrics, just a couple of swipes. Google too, just from where you visit.

    AI that collects “real time” biometric data in public places for the purposes of law enforcement. - but cameras everywhere are ok.

    AI that tries to infer people’s emotions at work or school. - like every social network, right? or a company with toxic marketing, but without ai (hello, apple with green bubbles)

    AI that creates — or expands — facial recognition databases by scraping images online or from security cameras. - oh, this also probably exists. So companies could track clients.

fredoliveira · a year ago
I fail to see where you stand based on your line by line commentary. Are we not supposed to be against these obvious negatives? Regulation against undesired outcomes needs to start somewhere. Do you believe we should not regulate, simply because we already do some of the things that seem to fall under these individual buckets?

Deleted Comment

octacat · a year ago
Laws are nice, when they work, clear and applicable.

It would probably be about as useful as GDPR. It sounds nice on paper, but in reality it will get drowned in a lot of legalese, like with tracking-consent forms nowadays. Do you know which companies you gave consent to, and when? Me neither.

The issue with such laws is that they are extremely wide and hard to regulate/enforce/check. But passing regulation scores a few political points, while probably not being so useful in real life.

We have already been doing a lot that falls under these buckets for years; big tech uses AI for algorithms left and right. "Ooopsie, we removed your youtube channel / application, because our AI system said so. You can talk to another AI system next." - we already have these, but I don't hear any reasonable response from the EU on this.

Basically, big companies with strong legal departments would find the way around the rules. Small startups would be forced to move.

Havoc · a year ago
For once that doesn’t seem overly broad. Pretty much agree with all of the list
johndhi · a year ago
The "high risk" list is where the breadth comes in
hkwerf · a year ago
The "high risk" list, though, is essentially traditional safety functions (article 6) and functions that affect fundamental rights and access to basic services (annex III)? It's not that broad at all either.