WJW · 2 years ago
> The risks in terms of privacy and copyright infringements are currently too high, State Secretary Alexandra van Huffelen (Digital Affairs) said in a draft proposal...

Seems reasonable to me.

> Van Huffelen added that she doesn’t want to write off generative AI use within the government entirely. She plans various experiments to see how government services can use the technology safely. The pilots should be ready by mid-2024, after which the government can draw up guidelines for the responsible use of AI. There will also be a training program for civil servants.

This is pretty much how I'd like the issue to be handled by governments. The technology is clearly potentially useful, but there is a lot we do not understand about it yet. Experimenting with it until we have a better understanding is the way to go, and starting mid-2024 is early enough. The article headline is slightly overdone though. To me it suggests a permanent ban, while this is nothing of the sort.

I'm Dutch though, so maybe I'm biased.

vanderZwan · 2 years ago
As another Dutch person I'm actually disappointed that this is purely limited to generated images or text, given that the Childcare Benefits Scandal[0] also involved machine learning algorithms that encoded institutional biases against parts of our society (among many other structural issues, but let's focus on the one relevant to this topic).

But then again, I no longer live in the Netherlands so maybe I missed an earlier memo that the government decided to no longer do that kind of thing. I highly doubt it though.

[0] https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scand...

creesch · 2 years ago
> also involved machine learning algorithms

I could be wrong, but afaik it involved "just" algorithms, not machine learning. That's a meaningful distinction, as traditional algorithms can be audited much more easily with regard to how they make their calls.

Which is also (although with limited success) what they have been focussing on with things like the algorithm register (https://www.digitaleoverheid.nl/overzicht-van-alle-onderwerp...)

wouldbecouldbe · 2 years ago
I know the Dutch government; I did a lot of dev work for them. These pilots mean a lot of taxpayer money.

Half of the pilots won't be completed and the other half won't be used.

There is no way that 6-7 months of experimenting will lead to different insights than we have now. It does mean a lot of expensive consultants and reports.

The only useful thing would be an on-premise solution, which I'm sure lots of startups are working on; not something these pilots will be able to manage in 6-7 months without a lot of luck (finding the right people) & cash.

lozenge · 2 years ago
Yes, this seems like somebody seeing AI as an area they can "take over" including prestige and a headcount.

In this case, every department of the government already has a legal team, a privacy and data use officer, etc, responsible for the use of any product. Why does this need to be centralised?

In my org, somebody got hold of 50 license seats for a popular code generating AI, and required users to fill in lengthy forms and provide regular feedback about their "use cases" and "experience". Somehow, this "pilot" never ends, because then their makework would end.

SkyBelow · 2 years ago
The real issue seems to be data governance, and people forgetting good data governance when they interact with AI. Does it matter whether you ask ChatGPT to organize a list of PII or ask a random Reddit user to do it? Both leak data to an unauthorized third party that can leak it elsewhere. If the model is entirely self-hosted and is treated like a database, so that any PII it has ever been given is assumed to still be in it (even if the average user can't find a way to extract it), then the privacy concerns are no different from those of any other self-hosted application that stores data. You will need to treat it as a data source where every user has either full admin access or no access, unlike a database where you can delegate well-controlled partial access.
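To make the boundary concrete: a hypothetical, deliberately naive pre-flight gate might refuse to forward any prompt that matches obvious PII patterns to an external model. This is only a sketch of the principle; the pattern names and `gate_prompt` function are made up for illustration, and real PII detection needs far more than a couple of regexes.

```python
import re

# Naive illustrative patterns: an email address and a 10-digit
# Dutch-style phone number starting with 0. Real detection would
# need NER, checksums (e.g. for BSN numbers), and much more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "dutch_phone": re.compile(r"\b0\d{9}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of the PII patterns that match `text`."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def gate_prompt(text: str) -> str:
    """Raise instead of returning a prompt that appears to contain PII."""
    hits = find_pii(text)
    if hits:
        raise ValueError(f"refusing to forward prompt: possible PII ({', '.join(hits)})")
    return text  # a real system would only now send this to the model API
```

The point of the all-or-nothing framing above is exactly that such a gate sits at the boundary: either nothing sensitive crosses it, or you must assume the model now permanently contains what crossed.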

On the copyright side, that seems more uniquely tied to current AI technology so the accusation of AI being a buzzword wouldn't apply.

interactivecode · 2 years ago
Indeed, to me that's not a ban, but a restriction on the use of possibly risky software until they have had the time to draw up guidelines. Honestly, everyone working in a company or large organisation ought to have a similar stance. Until we know for sure, don't use it.
Aeolun · 2 years ago
It's not that hard to just not stick your sensitive data in. I find it absurd that people would even consider it.

I had a whole presentation today on how important it was to 'scrub' your data before sticking it in a generative model, and I was just like 'hell no', I'm not doing that because data that's supposed to be scrubbed shouldn't go anywhere near the model.

deafpolygon · 2 years ago
It's reasonable - I think they just want to be able to get policy in place before we have politicians typing in state secrets like "How can I get Rutte to _____?" I wouldn't like it (also Dutch, hallo!).
FirmwareBurner · 2 years ago
Wasn't there a major Dutch scandal involving families wrongfully being denied welfare on the basis of "the algorithm said so"?
contravariant · 2 years ago
This being Dutch politicians dealing with technology it wouldn't surprise me if they completely miss the point. In fact by basically only mentioning ChatGPT they may have already.

You can do quite a lot of harm with very simple models, all you need is unclear rules, little oversight and no avenues for escalation. Which in fact did go wrong, terribly, plunging tens of thousands of households into bankruptcy. You don't need something capable of modelling human language to make things worse. At most a more powerful model can simply do more, but the risks are roughly the same.

And if neural networks writing obtuse language with little empathy following a complex set of rules are causing problems then I've got some bad news about Dutch officials.

narinxas · 2 years ago
this is an instance of the mistake of putting safety first, before health.

if safety is more important than health, that means that it is ok to shoot first and ask questions later

mistrial9 · 2 years ago
this is a quick anecdote from California.. a very recent paper by Google on the "AI Opportunity Agenda" starts out with very optimistic claims about AI for administering cities, the example is "timing stoplights to increase traffic efficiency and reduce carbon emissions" .. yet, what is actually implemented now in the SF Bay Area is .. automatic license plate readers at the bridges 24x7, and new Gov Newsom-approved automatic speeding ticket generators on major roads in the largest cities. Traffic cameras at intersections have been implemented more and more over the past ten years.

How can an intelligent citizen hear "optimized traffic lights" in the industry white paper, and see "automatic speeding tickets" in reality? What number of cars on the road adhere to the posted limit at all times? .. let's be blunt, it is a money machine for local govt.

Other examples of distributing public benefits, or handling public appeals, could easily see this level of duplicity. One thing is said in PR, another thing entirely is prevalent in practice with citizens.

I claim this example is relevant to the Dutch concerns because a) the Netherlands is similarly diverse, crowded and high-tech, and b) national-level analysis has detected potential for abuse at the local govt level on a wide scale.

swatcoder · 2 years ago
It’s funny how language makes all the difference in telling stories like yours:

A vendor published a marketing flyer suggesting ways their new expensive technology could improve services typical of a potential customer.

The customer considered the marketing pitch in the flyer and found the ROI unconvincing in most cases, especially given that the technology remains largely unproven and would leave them at high risk of failure or cost surprises as an early adopter.

Nonetheless, they did see at least one place where the implementation was more mature and might pay for itself (and perhaps even contribute revenue) while fulfilling an ongoing service mandate.

— —

Sounds like a dutiful government to me! Rather than being sucked into a flashy sales pitch by a vendor pitching an unproven and expensive product, they’ve stood firm and decided to mostly wait for others to absorb the early adopter risks, carefully stewarding public money.

That governments can sometimes do very stupid and irresponsible things doesn’t mean that they’re doing so every time.

reidjs · 2 years ago
I hate receiving speeding tickets as much as the next guy, but I am pro automated speeding tickets. I must be getting old.

- May encourage slower (safer) driving habits, fewer accidents

- Automates some police duties, frees law enforcement to deal with actual crimes

- City generates more money without raising taxes

These are all hypothetical, and obviously corruption and apathy pave over most of it, but in principle, automated speeding tickets are a great use of "AI"

polski-g · 2 years ago
There seems to be a strange middle ground in which people think that speed limits should exist but we shouldn't try too hard to enforce them. It's like they just want a little bit of criminality, but not too much. If you are opposed to speed limit enforcement, then just get rid of the limits. We do not need "sometimes laws".
mistrial9 · 2 years ago
there is an expression in English: putting the cart before the horse. I think it applies here.

It is not "we wish things to be this way, so they are this way" .. the real world is not like that. For several thousand years, new technologies like carts or horse stirrups have changed the balance of power between groups of humans who use them and groups that do not; in this example, the cart for the agrarian, the stirrup for the hunting pack.

When new technology is implemented, the behavior of all humans at all times does not change instantly, nor does it change "because we want this". It changes in fact, in practice, and over weeks or decades.

Right now, there is some agreement among humans driving cars. They drive a certain style, on roads, with law enforcement and prices. Costs are applied for goods like tires or gas, and also penalties like speeding tickets due to a speed limit that is posted in a way that is considered fair and public... in most places.

When I learned to drive, the legend of the "hidden speed limit sign" was dinner table talk. The ability of a small town to generate revenue for its police force by waiting for people who are not fairly informed, or really by the police just lying, is common knowledge, because it is real.

Your comment makes it appear as if whole populations "want criminality", which appears to be a conveniently simple, driver-blaming point of view. It sounds ok in two sentences, but does not hold up to deeper inspection of policy over time in real places. You might say "oh, we are not in some small town" .. well, guess what: large cities have corruption, believe it. Whole countries have corruption, in fact.

My post stands unedited.

pimpampum · 2 years ago
This is the right decision for now. Government officials have access to lots of data that should not end up on OpenAI's servers. Long-term, they should have their own in-house solution, or some ministry of digital infrastructure.
informatimago · 2 years ago
It is a political and societal choice.

I could understand that a people decides it should be governed with purely human processes, and therefore that its government and civil servants have to use only human brain power (and compassion) to govern and administrate.

I would be afraid of missing some good or even just rational decision making, but I think we can all agree that a 100% AI government or administration would be a bad idea. So how do we ensure that there's always a reasonable and informed human deciding, even with the help of AI and computer models?

The wisdom of this decision should probably spread widely, for example when considering the decision making that has been done, and keeps being done, around things like climate change MODELS and COVID spreading MODELS.

huijzer · 2 years ago
This makes a lot of sense in my opinion. OpenAI has only satisfied the absolute minimum requirements for their data controls. If you disable "Chat History & Training" (shown as "Chat History" in the app), then a lot of functionality is blocked: specifically, voice interactions (speech-to-text works, but not the full touch-free voice mode) and GPTs.

I do have to compliment OpenAI on the fact that they did improve the situation with the new interface. Browsing and image generation do work when "Chat History & Training" is disabled, whereas they did not a few weeks ago. Apart from the missing GPTs, ChatGPT appears quite usable with "Chat History & Training" disabled.

zimmen · 2 years ago
To be fair, the Dutch government basically banned the use of any kind of intelligence a long time ago.
