Doubt it would have changed anything for Bill. There's a pattern there and this is just a piece of that pattern.
We should strive for extremely limited power for our public representatives, so that the impact of their corruption is reduced to a minimum. And not just limited power, but also limited budget access, as an extension of limiting that power. That, in practice, means reduced taxation.
But at the same time, the budget for the justice system needs to increase. It should probably be the strongest branch of the government. Delayed justice is one of the most common forms of injustice.
Corruption within private companies is irrelevant, as the main ones to suffer from it are usually shareholders. Government has no say in that, unless companies break the law, which is why a strong justice system is necessary. With a smaller state there is also far less risk of private companies and individuals corrupting public representatives.
Monopolies are not always a negative outcome in a free market, if the monopolist reaches that position by offering better products within the law. However, they can be especially dangerous when they're artificially created by the government (e.g. allocation of a common resource to a specific company, where corruption almost always follows).
As for the budget, a country needs money to do stuff; if it doesn't have money, it can't do stuff. Stuff can range from having the world's biggest army (several times over) to providing free education to everyone (the great social equalizer IMO, in the sense of social mobility).
As for your justice argument, it depends - if power corrupts, wouldn't giving more power to the justice system corrupt it as well? Look at what's happening in the US with various law enforcement branches getting a lot of money - the militarization of local police forces, for example, meaning they have the means to apply more violence.
TL;DR: governments and justice systems need a clear description of what they can and cannot do, plus checks, balances and consequences for when they overstep.
> Corruption within private companies is irrelevant, as the main ones to suffer from it are usually shareholders.
This ignores the vast majority of people involved with a private company - the customers. Or even the non-customers who are still affected by what a private company does (think e.g. pollution), but that's where, as you say, the law should come in.
Exactly my takeaway from current AI developments as well. I'm also confused by corporate leadership and management who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will be any better at composing these tools than the developers who've been working with this technology for years? Their job security rests precisely on the fact that we are limited by time and need managed teams of humans to create larger projects. If that limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. Very weird horse to bet on, unless you are part of a frontier AI company that actually controls the resources.
More modern-day, low/no-code platforms are advertised as such... and yet they don't replace software developers. (In fact, some projects at my employer involve migrating away from low/no-code platforms in favor of code, because performance and other non-functional requirements are hidden away. We had a major outage as a result when traffic increased.)
Arguably, because LLM tokens are expensive, LLM-generated code could be considered a donation? But then so is the labor involved, so it's kind of moot. I don't believe people pay software developers to write code for them to contribute to open source projects either (if that makes any sense).
I've trained in a similar fashion for my trips to Aconcagua and Nepal, and never researched it or discussed it with anybody. Up there you carry big backpacks a lot, or smaller backpacks for 10-12h each day, every day, in places where the lack of oxygen easily leaves you out of breath within 5-10 steps when walking uphill. It figures that when training for strength-endurance there need to be a lot of repetitions with some added weight.
I just put some weights in a backpack in the building I was living in back then, hiked up the 8 floors of stairs, took the elevator down, rinse and repeat many times. Or used an elliptical trainer with the same backpack. Or other movements/machines (just don't run with that weight).
I have no idea whether specialized tools can reliably detect AI writing, but as someone whose writing on forums like HN has been accused a couple of times of being AI, I can say that humans aren't very good at it. So far, my limited experience of being falsely accused suggests it's partly just a bias against being a decent writer with a good vocabulary who sometimes writes longer posts.
As for the reliability of specialized tools in detecting AI writing, I'm skeptical at a conceptual level because an LLM can be reinforcement trained with feedback from such a tool (RLTF instead of RLHF). While they may be somewhat reliable at the moment, it seems unlikely they'll stay that way.
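To make that feedback-loop argument concrete, here's a toy sketch (everything in it is made up for illustration; no real LLM, detector, or training API is involved): treat a detector's "probability this is AI" as a negative reward and run a simple policy-gradient loop against it. The generator drifts toward whatever the detector misses.

```python
# Toy, self-contained sketch of the loop described above (all names and
# numbers are made up for illustration; no real LLM or detector involved).
# A "generator" picks a writing style, a pretend "detector" scores it, and
# REINFORCE nudges the generator toward whatever the detector can't catch.
import math
import random

STYLES = ["stock-phrase heavy", "plain", "idiosyncratic"]
# Hypothetical detector: probability it flags each style as AI-written.
DETECTOR_P_AI = {"stock-phrase heavy": 0.9, "plain": 0.5, "idiosyncratic": 0.1}

logits = [0.0, 0.0, 0.0]  # generator's (trainable) preference over styles
LR = 0.1

def style_probs():
    z = [math.exp(l) for l in logits]
    total = sum(z)
    return [x / total for x in z]

for step in range(2000):
    probs = style_probs()
    i = random.choices(range(len(STYLES)), weights=probs)[0]
    reward = 1.0 - DETECTOR_P_AI[STYLES[i]]  # reward = "looks human to the detector"
    # REINFORCE update: d log p(i) / d logit_j = 1{i=j} - p_j
    for j in range(len(logits)):
        logits[j] += LR * reward * ((1.0 if j == i else 0.0) - probs[j])

print({s: round(p, 2) for s, p in zip(STYLES, style_probs())})
# Probability mass ends up on the style the detector fails to flag,
# which is the sense in which detector reliability erodes over time.
```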
Unfortunately, since there are already companies marketing 'AI detectors' to academic institutions, they won't stop marketing them even as their reliability continues to get worse. Which will probably result in a growing shit show of false accusations against students.
I'm on Reddit too much, and a few times there were memes or whatever that were later pointed out to be AI. And those are the ones that had tells; more and more (as prices come down and the effort/expenditure put into generation goes up), it will become harder to impossible to tell.
And I have mixed feelings. I don't mind so much for memes; there's little difference between low-effort image editing and low-effort image generation IMO. Then there are the "advice" / "story" posts, which for a long time now have been more of a creative writing exercise than true stories; it's a race to the bottom already and AI will only accelerate it. But sometimes it's entertaining.
But "fake news" is the dangerous one, and I'm disappointed that combating this seemed to be a passing fad now that the big tech companies and their leaders / shareholders have bent the knee to regimes that are very interested in spreading disinformation/propaganda to push their agenda under people's skins subtly. I'm surprised it's not more egregious tbh, but maybe it's because my internet bubbles are aligned with my own opinions/morals/etc at the moment.
It feels ... strangely empowering.