sholain commented on Going Through Snowden Documents, Part 1   libroot.org/posts/going-t... · Posted by u/libroot
array_key_first · 5 days ago
It's not strange, it's purposeful. It's the same logic as "well George Floyd had a counterfeit 20!"

It's an extremely effective propaganda technique: you discredit the person(s) affected by an injustice while simultaneously shifting the narrative away from said injustice. It preys on the human mind's simplest moral reasoning - bad people don't do good things, and good people don't do bad things.

Of course, that's not how it works; both can be true. George Floyd may have passed a counterfeit twenty, and that's illegal. But is the punishment for that public execution? What motivation do people have to bring that up? No good ones, in my mind.

sholain · 4 days ago
A complete mischaracterization.

George Floyd had ingested a large amount of fentanyl - possibly enough to be fatal on its own, though the finding was inconclusive - and that biological and medical reality characterized the situation in a very real way.

Snowden released a lot of information that had nothing to do with 'whistle-blowing' and that enormously benefited very bad actors such as China and Russia - it was a windfall for them, and it destroyed years of work by Western intelligence agencies.

This came right after China had discovered and executed a handful of CIA personnel, so the possible repercussions of such a release were very, very clear.

His actions were inconsistent with those of someone interested only in whistle-blowing or 'exposing hypocrisy' around espionage; there are any number of ways to whistle-blow that do not produce such negative outcomes. Since he's smart enough to know better, it's rational to entertain the possibility of ulterior motives.

Russia's espionage and influence campaigns are having a severely negative effect on the political situation in the US and the West in general; they have deeply penetrated many nations' security and political apparatuses, especially Germany's.

sholain commented on Going Through Snowden Documents, Part 1   libroot.org/posts/going-t... · Posted by u/libroot
sunaookami · 5 days ago
As the other commenter said, the crimes the NSA committed (and still commits) far outweigh any "crimes" Snowden committed. And whistle-blowing is by definition illegal, since you have to release confidential files. That's why functioning countries should have laws protecting whistleblowers.
sholain · 5 days ago
Whistle-blowing is not illegal (in the US) - that's what the laws are there for - though obviously it's dicey, depends on media portrayal, and those laws could stand to be reinforced.

The Abu Ghraib (Iraq prison scandal) whistle-blower was protected by the system even if some people were very upset.

sholain commented on Going Through Snowden Documents, Part 1   libroot.org/posts/going-t... · Posted by u/libroot
sunaookami · 6 days ago
This comment section is strange - a lot of people trying to discredit Snowden, saying he shouldn't have released the files, should be in prison, etc. Twelve years ago this was HUGE news, had a major impact on the internet, and everyone thanked Snowden for these documents! I certainly am thankful. I'm disappointed in my country: they literally said that "spying between friends is a no-go" but then did nothing, intimidated journalists, and legalized the spying instead. And thanks to the author for giving the documents another look - I found it very interesting. There is also part 2: https://libroot.org/posts/going-through-snowden-documents-pa...
sholain · 5 days ago
One cannot just release whatever one wants, and some of the docs should not have been released.

There were huge variations in the nature of the content that he released, and this is the problem with the narrative.

He's a 'whistle blower' and 'broke the law' at the same time.

A lot of people seem to have difficulty with that.

Edit: we need better privacy laws and transparency around a lot of things; that said, some state actors are going to need to be around for a long while yet. It's a complicated world, none of this is black and white - which is why we need vigilance.

sholain commented on I failed to recreate the 1996 Space Jam website with Claude   j0nah.com/i-failed-to-rec... · Posted by u/thecr0w
martin-t · 7 days ago
> I don't think it's even reasonable to suggest that 1000 people all coming up with variations of some arbitrary bit of code either deserve credit

There are 8B people on the planet; probably ~100M can code to some degree[0]. Something only 1k people write is actually pretty rare.

Where would you draw the line? How many out of how many?

If I take a leaked bit of Google or MS or, god forbid, Oracle code and manage to find a variation of each small block in a few other projects, does it mean I can legally take the leaked code and use it for free?

Do you even realize to what lengths the tech companies went just a few years ago to protect their IP? People who ever even glanced at leaked code were prohibited from working on open source reimplementations.

> That scenario is already today very well accepted legally and morally etc as public domain.

1) Public domain is a legal concept, it has 0 relevance to morality.

2) Can you explain how you think this works? Can a person's work just automatically become public domain somehow by being too common?

> Copyleft is not OSS, it's a tiny variation of it, which is both highly ideological and impractical.

This sentence seems highly ideological. Linux is GPL, in fact, probably most SW on my non-work computer is GPL. It is very practical and works much better than commercial alternatives for me.

> Less than 2% of OSS projects are copyleft.

Where did you get this number? Using search engines, I get 20-30%.

[0]: This is the number of GitHub users, though there are reportedly only ~25M professional SW devs; many more people can code but don't do it professionally.

sholain · 7 days ago
+ Once again: 1,000 people coming up with some arbitrary bit of content is already understood in basically every legal regime in the world as 'public domain'.

"Can you explain how you think this works? Can a person's work just automatically become public domain somehow by being too common?"

Please ask ChatGPT for the breakdown, but start with this: if someone writes something and does not copyright it, it's already in the 'public domain', and what the other 999 people do does not matter. Moreover, a lot of things are not copyrightable in the first place.

FYI I've worked at Fortune 50 Tech Companies, with 'Legal' and I know how sensitive they are - this is not a concern for them.

It's not a concern for anyone.

'One Person' reproduction -> now that is definitely a concern. That's what this is all about.

+ For OSS, I think the 20% number may come from repos that are explicitly licensed. Out of 'all repos' it's a very tiny share; of those with specific licensing details it's closer to 20%. You can verify this yourself just by cruising repos. The breakdown could be different for popular projects, but in the context of AI and IP rights we're more concerned about 'small entities' being overstepped, as the more institutional entities may have recourse and protections.
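The 2% vs 20% gap above comes down to the denominator: share of all repos versus share of repos with a detected license. A minimal sketch of that arithmetic (the license counts below are made up for illustration, not survey data):

```python
# Common copyleft SPDX-style license keys (illustrative subset).
COPYLEFT = {"gpl-2.0", "gpl-3.0", "agpl-3.0", "lgpl-3.0"}

def copyleft_share(counts, include_unlicensed=True):
    """Fraction of repos under a copyleft license.

    counts maps a license key (or None for "no detected license")
    to a repo count. With include_unlicensed=False, only repos
    with an explicit license form the denominator.
    """
    total = sum(n for key, n in counts.items()
                if include_unlicensed or key is not None)
    if total == 0:
        return 0.0
    copyleft = sum(n for key, n in counts.items() if key in COPYLEFT)
    return copyleft / total

# Hypothetical sample: 10,000 repos, 9,000 with no license file.
sample = {"mit": 550, "apache-2.0": 250, "gpl-3.0": 180,
          "gpl-2.0": 20, None: 9000}

print(copyleft_share(sample))                            # 0.02 of all repos
print(copyleft_share(sample, include_unlicensed=False))  # 0.2 of licensed repos
```

With these made-up counts the same 200 copyleft repos read as 2% of everything but 20% of explicitly licensed projects, so both figures in the thread can describe one dataset.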

I think the way this will play out is: if LLMs produce material that could be considered infringing, they'll get sued. If they don't - they won't.

And that's it.

It's why they don't release the training data - it's full of stuff that's in a legal grey area.

sholain commented on I failed to recreate the 1996 Space Jam website with Claude   j0nah.com/i-failed-to-rec... · Posted by u/thecr0w
martin-t · 9 days ago
> if 1K people have done similar things and the AI learns from that, well, I don't think credit is something that should apply.

I think it should.

Sure, if you make a small amount of money and divide it among the 1000 people who deserve credit due to their work being used to create ("train") the model, it might be too small to bother.

But if actual AGI is achieved, then it has nearly infinite value. If said AGI is built on top of the work of the 1000 people, then almost infinity divided by 1000 is still a lot of money.

Of course, the real numbers are way larger, LLMs were trained on the work of at least 100M but perhaps over a billion of people. But the value they provide over a long enough timespan is also claimed to be astronomical (evidenced by the valuations of those companies). It's not just their employees who deserve a cut but everyone whose work was used to train them.

> Some people might consider this the OSS dream

I see the opposite. Code that was public but protected by copyleft can now be reused in private/proprietary software. All you need to do is push it through enough matmuls and some nonlinearities.

sholain · 9 days ago
- I don't think it's even reasonable to suggest that 1000 people all coming up with variations of some arbitrary bit of code deserve credit - or certainly not 'financial remuneration' - for writing it.

That scenario is already today very well accepted legally and morally etc as public domain.

- Copyleft is not OSS; it's a small variant of it, one that is both highly ideological and impractical. Less than 2% of OSS projects are copyleft. It's a legit perspective, obviously, but it hasn't been representative for 20 years.

Whatever we do with AI, we already have a basic understanding of public domain, at least we can start from there.

sholain commented on OpenAI needs to raise at least $207B by 2030   ft.com/content/23e54a28-6... · Posted by u/akira_067
riffraff · 21 days ago
Gemini-cli existed long before Antigravity. It took Google very little.

And the gemini app will come preloaded on any android phone, who else can say the same?

sholain · 9 days ago
Yes agree with the sentiment - G has the reach for sure.
sholain commented on I failed to recreate the 1996 Space Jam website with Claude   j0nah.com/i-failed-to-rec... · Posted by u/thecr0w
nextos · 9 days ago
In case of LLMs, due to RAG, very often it's not just learning but almost direct real-time plagiarism from concrete sources.
sholain · 9 days ago
RAG and LLMs are not the same thing, but 'Agents' incorporate both.

Maybe we could resolve the conundrum raised by the OP by requiring 'agents' to give credit for things they did RAG or pull off the web?

It still doesn't resolve the 'inherent learning' problem.

It's reasonable to suggest that if 'one person did it, we should give credit' - at least in some cases - and also reasonable that if 1K people have done similar things and the AI learns from that, well, credit shouldn't apply.

But a couple of considerations:

- It may not be that common for an LLM to 'see one thing one time' and then produce such an accurate assessment of the solution. It helps, but LLMs tend not to 'learn' things that way.

- Some people might consider this the OSS dream: any code that's public is public, and it's in the public domain. We don't need to 'give credit' to someone because they solved something relatively arbitrary - or, if they are concerned with that, we can have a separate mechanism for it: they can put it on GitHub or even Wikipedia, and then we can worry about 'who thought of it first' as a separate consideration. But in terms of engineering application, that would be a bit of a distraction.

sholain commented on OpenAI declares 'code red' as Google catches up in AI race   theverge.com/news/836212/... · Posted by u/goplayoutside
lateforwork · 15 days ago
OpenAI has already lined up enormous long-term commitments — over $500 billion through initiatives like Stargate for U.S. data centers, $250 billion in spending on Microsoft Azure cloud services, and tens of billions on AMD’s plan to deliver 6 GW of Instinct GPUs. Meanwhile, Oracle has financed its role in Stargate with at least $18 billion in corporate bonds plus another $9.6 billion in bank loans, and analysts expect its total capital need for these AI data centers could climb toward $100 billion.

The risk is straightforward: if OpenAI falls behind or can’t generate enough revenue to support these commitments, it would struggle to honor its long-term agreements. That failure would cascade. Oracle, for example, could be left with massive liabilities and no matching revenue stream, putting pressure on its ability to service the debt it already issued.

Given the scale and systemic importance of these projects — touching energy grids, semiconductor supply chains, and national competitiveness — it’s not hard to imagine a future where government intervention becomes necessary. Even though Altman insists he won’t seek a bailout, the incentives may shift if the alternative is a multi-company failure with national-security implications.

sholain · 14 days ago
"it would struggle to honor its long-term agreements. That failure would cascade. Oracle, for example, could be left with massive liabilities and no matching revenue stream,"

No - there's a lot of noise about this, but these are just 'statements of intent'.

Oracle very intimately understands OpenAI's ability to pay.

They're not banking $50B in chips and then waking up naively one morning to find out OpenAI has no funding.

What will 'cascade' is maybe some sentiment, or analysts' expectations, etc.

Some of it, yes, will be a problem - but at this point the data centre buildout is not an OpenAI-driven bet, it's a horizontal bet across tech.

There's not that much risk in OpenAI not raising enough to expand as much as it wants.

Frankly - a CAPEX slowdown would hit US GDP growth and freak people out more than anything.
