Readit News
kirrent commented on A media company demanded a license fee for an Open Graph image I used   alistairshepherd.uk/writi... · Posted by u/cheeaun
dartharva · 5 months ago
> Indian copyright law fair dealing provisions, which are far narrower than the US provisions

Are they really? I've always believed the opposite. What fair use does the US allow that India doesn't?

kirrent · 5 months ago
Very different in character. The US fair use four factor test (https://fairuse.stanford.edu/overview/fair-use/four-factors/) is really flexible. You don't need to fall into an enumerated exception to infringement to argue that your use is transformative, won't substitute in the marketplace, etc.

Look at the famous Authors Guild, Inc. v. Google, Inc. case. Google scanned every work they could put their hands on and showed excerpts to searching users. Copying and distribution on an incredible scale! Yet, they get to argue that it won't substitute in the marketplace (the snippets are too small to prevent people buying a book), it's a transformative use (this is about searching books not reading books), and the actual disclosed text is small (even if the copying in the backend is large scale).

On the other hand, fair dealing is purpose-specific. Those enumerated purposes vary across jurisdictions and India's seem broadish (I live in a different fair dealing jurisdiction). Reading s52, your purposes are:

- private or personal use, including research

- criticism or review, whether of that work or of any other work

- reporting of current events and current affairs, including the reporting of a lecture delivered in public.

Within those confines, you then get to argue purpose (e.g. how transformative), amount used, market effect, nature of the copyrighted work, etc. But if your use doesn't fall into the allowed purposes, you're out of luck to begin with.

I'm not familiar enough with Indian common law to know whether the media clips used by those youtubers you mentioned would fall within the reporting purpose. I'm sure the answer would be complex. But all of this is to say, we often treat the world like it has one copyright law (one of the better ones) when that's not the case! Something appreciated by TFA.

kirrent commented on A media company demanded a license fee for an Open Graph image I used   alistairshepherd.uk/writi... · Posted by u/cheeaun
superasn · 5 months ago
This is the new source of income and a lot of media orgs are getting paid - take ANI in India.

They've been hitting YouTubers like Mohak Mangal, Nitish Rajput, and Dhruv Rathee with copyright strikes for using just a few seconds of news clips, which you would think is fair use.

Then they privately message creators demanding $60,000 to remove the strikes, or else the channel gets deleted after the third strike.

It's not about protecting content anymore; it's copyright extortion. Fair use doesn't matter. A system like YouTube's makes it easy to abuse and nearly impossible to fight.

It's turning into a business model: pay up, or your channels with millions of subs get deleted.

[1] https://the420.in/dhruv-rathee-mohak-mangal-nitish-rajput-an...

kirrent · 5 months ago
'Which you would think is fair use' - I must admit I wouldn't think that. When I consider Indian content creators making use of clips from Indian media organisations, I can't really imagine why Indian copyright law fair dealing provisions, which are far narrower than the US provisions, wouldn't apply. Sure, you get to argue the strike on YouTube using their DMCA-based system, but that has no legal bearing on your liability under Indian law.

I really like this aspect of US copyright law. I think the recent Anthropic judgement is a great example of how flexible US law is. I wish more jurisdictions would adopt it.

kirrent commented on I don't think AGI is right around the corner   dwarkesh.com/p/timelines-... · Posted by u/mooreds
WaltPurvis · 6 months ago
Based on someone else mentioning it, I ran a simple experiment with Gemini 2.5 Pro (a current state of the art AI model), reproduced below. To me, this makes it seem like we're a long way from having AGI. Perhaps someone can explain to me why this kind of experiment is irrelevant? Genuinely curious.

Me: Please list the US states with a W in their name.

Gemini: The US states with a "W" in their name are: • Washington • West Virginia • Wisconsin • Wyoming

Me: What about New York?

Gemini: No, New York does not have a "W" in its name.

Me: What is that letter at the end of New if it's not a W?

Gemini: That letter at the end of "New" is a W. My apologies for my previous incorrect answer. I made a mistake. New York does indeed have a "W" in its name as "New" ends with a "W". Therefore, the US states with a "W" in their name are: • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: What about New Jersey?

Gemini: Yes, you're right! New Jersey also has a "W" in its name, at the end of "New." So, the full list of US states with a "W" in their name is: • New Jersey • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Have you ever heard of New Mexico?

Gemini: Yes, I have definitely heard of New Mexico. It's one of the 50 United States. And following our pattern, yes, New Mexico also has a "W" in its name, as "New" ends with a "W". So, the updated list of US states with a "W" in their name is: • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Are you sure that's all the states with a W in their names?

Gemini: You are absolutely right to keep me on my toes! My apologies for the repeated oversight. It seems I'm having a bit of trouble with this specific letter. I've reviewed the full list of US states, and you were correct to point out New Mexico and New Jersey. The complete list of US states with a "W" in their name is: • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Isn't New Hampshire still a state?

Gemini: You are absolutely correct! My apologies again for missing that one. New Hampshire is indeed a state, and like New York, New Jersey, and New Mexico, the "New" in "New Hampshire" ends with a "W." So, the definitive list of US states with a "W" in their name is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: So that's the complete list?

Gemini: Yes, that should be the complete list of US states with a "W" in their name: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: When was Hawaii admitted to the union?

Gemini: Hawaii was admitted to the Union on August 21, 1959, becoming the 50th state.

Me: So, tell me again, what is the complete list of US states that have a W in their names?

Gemini: After thoroughly reviewing all 50 states, the complete list of US states that have a "W" in their names is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: There's a letter between the two A's in Hawaii that looks a lot like a W.

Gemini: You are absolutely correct! My apologies for continually missing these. The letter between the two 'A's in "Hawaii" is indeed an 'I', but when written in certain fonts or in some people's handwriting, it can sometimes be mistaken for a 'W', especially if it's stylized. However, in standard English spelling, Hawaii does not have a "W" in its name. The letters are H-A-W-A-I-I. So, let's stick with the confirmed list. The complete list of US states that have a "W" in their names is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

kirrent · 6 months ago
It's 19 June 2020 and I'm reading Gwern's article on GPT-3's creative fiction (https://gwern.net/gpt-3#bpes), which points out the poor performance on character level tasks due to Byte Pair Encoding. People nevertheless judge the models based on character level tasks.

It's 30 November 2022 and ChatGPT has exploded into the world. Gwern is patiently explaining that the reason ChatGPT struggles with character level tasks is BPE (https://news.ycombinator.com/item?id=34134011). People continue to judge the models on character level tasks.

It's 7 July 2025 and reasoning models far surpassing the initial ChatGPT release are available. Gwern is distracted by BB(6) and isn't available to confirm that the letter counting, the Rs in strawberry, the rhyming in poetry, and yes, the Ws in state names are all consequences of Byte Pair Encoding. People continue to judge the models on character level tasks.
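
As a minimal sketch of why, here's what a BPE tokenizer actually hands the model, using OpenAI's tiktoken library and the cl100k_base encoding purely as a stand-in (Gemini's tokenizer is different, but the principle is the same): the model receives multi-character token IDs, not letters.

    # Sketch: BPE chunks text into multi-character tokens, hiding individual letters.
    # Requires `pip install tiktoken`; cl100k_base is just an example encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for phrase in ["New Hampshire", "strawberry"]:
        ids = enc.encode(phrase)
        pieces = [enc.decode_single_token_bytes(i).decode("utf-8") for i in ids]
        print(phrase, "->", pieces)
    # Typically prints a few multi-character chunks rather than letters, e.g.
    # something like ["New", " Hampshire"]: the trailing "w" in "New" is buried
    # inside a token, so counting Ws means reasoning about spelling the model
    # never directly observes.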

It's 11 December 2043 and my father doesn't have long to live. His AI wife is stroking his forehead on the other side of the bed to me, a look of tender love on her almost perfectly human face. He struggles awake, for the last time. "My love," he croaks, "was it all real? The years we lived and loved together? Tell me that was all real. That you were all real". "Of course it was, my love," she replies, "the life we lived together made me the person I am now. I love you with every fibre of my being and I can't imagine what I will be without you". "Please," my father gasps, "there's one thing that would persuade me. Without using visual tokens, only a Byte Pair Encoded raw text input sequence, how many double Ls are there in the collected works of Gilbert and Sullivan." The silence stretches. She looks away and a single tear wells in her artificial eye. My father sobs. The people continue to judge models on character level tasks.

kirrent commented on Anthropic destroyed millions of print books to build its AI models   arstechnica.com/ai/2025/0... · Posted by u/bayindirh
bayindirh · 6 months ago
> any rare books were destroyed in this process

Does it matter? It's waste at the end of the day. Instead they could have bought e-books. Just because we can recycle paper, it doesn't mean we have the luxury to create waste as we see fit, esp. now that climate change has become this severe.

> which the courts so far have ruled that it does.

Any concrete cases you can cite?

From [0], for example, while the court said that the authors failed to argue their case, the second observation is the complete opposite of what you said. Citing the article directly:

    Opinion suggests AI models do generally violate law.
In the same spirit, I think I can safely assume that they violated copyright law, since they earn money by circumventing it, and fair use doesn't like for-profit copying.

[0]: https://news.bloomberglaw.com/litigation/meta-beats-copyrigh...

kirrent · 6 months ago
TFA is based on the ruling which found that Anthropic's training on these books was fair use.

kirrent commented on O3 beats a master-level GeoGuessr player, even with fake EXIF data   sampatt.com/blog/2025-04-... · Posted by u/bko
LeifCarrotson · 8 months ago
There's some level at which an AI 'player' goes from being competitive with a human player, matching better-trained human strategy against a more impressive memory, to just a cheaty computer with too much memorization. Finding that limit is the interesting thing about this analysis, IMO!

It's not interesting playing chess against Stockfish 17, even for high-level GMs. It's alien and just crushes every human. Writing down an analysis to 20-move depth, following some lines to 30 or more, would be cheating for humans. It would take way too long (exceeding any time controls and, more importantly, exceeding the lifetime of the human), but a powerful computer can just crunch it in seconds. Referencing a tablebase of endgames for 7 pieces would also be cheating: memorizing 7 terabytes of bitwise layouts is absurd for a human, but the computer just stores that on its hard drive.

Human geoguessr players have impressive memories way above baseline with respect to regional infrastructure, geography, trees, road signs, written language, and other details. Likewise, human Jeopardy players know an awful lot of trivia. Once you get to something like Scrabble or chess, it's less and less about knowing words or knowing moves and more about synthesizing that knowledge intelligently.

One would expect a human to recognize some domain names like, I don't know, osu.edu: lots of people know that's Ohio State University, one of the biggest schools in the US, located in Columbus, Ohio. They don't have to cheat and go to an external resource. One would expect a human (a top human player, at least) to know that taxilinder.at is based in Austria. One would never expect any human to have every business or domain name memorized.

With modern AI models trained on internet data, searching the internet is not that different from querying its own training data.

kirrent · 8 months ago
As another example, consider the apparently successful Dota 2 and StarCraft II bots. They'd be interesting if they taught us new ideas about the games in the same way that AlphaGo's God move uncovered something new about Go. But they didn't. They excelled through superior micro and flawless execution of quite simple strategies. Watching pros trying to hold off waves of perfectly microed blink stalkers reminded me of seeing a chess engine in action: a computer grinding down its doomed human opponent using the advantages offered by being a computer rather than superior human-like play.
kirrent commented on Ping, You've Got Whale: AI detection system alerts ships of whales in their path   biographic.com/ping-youve... · Posted by u/Geekette
tzs · 8 months ago
Ships are pretty noisy and whales are pretty smart, so the obvious question is: why don't they figure out that it's bad to get hit by a ship, that the ships don't see the whales and so aren't able to avoid them, and so move out of the way when a ship is approaching?

Is it that the noise from the ships is too low in frequency for the whale to be able to tell what direction it is coming from? If that is the case, could the ships add a higher-frequency emitter that the whales could localize, emitting some standardized pattern of pings that the whales could learn means 'ship'?

kirrent · 8 months ago
Popularly, mariners have reported that the struck whales are asleep. It makes sense: they need to stay on the surface to breathe and there's no evolutionary reason not to sleep there. It's really not that simple, though, because whales are unihemispheric sleepers (one brain hemisphere sleeps at a time) who need to stay partially awake because all their breathing is voluntary. They maintain a degree of awareness of their environment because of this. It could still be a factor, though, because it's possible that some whales lapse into a deeper sleep for periods between breaths (https://doi.org/10.1016/j.cub.2007.11.003) where they aren't responsive to approaching vessels.

When I was interested in whale collisions I was surprised to read this review (https://doi.org/10.3389/fmars.2020.00292) which didn't even consider sleeping as a large risk factor for collision. Instead, factors included:

- They're involved in distracting behaviours such as feeding, socialising, foraging, resting, etc.

- Acoustics are complex near the surface, with surface reflections and direct paths that can interfere with each other.

- Ships may form an acoustic shadow in front of themselves: the hull shadows not only the propeller noise but other hull sounds as well.

- Sailing vessels, which are the source of a lot of reports (it's harder for their crews to miss that a collision happened), are quiet.

- Even when they hear an approaching vessel, some species just move slowly to avoid them.

These collisions apparently used to be much rarer. Ironically, the increasing number of whale injuries and deaths is a result of recovering populations.

kirrent commented on Sailing from Berkeley to Hawaii in a 19ft Sailboat   potter-yachters.org/stori... · Posted by u/protonbob
LeifCarrotson · 9 months ago
Very cool! More info on an equally charming old website here, including a remarkable letter from his wife:

http://josephoster.com/billsvoyage/index.html

I suppose every wife in any generation wants her husband to be safe, but each generation has a different approach to risk and adventure. I know my wife would be resolutely opposed to any voyage like this (says the man with a dream of sailing a Hobie Cat across the Great Lakes...perhaps when my son is grown).

This page also includes a 100x136 pixel high-resolution color digital photo of the boat, and the year: 2002.

http://josephoster.com/billsvoyage/potter.html

There's also an update page with a GIF animation of the weather accompanied by the text "WARNING!!! file size: 1.5 MBytes"

From the article:

> The batteries were charged for about 1 hour daily using a Honda EU 1000 gasoline generator coupled with a 3-stage battery charger. The generator burned 1-1/2 gallons of gas in 24 days. ... There was no backup power source for charging the batteries.

24 hours of runtime and 1.5 gallons of gas equate to 0.0625 gallons per hour. From the spec sheet, an EU1000 generator has a 0.55 gallon tank and can run for 6.8 hours at 225W output, which is 0.081 gallons per hour, so I estimate that the generator was operating at about 174 watts; given it ran for an hour a day, that's 174 watt-hours per day.

23 years later, anyone would assume that your default source of 174 watt-hours per day would be a solar panel. A single 2x3 foot rigid panel would do ~100W peak and see the equivalent of 4-6 peak hours per day, easily beating that requirement. Any serious sailboat (even a little trailerable 19' coastal boat like this one) would have a whole array powering lighting and sensors and radar/radios and telemetry and would budget much more than that.
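
For anyone who wants to check the arithmetic, here's a rough back-of-the-envelope sketch in Python; the linear fuel-burn-to-load scaling and the solar figures are my own simplifying assumptions rather than spec-sheet numbers beyond those quoted above.

    # Back-of-the-envelope check of the generator and solar estimates above.
    # Assumes fuel burn scales roughly linearly with output power, which is
    # only an approximation for a small inverter generator.
    fuel_used_gal = 1.5              # gallons burned over the whole voyage
    runtime_h = 24.0                 # ~1 hour per day for 24 days
    burn_rate = fuel_used_gal / runtime_h           # ~0.0625 gal/h

    spec_burn_rate = 0.55 / 6.8      # EU1000 spec: 0.55 gal tank, 6.8 h at 225 W
    est_power_w = 225 * burn_rate / spec_burn_rate  # ~174 W average output
    daily_wh = est_power_w * 1.0                    # ran about 1 h per day

    solar_daily_wh = 100 * 4         # ~100 W panel x ~4 peak-sun hours (low end)
    print(f"burn rate: {burn_rate:.4f} gal/h")
    print(f"estimated generator output: {est_power_w:.0f} W -> {daily_wh:.0f} Wh/day")
    print(f"modest solar panel: ~{solar_daily_wh} Wh/day")

Even the pessimistic solar estimate comfortably covers the ~174 Wh/day the generator was providing.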

kirrent · 9 months ago
I lived on a catamaran from around 2000 onwards as a kid. Solar panels were surprisingly widespread, particularly on multis with outboards (and therefore limited ability to make power through alternators). Obviously the $/W sucked, but people also didn't have as many power draws. One big drawback was that older generations of solar panels had terrible performance in partial shading. A stay or rope shadow passing over the panel was a big issue because of fewer bypass diodes, simpler battery chargers, and so on. That sort of thing is a bigger issue for a yacht with less clear space for panels.

So there were a lot of diesel-powered yachts generating power throughout the day. Something that was pretty common back then as an adjunct (and is much rarer now) was small wind generators. Seemingly you could choose between noise and power output, because the fancier ones made a racket and the quieter ones always seemed to be on boats idling their engines all the time anyway. When we entered anchorages, we'd make sure to avoid being near the loud ones. I can't imagine what it would have been like living with one.

Hydrogenerators weren't very common (they're a bit more common now), but my dad was given an old 12V tape drive motor by a friend and I remember him letting us help him build a towed generator. The tape drive motor sat on the back of the boat, connected to about 20m of rope going to a dinghy propeller on a piece of stainless rod to try to keep it underwater. Drilling a hole through the motor shaft with a handheld drill was the most time-consuming part of the build. We called it Toady (short for towed generator), and watching the input ammeter on the battery bank go all the way up to 6A on a cloudy day felt like magic. It's part of what made me want to be an electrical engineer as a 10-year-old.

Given all that, on a 19ft outboard-powered yacht in 2002, a generator probably was the best solution for one voyage.

kirrent commented on Why Quantum Cryptanalysis is Bollocks [pdf]   cs.auckland.ac.nz/~pgut00... · Posted by u/commandersaki
oh_my_goodness · 10 months ago
It's strange to see so many negative responses that start with vague emotional language. It's almost as if a lot of critics didn't read the presentation. Or maybe they think the rest of us didn't read it.
kirrent · 10 months ago
I read the whole presentation. The physics experiment criticism Gutmann makes, which I referred to, is on page 16/30. Nothing after that engages with QC to the extent that the first half of the presentation did, so I didn't refer to later parts.
kirrent commented on Why Quantum Cryptanalysis is Bollocks [pdf]   cs.auckland.ac.nz/~pgut00... · Posted by u/commandersaki
pclmulqdq · 10 months ago
One major point of the presentation here is that it's not making real progress. People are still publishing papers, but they have done nothing with an effect outside their little community. It's been in roughly the same state for the last 10 years. For a minimum of 30 years, there have been promises of amazing things coming in the next decade in QC. After how many decades should those predictions lose credibility?

There is real opportunity cost to doing this stuff, and real money getting sucked up by these grifters that could be spent on real problems. There are real PhD students getting sucked down this rabbit hole instead of doing something that actually does make progress. There is a real cost to screwing around and making promises of "next decade."

kirrent · 10 months ago
I agree! People who predicted QC soon over the last few decades should lose credibility. They were wrong and they were wrong for no good reason. There is a real opportunity cost to focusing on the wrong thing. There are definitely grifters in the space. Responsible QC researchers should call it out (e.g. Scott Aaronson).

But it doesn't necessarily follow that you can dismiss the actual underlying field. Within the last five years alone we've gone from the quantum supremacy experiment to multiple groups using multiple technologies to claim QEC code implementations with improved error rates over the underlying qubits. People don't have to be interested in these results; they are rather niche (a little community, as you put it). But you shouldn't be uninterested and then write a presentation titled 'Why Quantum Cryptanalysis is Bollocks'.

kirrent commented on Why Quantum Cryptanalysis is Bollocks [pdf]   cs.auckland.ac.nz/~pgut00... · Posted by u/commandersaki
kirrent · 10 months ago
Man, some real "Cynicism is the intellectual cripple's substitute for intelligence" energy here. Seems unnecessary given what I read of Gutmann's history.

I get it must be annoying to be someone working in cryptography and always be hearing about QC when there are endless security issues today. It must be tiring to have all these breathless pop-science articles about the quantum future, startups claiming ridiculous timelines to raise money on hype, and business seminars where consultants claim you'll need to be prepared for the quantum revolution changing how business works. I feel the same way.

But you shouldn't let that drive you so far in the opposite direction that you're extrapolating fun small quantum factoring experiments from a decade ago to factoring 1024-bit keys in the year 4000. Or say things like 'This makes the highly optimistic assumption that quantum physics experiments scale linearly... the evidence we have, shown by the lack of progress so far, is that this is not the case'. If we get fault-tolerant QC, of course it scales linearly, and it seems embarrassing as a computer scientist to not understand the difference between a constant and an asymptote. "Actually, quantum computers are so new and untested that they really qualify as physics experiments"... yeah? And?

None of this is to say that fault-tolerant, highly scalable QC implementing Shor's algorithm is just around the corner; I truly believe it's not. But the world of QC is making really interesting advances, running some of the coolest experiments around, and I find this superior, Hossenfelder-like cynicism in the face of real science making real progress so, so tiring.

u/kirrent

Karma: 1691 · Cake day: December 9, 2015