Readit News
rnkn commented on Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book   understandingai.org/p/met... · Posted by u/aspenmayer
paxys · 9 months ago
As an experiment I searched Google for "harry potter and the sorcerer's stone text":

- the first result is a pdf of the full book

- the second result is a txt of the full book

- the third result is a pdf of the complete harry potter collection

- the fourth result is a txt of the full book (hosted on github, funnily enough)

Further down there are similar copies from the internet archive and dozens of other sites. All in the first 2-3 pages.

I get that copyright is a problem, but let's not pretend that an LLM that autocompletes a couple lines from harry potter with 50% accuracy is some massive new avenue to piracy. No one is using this as a substitute for buying the book.

rnkn · 9 months ago
You were so close! The takeaway is not that LLMs represent a bottomless tar pit of piracy (they do) but that someone can immediately perform the task 58% better without the AI than with it. This is nothing more than “look what the clever computer can do.”
rnkn commented on What happens when people don't understand how AI works   theatlantic.com/culture/a... · Posted by u/rmason
benjismith · 9 months ago
A similar kind of question about "understanding" is asking whether a house cat understands the physics of leaping up onto a countertop. When you see the cat preparing to jump, it takes a moment and gazes upward to its target. Then it wiggles its rump, shifts its tail, and springs up into the air.

Do you think there are components of the cat's brain that calculate forces and trajectories, incorporating the gravitational constant and the cat's static mass?

Probably not.

So, does a cat "understand" the physics of jumping?

The cat's knowledge about jumping comes from trial and error, and its brain builds a neural network that encodes the important details about successful and unsuccessful jumping parameters, even if the cat has no direct cognitive access to those parameters.

So the cat can "understand" jumping without having a "meta-understanding" about their understanding. When a cat "thinks" about jumping, and prepares to leap, they aren't rehearsing their understanding of the physics, but repeating the ritual that has historically led them to successful jumps.

I think the theory of mind of an LLM is like that. In my interactions with LLMs, I think "thinking" is a reasonable word to describe what they're doing. And I don't think it will be very long before I'd also use the word "consciousness" to describe the architecture of their thought processes.

rnkn · 9 months ago
That’s interesting. I thought your cat analogy (which I really liked) was going to be an example of how LLMs do not have understanding the way a cat understands the skill of jumping. But then you went the other way.
rnkn commented on What happens when people don't understand how AI works   theatlantic.com/culture/a... · Posted by u/rmason
xyzal · 9 months ago
To me, it's empathetic and caring. Which the LLMs will never be, unless you give money to OpenAI.

Robots won't go get food for your sick, dying friend.

rnkn · 9 months ago
A robot could certainly be programmed to get food for a sick, dying friend (I mean, don't drones deliver Uber Eats?) but it will never understand why, or have a phenomenal experience of the act, or have a mental state of performing the act, or have the biological brain state of performing the act, or etc. etc.
rnkn commented on What happens when people don't understand how AI works   theatlantic.com/culture/a... · Posted by u/rmason
lordnacho · 9 months ago
The article skirts around a central question: what defines humans? Specifically, intelligence and emotions?

The entire article is saying "it looks kind of like a human in some ways, but people are being fooled!"

You can't really say that without at least attempting the admittedly very deep question of what an authentic human is.

To me, it's intelligent because I can't distinguish its output from a person's output, for much of the time.

It's not a human, because I've compartmentalized ChatGPT into its own box and I'm actively disbelieving. The weak form is to say I don't think my ChatGPT messages are being sent to the 3rd world and answered by a human, though I don't think anyone was claiming that.

But it is also abundantly clear to me that if you stripped away the labels, it acts like a person acts a lot of the time. Say you were to go back just a few years, maybe to covid. Let's say OpenAI travels back with me in a time machine, and makes an obscure web chat service where I can write to it.

Back in covid times, I didn't think AI could really do anything outside of a lab, so I would not suspect I was talking to a computer. I would think I was talking to a person. That person would be very knowledgeable and able to answer a lot of questions. What could I possibly ask it that would give away that it wasn't a real person? Lots of people can't answer simple questions, so there isn't really a way to ask it something specific that would work. I've had perhaps one interaction with AI that would make it obvious, in thousands of messages. (On that occasion, Claude started speaking Chinese with me, super weird.)

Another thing that I hear from time to time is an argument along the line of "it just predicts the next word, it doesn't actually understand it". Rather than an argument against AI being intelligent, isn't this also telling us what "understanding" is? Before we all had computers, how did people judge whether another person understood something? Well, they would ask the person something and the person would respond. One word at a time. If the words were satisfactory, the interviewer would conclude that you understood the topic and call you Doctor.

rnkn · 9 months ago
> isn't this also telling us what "understanding" is?

When people start studying theory of mind someone usually jumps in with this thought. It's more or less a description of Functionalism (although minus the "mental state"). It's not very popular because most people can immediately identify a phenomenon of understanding separate from the function of understanding. People also have immediate understanding of certain sensations, e.g. the feeling of balance when riding a bike, sometimes called qualia. And so on, and so forth. There is plenty of study on what constitutes understanding and most healthily dismiss the "string of words" theory.

rnkn commented on Subvert – Collectively owned music marketplace   subvert.fm/... · Posted by u/cloudfudge
freedomben · a year ago
I think this is great, but I do hope thought is being put into solving the hardest problem of all IMHO: Music Discovery

I have bought a lot on Bandcamp, but would have bought 10x more if I could just find stuff I liked. The existing system makes discovery nearly impossible unless you happen to like the stuff being mainly bought and curated or are in a lucky genre.

Discoverability is especially hard because 99% of the music people create sucks. This may not seem true if you mainly listen to "radio" and playlists, but if you ever get access to a large catalog of independent music, try picking stuff at pseudo-random and take notes. As much as I love good art (and I do), most art is not good art. You can't go on popularity because some of the great artists (especially on Bandcamp) are relatively unknown and therefore are not popular. For example, Thousand Needles in Red is a phenomenal band with great albums, and almost completely unknown. These Four Walls is similar (but at least they are on Youtube Music/Spotify/etc). I'd buy the crap out of similar albums, but discovering them is very challenging. I mainly found those two out of random luck.

Anyway I'm rambling, but I do hope you can figure out a good means for discovery. I think finding and grouping people with similar tastes is among the best ways, and also having artists that a person likes recommend other artists can be super valuable.

rnkn · a year ago
I'm surprised at this. I find music discovery easy. Some tips:

On Bandcamp: in addition to obviously following artists I like, I follow several fan accounts of those artists, then I can see what they buy. I also try to sample the Bandcamp album of the day.

On NTS.live I have a bunch of favourite hosts and try to listen to every show they release, and note the track listing. Too many to ever get through.

Podcasts: NPR All Songs Considered, and Resident Advisor when I can.

On Apple Music there's the algorithm. Hit or miss.

Back in the heyday of music blogs I would find a lot of great stuff on Hype Machine, but alas, I think those days are gone.

Just with these few sources I find there is far too much great new music to get through in one lifetime. Godspeed!

rnkn commented on Judge dismisses DMCA copyright claim in GitHub Copilot suit   theregister.com/2024/07/0... · Posted by u/samspenc
munificent · 2 years ago
> Indeed, last year GitHub was said to have tuned its programming assistant to generate slight variations of ingested training code to prevent its output from being accused of being an exact copy of licensed software.

If I, a human, were to:

1. Carefully read and memorize some copyrighted code.

2. Produce new code that is textually identical to that. But in the process of typing it up, I randomly mechanically tweak a few identifiers or something to produce code that has the exact same semantics but isn't character-wise identical.

3. Claim that as new original code without the original copyright.

I assume that I would get my ass kicked legally speaking. That reads to me exactly like deliberate copyright infringement with willful obfuscation of my infringement.

How is it any different when a machine does the same thing?

rnkn · 2 years ago
It seems the total disregard that the tech community showed toward copyright when it was artists losing out has come back to bite. Face-eating leopards, etc.
rnkn commented on Microsoft is killing WordPad in Windows   bleepingcomputer.com/news... · Posted by u/turtlegrids
rnkn · 3 years ago
Old software is depreciated. To deprecate software would be to call it names.
rnkn commented on Proof you can do hard things   blog.nateliason.com/p/pro... · Posted by u/jamiegreen
constantcrying · 3 years ago
I really despise all the "why you should care about math" takes.

The question is never asked about any other school subject and only mathematics has to justify itself that way. I had to learn about categories of plants and animals, interpret 20th century literature, learn about events from a thousand years ago, I did presentations on the demographics of European countries, how certain chemicals react and much, much more. I never used any of that knowledge for anything, certainly not in my career or in university.

But somehow mathematics is the one field which needs to justify its own existence? Mathematics needs to bend itself over and "be relevant" so that people will actually learn about it? Why? Why not ask the same of any other subject.

Justifying mathematics is easy, especially such a universally applicable subject as calculus. But I see no reason why it should have to justify itself in any way.

rnkn · 3 years ago
Hello mathematics, meet philosophy.
rnkn commented on RGBWatermark – protect art against machine learning   rgbwatermark.net/... · Posted by u/thefilmore
rnkn · 3 years ago
As always, the best way to protect your house is with the law, not a better lock.

Regulation is constructive, deregulation is destructive.

rnkn commented on Show HN: Neat – Minimalist CSS Framework   neat.joeldare.com... · Posted by u/codazoda
rnkn · 3 years ago
Great framework. Unfortunately Apple hijack certain arrow shapes (e.g. the home button) and incorrectly display these as emoji. There’s a CSS override you can add to prevent this but I don’t remember it off the top of my head.
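For what it's worth, a sketch of the usual workaround (the `.icon` selector here is hypothetical; browser support for `font-variant-emoji` is still uneven, so a plain font-stack fallback is also shown):

```css
/* Ask the browser to render the glyph with its text presentation
   rather than substituting the color emoji form. Newer property;
   older Safari/Chrome versions ignore it. */
.icon {
  font-variant-emoji: text;
}

/* Fallback: pin the glyph to a non-emoji font stack so
   Apple Color Emoji is never selected for it. */
.icon {
  font-family: "Helvetica Neue", Arial, sans-serif;
}
```

Another option, in the markup rather than the stylesheet, is to append the text variation selector U+FE0E (`&#xFE0E;` in HTML) directly after the arrow character.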

u/rnkn

Karma: 422 · Cake day: January 22, 2020