Readit News
SkyBelow commented on Implications of AI to schools   twitter.com/karpathy/stat... · Posted by u/bilsbie
zkmon · 21 days ago
It's not the students. It's the teachers and schools using AI first, and publicly. Why does he talk only about students using AI?

Also, just as calculators are allowed in exam halls, why not allow AI usage in exams? In a real-life job you are not going to avoid using a calculator or AI. So why test people in a different context? I think the tests should focus on the skill of using calculators and AI.

SkyBelow · 21 days ago
>Also, just as calculators are allowed in exam halls, why not allow AI usage in exams?

Dig deeper into this. When are calculators allowed, and when are they not? If it is kids learning to do basic operations, do we really allow them to use calculators? I doubt it, and I suspect that places that do end up with students who struggle with more advanced math because they offloaded the thinking early on.

On the other hand, giving a calculus student a four-function calculator is pretty standard, because the kind of math it can do isn't what is being tested, and having a student plug 12 into x^3 - 4x^2 + 12 quickly instead of working it out by hand doesn't impact their learning. More advanced calculators, by contrast, are often not allowed when they trivialize the content.

LLMs are much more powerful than a calculator, so finding places in education where they don't trivialize the learning process is pretty difficult. Maybe at the grad or research level, but for anything in grade school it is as bad as letting a kid who is learning their times tables use a calculator.

Now, if we could create custom LLMs targeted at certain learning levels? That would be pretty nice. A lot more work, though. Imagine a Chemistry LLM that can answer questions but knows the homework well enough to avoid solving problems for students. Instead, it can tell them which chapter of their textbook to go read, or it can help them when they are doing a deep dive beyond the level of the material and give them answers to the sorts of problems they aren't expected to solve. The difficulty is that current LLMs aren't this selective and are instead too helpful, immediately answering all problems (even the ones they can't).
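For what it's worth, you can gesture at that behavior today with nothing more than a system prompt, though prompting alone doesn't deliver the selectivity described above. A minimal sketch, assuming the `openai` Python client; the model name, syllabus mapping, and prompt wording are all illustrative assumptions, not any real product:

```python
# Minimal sketch of a "tutor mode" wrapper. The model name, syllabus,
# and prompt text are illustrative assumptions, not a real product.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYLLABUS = {"stoichiometry": "Chapter 3", "gas laws": "Chapter 5"}
syllabus_line = ", ".join(f"{t}: {ch}" for t, ch in SYLLABUS.items())

TUTOR_PROMPT = (
    "You are a chemistry tutor for an intro course.\n"
    "Never give a full worked solution to a homework-style problem.\n"
    f"Instead, point the student at the relevant reading ({syllabus_line}) "
    "and offer one conceptual hint.\n"
    "You may answer fully when a question goes beyond the course material."
)

def ask_tutor(question: str) -> str:
    # Single chat-completions call with the tutor persona pinned
    # in the system message.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("How many grams of NaCl are in 2.5 mol?"))
```

The catch is exactly the one above: the prompt is advisory, and a model that genuinely "knows the homework" would need the actual problem set in its context, or fine-tuning, which is the hard part.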

SkyBelow commented on Over-regulation is doubling the cost   rein.pk/over-regulation-i... · Posted by u/bilsbie
locknitpicker · 25 days ago
> There are some laws prohibiting the sale of used tires with less than a certain amount of tread.

I think you're confused. I'll explain why.

Some countries enforce regulations on which tyres are deemed road-legal, due to requirements on safety and minimum grip. It's also why it's illegal to drive around on bald tyres.

However, said countries also allow the sale of tyres for track and competitive use, as long as they are clearly sold as not road-legal and for competitive use only.

So, no. You can buy track tyres. You just can't expect to drive on them when you're dropping your kids off at school and not get a fine.

Also, it should be noted that some motorsport competitions ban or restrict the use of slick tyres.

SkyBelow · 25 days ago
>Some countries enforce regulations on which tyres are deemed road-legal, due to requirements on safety and minimum grip. It's also why it's illegal to drive around on bald tyres.

Yes, this is a good thing. Where it becomes bad is when someone says "Oh, we should stop that from happening, let's ban the sale of such tires" with no exceptions.

This isn't a problem unique to regulations and laws. In software development, it is very common for the user to not think about exceptions. The rarer the exception, the more likely it is to be missed in the requirements. It is the same fundamental problem of not thinking about all the exception cases, just in a different context. You also see this commonly in children learning math: they'll learn and blindly apply a rule, not remembering the exceptions they were told they need to handle (can't divide by zero being a very common one).
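As a toy illustration of that divide-by-zero case in code (the function and names are made up for the example):

```python
# Toy example: a rule applied blindly vs. with its exception case.
def average_speed(distance_km: float, hours: float) -> float:
    # The "rule": speed = distance / time. Applied blindly, this
    # raises ZeroDivisionError the first time hours == 0.
    return distance_km / hours

def average_speed_safe(distance_km: float, hours: float) -> float | None:
    # The exception case the requirements never mentioned: a trip
    # logged with zero elapsed time has no defined speed.
    if hours == 0:
        return None
    return distance_km / hours

print(average_speed_safe(120.0, 0))  # None, instead of a crash
```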

SkyBelow commented on Heretic: Automatic censorship removal for language models   github.com/p-e-w/heretic... · Posted by u/melded
bilbo0s · a month ago
I would ask it to give me one line of a song in another language, broken down into sections, explaining the vocabulary and grammar used in the song, with a call-out to anything that is non-standard outside of a lyrical or poetic setting.

I know no one wants to hear this from the cursed IP attorney, but this would be enough to show in court that the song lyrics were used in the training set. So depending on the jurisdiction you're being sued in, there's some liability there. This is usually solved by the model labs getting some kind of licensing agreements in place first and then throwing all of that into the training set. Alternatively, they could set up some kind of RAG workflow where a search goes out and finds the lyrics. But they would have to both know that the found lyrics were genuine, and ensure that they don't save any of that chat for training. At scale, neither of those is a trivial problem to solve.
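A structural sketch of what that RAG workflow might look like; every function, field, and flag here is a hypothetical placeholder, not any lab's actual API:

```python
# Hypothetical sketch of the RAG-plus-licensing flow described above.
# All names are placeholders; the stubs stand in for real services.
from dataclasses import dataclass

@dataclass
class LyricsResult:
    text: str
    source_url: str
    licensed: bool  # retrieved from a licensed lyrics provider?

def search_lyrics(title: str, artist: str) -> LyricsResult | None:
    # Stub for retrieval; a real system would query a licensed
    # lyrics API and verify provenance here.
    return LyricsResult("la la la", "https://example.com", licensed=True)

def call_model(prompt: str, retain_for_training: bool) -> str:
    # Stub for the LLM call; the flag models the requirement that
    # retrieved lyrics never flow back into the training set.
    assert retain_for_training is False
    return "stubbed model answer"

def answer_lyrics_question(title: str, artist: str, question: str) -> str:
    hit = search_lyrics(title, artist)
    # Hard problem 1: knowing the found lyrics are genuine and licensed.
    if hit is None or not hit.licensed:
        return "Sorry, I can't verify licensed lyrics for that song."
    # Hard problem 2: keeping this chat out of future training data.
    prompt = f"Lyrics:\n{hit.text}\n\nQuestion: {question}"
    return call_model(prompt, retain_for_training=False)
```

The two commented branches mark exactly the two non-trivial problems named above; the sketch only shows where they sit in the flow, not how to solve them at scale.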

Now, how many labs have those agreements in place? Not really sure. But issues such as these are probably why you get silliness like DeepMind models not being licensed for use in the EU, for instance.

SkyBelow · a month ago
I didn't really say this in my previous post, as it was going to get a bit too detailed about something not quite related to what I was describing, but when models do give me lyrics without using a web search, they have hallucinated every time.

As for searching for the lyrics, I often have to give it the title and the artist to find the song, and sometimes even context about where the song is from; otherwise it'll either find a more popular English song with a similar title or still hallucinate. Luckily, I know enough of the language to identify when the song is fully wrong.

No clue how well it would work with popular English songs as I've never tried those.

SkyBelow commented on SlopStop: Community-driven AI slop detection in Kagi Search   blog.kagi.com/slopstop... · Posted by u/msub2
cschep · a month ago
I do appreciate this side of the argument, but... do you think that the level/strength of a marriage commitment is worthy of comparison to walking by someone in public, riding the same subway as them randomly, or visiting their blog?

They seem worlds apart to me!

SkyBelow · a month ago
I find them comparable, but not equal, for that reason.

Especially if we consider the summation of these commitments. One is obviously much larger, but it defines just one of our relationships within society. The other defines the majority of our interactions within society at large, so a change to it, while much less impactful to any single interaction or relationship (I use the terms interchangeably here, as often the relationship is just that one interaction), is magnified by how much more often it occurs. That pushes the cost of losing some trust in such a small interaction to be much larger than it first appears, which I think makes the two even more comparable.

(More generally, I also like comparing things even when the scales don't match, as long as the comparison really applies. Like apples and oranges: both are fruits you can make juice or jam with.)

SkyBelow commented on Heretic: Automatic censorship removal for language models   github.com/p-e-w/heretic... · Posted by u/melded
charcircuit · a month ago
>Not illegal

Reproducing a copyrighted work 1:1 is infringing. Other sites on the internet have to license the lyrics before sending them to a user.

SkyBelow · a month ago
I've asked for non-1:1 versions and have been refused. For example, I would ask it to give me one line of a song in another language, broken down into sections, explaining the vocabulary and grammar used in the song, with a call-out to anything that is non-standard outside of a lyrical or poetic setting. Some LLMs will refuse; others see this as fair use of the song for educational purposes.

So far, all the models I've tried are willing to return a random phrase or grammatical construction used in a song; it is only when asking for a full line of lyrics or more that it becomes troublesome.

(There is also the problem that the LLMs that do comply will often make up the song unless they have some form of web search and you explicitly tell them to verify the song using it.)

SkyBelow commented on SlopStop: Community-driven AI slop detection in Kagi Search   blog.kagi.com/slopstop... · Posted by u/msub2
cschep · a month ago
This is an absurd comparison - you (presumably) made a commitment to your wife. There is no such commitment on a public blog?

SkyBelow · a month ago
Is it that absurd?

We have many expectations in society which are often not formalized into a stated commitment. Is it really unreasonable to have some commitment towards society to uphold these less formally stated expectations? And is expecting communication presented as human-to-human to actually be from a human an unreasonable expectation of that kind? I think not.

If you were to find out that the people replying to you were actually bots designed to keep you busy and engaged, feeling a bit betrayed by that would seem entirely expected, even though at no point did those people commit to you that they weren't bots.

Letting someone know they are engaging with a bot seems like basic respect, and I think society benefits from having such a level of basic respect for each other.

It is a bit like the spouse who says "well I never made a specific commitment that I would be the one picking the gift". I wouldn't like a society where the only commitments are those we formally agree to.

SkyBelow commented on AI assistants misrepresent news content 45% of the time   bbc.co.uk/mediacentre/202... · Posted by u/sohkamyung
amarant · 2 months ago
Human journalists misrepresent the white paper 85% of the time.

With this in mind, 45% doesn't seem so bad anymore.

SkyBelow · 2 months ago
Years ago in college, we had a class where, for a few weeks, we analyzed science in the news against the published research itself. I think it was a 100% misrepresentation rate comparing what a news article summarized about a paper versus what the paper itself said. We weren't going off of CNN or similar mainstream news sites, but news websites aimed at specific types of news, which were consistently better than the articles in mainstream news (whenever the underlying research was noteworthy enough to earn a mention on larger sites). Leaving out details or only reporting some of the findings wasn't enough to count, as it was expected any news summary would reduce the total amount of information provided about a published paper compared to reading the paper directly. The focus was on looking for summaries that were incorrect or which made claims the original paper did not support.

Probably the most impactful "easy A" class I had in college.

SkyBelow commented on LLMs can get "brain rot"   llm-brain-rot.github.io/... · Posted by u/tamnd
askafriend · 2 months ago
Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.

SkyBelow · 2 months ago
Assist without replacing.

If you were to pass your writing to it and have it provide criticism, pointing out places where you should consider changes, and even providing some examples of those changes that you can selectively choose to include when they keep the intended tone and implications, then I don't see the issue.

When you have it rewrite the entire piece and you pass that along for someone else to read, then it becomes an issue. Potentially, as I think the context matters. The more a piece of writing is meant to be from you, the more of an issue I see. Having an AI write or rewrite a birthday greeting or get-well wishes seems worse than having it write up your weekly TPS report. As a simple metric, I judge based on how bad I would feel if what I'm writing were being summarized by another AI or automatically fed into a similar system.

In a text post like this, where I expect others are reading my own words, I wouldn't use an AI to rewrite what I'm posting.

As you say, it is in how the tool is used. Is it used to assist your thoughts and improve your thinking, or to replace them? That isn't really a binary classification, but more a continuum, and the further it moves toward the replacement end, the more you will see others taking issue with it.

SkyBelow commented on Beliefs that are true for regular software but false when applied to AI   boydkane.com/essays/boss... · Posted by u/beyarkay
xutopia · 2 months ago
The most likely danger with AI is concentrated power, not that sentient AI will develop a dislike for us and use us as "batteries" like in the Matrix.

SkyBelow · 2 months ago
I agree.

Our best technology currently requires teams of people to operate and entire legions to maintain. This leads to a sort of balance: one single person can never go too far down any path on their own unless they convince others to join or follow them. That doesn't make it a perfect guard, and we've seen it go horribly wrong in the past, but, at least in theory, it provides a dampening factor. It requires a relatively large group to go far along any path, towards good or evil.

AI reduces this. Whether it reduces the group needed to only a handful, to a single person, or even to zero people (the AI putting itself in charge) doesn't seem to change the danger of the reduction itself.

SkyBelow commented on Robin Williams' daughter pleads for people to stop sending AI videos of her dad   bbc.co.uk/news/articles/c... · Posted by u/dijksterhuis
lukev · 2 months ago
There are two things potentially at stake here:

1. Whether there is an effective legal framework that prevents AI companies from generating the likenesses of real people.

2. The shared cultural value that this is, actually, not cool, not respectful, and in fact somewhat ghoulish.

Establishing a cultural value is probably more important than any legal structures.

SkyBelow · 2 months ago
I think there is also a major distinction between creating the likeness of someone and sending that likeness to the family of the deceased.

If AI somehow allowed me to create videos of the likeness of Isaac Newton or George Washington, that seems far less of a concern, because they are long dead and no grieving family of theirs is being hurt by the fakes.

u/SkyBelow

Karma: 2796 · Cake day: March 5, 2019