shawnjan8 · 3 years ago
Just to be clear: it's not for every student (at least not yet!). We are in a research phase sharing it with a limited subset of users. More details about our approach to the responsible development of AI: https://blog.khanacademy.org/aiguidelines/
itronitron · 3 years ago
It had better not be for every student. I am very familiar with Khan Academy, as I am currently guiding a student through several AP courses on it. In my opinion, Khan Academy's time would be better spent fixing the UI for teachers and improving the organization of the physics curriculum.

I would prefer that khan academy not be dragged into some PR fluff piece for some AI shill.

robbintt · 3 years ago
This argument has a pretty standard counterargument: organizations can and should do more than one thing at a time. It's a non-profit, so you're probably better off contributing your ideas to them.

jrussino · 3 years ago
Do you know if/how someone can get involved with this at the research phase? I have a daughter who will be entering 1st grade next year and I'd be interested in having her try this out if there was a way of signing up.
shawnjan8 · 3 years ago
The sign-up page to join our waitlist is here: https://www.khanacademy.org/khan-labs

But at this time, as part of our research phase, we aren't planning to allow enrollment or Khanmigo access for children under 13.

germinalphrase · 3 years ago
I was a high school English teacher for about ten years before transitioning into tech last July. Has KA explored assistive assessment tools for in-person instruction?
yterdy · 3 years ago
>it's not for every student (at least not yet!)

>Donation required after chosen from waitlist

So, it's for rich ones?

frankfrankfrank · 3 years ago
I frequently find myself caught in the middle on things like this: although I oppose the grotesque plunder of the public and of citizens by neo-aristocrats claiming to serve one cause or another amid blatant lies about social welfare, I also object to slavery and therefore cannot insist that everything be free, since people like being compensated for their work rather than being indentured and forced to do things.

Things have to cost something, even if they wouldn't have to cost as much if we could stop the psychopaths at the top from constantly plundering and raiding the resources of productive people to maintain their parasitic and decadent lives.

worrycue · 3 years ago
Am I the only one somewhat alarmed that we are choosing to rely so heavily on AIs that are vulnerable to hallucinations?

Personally, I would rather have a more limited but reliable tool than a more powerful but unreliable one.

tenpies · 3 years ago
Are the tolerances in education really that tight?

How many terrible teachers are allowed to continue to teach after decades of disastrous results?

How many ideologues with no interest in teaching, but every interest in indoctrinating young minds, are tolerated because the alternative is "no teacher" and the class wouldn't run?

How many teachers who would themselves fail the state exams keep teaching, relying on answer sheets to appear "competent"?

I agree with you that on the top end of education, this is no replacement and at best a supplementary tool. For the poor kid in a bad neighbourhood whose teacher is more interested in "de-colonizing" mathematics than teaching mathematics, this is a Godsend.

worrycue · 3 years ago
I feel the question is: should we be making this worse with unreliable AI?

Currently, AI seems to have two weaknesses: it's quite bad at reasoning, and it hallucinates.

Yes, humans screw up reasoning too, but at least they try. AIs like ChatGPT seem to skip critical thinking altogether; there are examples of ChatGPT contradicting itself multiple times in a single conversation.

Then there are hallucinations. When people don't know something, most just say they don't know. AI just can't seem to help itself: it makes up complete BS that it confidently tries to pass off as fact.

P.S. Frankly, I think the two might be related. AI can't "sanity check" its own output.

vineyardmike · 3 years ago
> How many terrible teachers are allowed to continue to teach after decades of disastrous results?

Counterpoint: Why "hire" a known-bad teacher? Is a teacher that is known to give fake information better or worse than none at all?

I don't know the answer to this, but I hope Khan Academy thought about it and decided that GPT's good outweighs its bad. Or maybe they can't know, and that's why they're running a limited study before rolling this out.

acomjean · 3 years ago
It's more troubling because it's being used as an educational resource: the audience is still learning and is significantly less able to recognize when the answers are wrong.
cypress66 · 3 years ago
ChatGPT is already more accurate than a lot of my high school teachers.
megaman821 · 3 years ago
My thoughts exactly. What is the baseline here? Teachers already give biased and/or inaccurate information at a rate much higher than I have seen in ChatGPT.

worrycue · 3 years ago
There is a difference between being misinformed or making a mistake in reasoning, and just making s** up.

Most humans have the decency not to do the latter, while AI (at the moment) freely makes up complete lies when it doesn't have the answer instead of just admitting that it doesn't know.

Wata2 · 3 years ago
Firstly, the unreliability may not be permanent; we may improve AI accuracy in the future. Secondly, we need to figure out what works and what doesn't, and we're in the middle of that phase. Finally, there is no need to be alarmed: even with hallucinations, the amount of advanced (and reliable) knowledge the AI provides vastly outweighs the small mistakes. The elitist mindset needs to die; let's give everybody access to advanced knowledge and stop putting it behind unreachable walls.
worrycue · 3 years ago
> Firstly, the unreliability may not be permanent, and we may improve the AI accuracy in the future.

How about we fix the unreliability first and not put the cart before the horse?

> The elitist mindset needs to die, let's give access to advanced knowledge to everybody and stop putting it behind some unreachable walls.

What are you talking about? We have high quality university courses on the internet for free. We have things like the Khan Academy and similar sites. How is advanced knowledge behind unreachable walls?

missingdays · 3 years ago
What walls are you talking about, and how is an AI assistant that you have to "donate" to get access to going to help with those walls?
progman32 · 3 years ago
That is a good goal. Is AI the best tool for the job?
celestialcheese · 3 years ago
Even without being given context or examples, GPT-4 is already better at answering questions than the average teacher, with the unlimited patience that only a computer can provide.

With context, which Khan Academy has in abundance thanks to its lesson plans and transcripts, accuracy will be higher than that of even the best teachers and tutors.

Once you give context and known-true facts to best-in-class LLMs like GPT-4, the output is shockingly good.
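
A minimal sketch of what "giving context and known-true facts" can look like in practice, assuming the pre-1.0 openai Python client; the lesson excerpt, prompt wording, and model choice here are my own illustrative assumptions, not Khan Academy's actual setup:

```python
import openai  # pre-1.0 openai-python client; illustrative only, not Khanmigo's code

# Assumed lesson excerpt; in a real system this would come from Khan Academy's
# own lesson plans and transcripts.
lesson_excerpt = (
    "To solve 3x + 5 = 20, subtract 5 from both sides to get 3x = 15, "
    "then divide both sides by 3 to get x = 5."
)

messages = [
    {"role": "system", "content": (
        "You are a patient math tutor. Answer ONLY using the lesson excerpt below. "
        "If the excerpt does not cover the question, say you are not sure.\n\n"
        "Lesson excerpt:\n" + lesson_excerpt
    )},
    {"role": "user", "content": "Why do we subtract 5 first?"},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0)
print(response["choices"][0]["message"]["content"])
```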

kobalsky · 3 years ago
If you don't know how they are using GPT-4, it's not fair to say it will hallucinate.

As far as I understand, the preferred way to use LLMs for domain-specific information retrieval nowadays is through embeddings that insert the related context into the prompt. GPT-4 is especially good for this, since they increased the prompt size by almost an order of magnitude.

This means that the model can be given a very specific task: to extract information from the context or avoid providing an answer at all.

The answer doesn't rely on the neural memory of the model, since it doesn't need to store information, just understand the task, and they are really good at that.
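
For readers unfamiliar with the pattern, here is a rough sketch of that retrieval step: chunk the source material, embed it, pick the most similar chunks, and paste them into the prompt with a "don't answer outside the context" instruction. The chunk contents, embedding model, and helper names are assumptions for illustration (pre-1.0 openai client, numpy for the similarity math), not a description of Khanmigo's internals:

```python
import numpy as np
import openai  # pre-1.0 openai-python client; illustrative only

def embed(texts):
    """Embed a list of strings with a hosted embedding model (assumed: ada-002)."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

# Hypothetical corpus: chunks of lesson transcripts.
chunks = [
    "A linear equation has the form ax + b = c ...",
    "To isolate x, apply the same operation to both sides ...",
    "The slope of a line measures how steep it is ...",
]
chunk_vecs = embed(chunks)

def retrieve(question, k=2):
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

question = "How do I get x by itself?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using ONLY the context below. If it is not covered, say so.\n\n"
    "Context:\n" + context + "\n\nQuestion: " + question
)
```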

the8472 · 3 years ago
They're saying that they have reduced hallucinations. But RLHF seems to undo some of the improvements (figure 8 in the paper).
visarga · 3 years ago
Those are calibration curves for confidence.
itronitron · 3 years ago
'reduced hallucinations', um ok.
itronitron · 3 years ago
There are two indicators that this is PR bullshit produced by whoever is trying to capitalize on their investment: 1) the use of the marketing term 'AI' or 'AI-powered', and 2) the use of the 'think of the children' trope.
falcor84 · 3 years ago
'AI' is a marketing term? I don't see how that's any different from saying 'cloud' is a marketing term: while useful in marketing, it's just an industry term.
pixl97 · 3 years ago
I mean, that is but one thing to worry about. We've gotten about nowhere with the alignment problem, and we're screaming ahead at full speed making these things more powerful.

MagicMoonlight · 3 years ago
Don't expect it to know specific facts, but it is excellent at explaining topics.
ihatepython · 3 years ago
That is a feature, not a bug. It's an efficient way to brainwash a population.
mullingitover · 3 years ago
I wonder if this kind of intelligent tutoring could be the answer to Bloom's Two Sigma Problem. The limiting factor with that problem was that not everyone can afford a personal tutor. Having an AI tutor that can breeze through the SAT seems like it should give every student a major boost.
dopeboy · 3 years ago
Circa late 2015 and early 2016, my cofounder and I spent about six months attempting to build exactly this.

Our hypothesis was that students self-studying for the SAT used books (e.g. Kaplan), but the solutions in the back were poor at explaining how to arrive at the answer, because they usually just cut to the chase rather than explaining the approach and helping you get there yourself.

So we took each practice SAT math question and wrote out a solution spread over four swipeable pages. Our hope was that if you saw the first or second page, you'd arrive at the solution without having to see the answer, thus increasing your chance to solve it on your own next time.

We spent over 100 hours tutoring kids and had them use our MVP, to mild success. The obvious next step was to automate the solutions rather than hand-write them ourselves, like GPT-4 has done here.

As someone who has logged over 2,000 hours teaching, I can say there are a lot of soft nuances and tactics to tutoring. Motivation is also a big thing a good tutor brings. I don't see this solution as quite there yet, but it's getting close.

Very optimistic here.

terribleperson · 3 years ago
The weird thing is how rarely approaches like the one you describe here are applied. It's not exactly a secret that simply providing the answer to math questions is great if someone gets them all correct, but nearly useless when someone gets them wrong. It's a problem that's been solved before - in a classroom, the traditional answer is working through the problem on the board and hopefully students are following along. Accompanying books with videos that work through the problems has been done at least since we had VHS.

So why are there still so many bad study books?

alach11 · 3 years ago
My mind went to the same place. Knowing that "the average tutored student was above 98% of the students in the control class", imagine how transformative it will be to lift the average student to the 98th percentile!

This may be the most transformative change to education since the invention of public schooling.

sophiabits · 3 years ago
Every now and then I think about Marc Andreessen’s “software is eating the world” article. He was extremely excited about investing in EdTech and HealthTech software companies at the time—for obvious reasons!

But EdTech hasn’t really delivered much value. The modern LMS feels to me like a reinvention of Facebook groups in many ways…

LLMs feel like something that _could_ very well end up eating education, and that’s a really exciting prospect. My personal experience with schooling was less than stellar, and figuring out alternative and more scalable ways to get knowledge into kids’ heads has a lot of possible upside for society imo

tgv · 3 years ago
It's not the tutoring alone. It's also the social effect. A machine simply does not exert that kind of influence, nor do online courses require the same amount of dedication and ambition. Compare MOOCs: they have incredibly high drop-out rates.
asdff · 3 years ago
Chances are, if you have a tutor, you are paying good money for it and have made appointments at regular intervals; that probably helps with motivation too. I think it's possible that ChatGPT and other learning AIs will end up being used the way office hours are used today in college. Like office hours, these tools are available to everyone, but for one reason or another not everyone will bother with them or use them to their fullest.
asdff · 3 years ago
On the other hand, I think it could also trigger an arms race, and might well incentivize learning how to write a good prompt more than learning the concept well enough to succeed in a class. This reminds me of the divide in my era between students who were able to navigate the internet to the benefit of their academic success and those who weren't: quickly paraphrasing a Wikipedia article on the subject and using its citations as their own, finding a relevant flashcard set someone else made and shared on Quizlet, or even finding PDFs of the teacher's solutions accompanying the textbook. Cheat codes are cheat codes. They help your grade but usually hurt your understanding.
mullingitover · 3 years ago
> On the other hand, I think it could also trigger an arms race, and might very well incentivize learning how to write a good prompt more than learning the concept well enough to succeed in a class.

I think the general public having access to these AI tools is going to create a shift from scholastic aptitude testing to more performance testing. So, no more homework: you use your time out of class to train, and the in-class time to perform based on the training you've been doing with your personal AI tutor.

oh_sigh · 3 years ago
Every student might get an absolute boost, but the relative gap may even widen. Consider students who can afford cutting-edge, expensive-to-train-and-run models versus students who can only use older, off-the-shelf models.
visarga · 3 years ago
More likely AI will be cheap and widespread like electricity. Data likes to be free.

Assuming the divide turns out to be real, you can just record a million problem solutions from ExpensiveLM and use them to tune your CheapLM; the result is a specialised cheap model that works OK on your task of interest.
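
A rough sketch of that distillation idea: ExpensiveLM and CheapLM are the placeholder names from this comment, query_expensive_lm is a hypothetical helper, and the prompt/completion JSONL layout is just one common fine-tuning convention:

```python
import json

def query_expensive_lm(problem: str) -> str:
    """Hypothetical call to the large, expensive model; swap in a real API client."""
    # Placeholder return value so the sketch runs end to end.
    return "Worked solution for: " + problem

# Sample tasks; in practice you would use a large, diverse problem set.
problems = ["Solve 2x + 3 = 11.", "What is the derivative of x**2?"]

# Record the expensive model's worked solutions as (prompt, completion) pairs,
# then fine-tune the cheap model on this file with whatever tooling it supports.
with open("distillation_data.jsonl", "w") as f:
    for p in problems:
        f.write(json.dumps({"prompt": p, "completion": query_expensive_lm(p)}) + "\n")
```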

james-revisoai · 3 years ago
Since using T5 to generate quiz questions in 2019, I've thought this. Transformers can now understand misconceptions, quote source materials... it will equalise education.
recuter · 3 years ago
A tutor doesn't just imply parents who can afford one but also who care enough to do so in the first place and see the value of an education.

I think you'd find that even by SAT-prep age, the actual limiting factor for many (most?) of the tutor-less is a fait accompli attitude towards their path in life.

Might as well turn on the chat bots at the cradle. Ah, and make sure everybody gets the same quality AI to parent them for it to be egalitarian.

alach11 · 3 years ago
Right... but the whole point of Bloom's experiment is that if you give every kid a tutor, they perform (on average) at the 98th percentile. Not just the rich kids who can afford tutors.

I do think there are valid concerns that this technology (like old-school human tutoring) could be disproportionately available to already-privileged children. This could potentially widen the achievement gap. But it seems way more likely that we can get poor kids a laptop to tutor them vs. getting them a dedicated human tutor.

davesque · 3 years ago
GPT-4 definitely seems to be doing better on a lot of benchmarks and that's impressive. But it still hallucinates facts and I don't think anyone really has a good understanding of when and how that happens. Given that, is it really a good idea to be positioning this model as some kind of factual authority figure?
thethimble · 3 years ago
Human teachers make mistakes too - perhaps even at higher rates than GPT in some cases.

Instead of isolating a single factor like hallucinations, I think you also have to consider the cost and the quality of the experience, and make a more holistic judgement about whether these models are good or bad.

davesque · 3 years ago
Yes, but human teachers can eventually come to understand why they made a mistake. And they probably are more able to communicate how confident they are in their answers. I think that's because human teachers have a mental model that they're working from. On the other hand, we have no idea if transformer models are working from anything like a "mental model."

Just because humans make mistakes (or even as many or more mistakes than machines) doesn't mean the nature or consequences of the mistakes are the same.

jhp123 · 3 years ago
I've tutored people at times. A good tutor needs to understand the subject very well, so they can not only understand the right answer but also figure out why the student is coming to the wrong answer.

I personally think that assigning GPT as a "tutor" is devaluing the real skill involved in tutoring and I doubt it will work out.

navigate8310 · 3 years ago
I would like to direct your attention to this recent post, specifically the "Steerability: Socratic tutor" example (Sample 1 of 3): https://openai.com/research/gpt-4
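
For those who don't want to click through, the gist of that example is a system message that pins GPT-4 into a Socratic role. A rough approximation (the wording on OpenAI's page differs, and the pre-1.0 openai client usage here is just illustrative):

```python
import openai  # pre-1.0 openai-python client; a sketch, not OpenAI's verbatim example

messages = [
    {"role": "system", "content": (
        "You are a tutor who always responds in the Socratic style. Never give the "
        "student the answer directly; instead, ask guiding questions, tuned to the "
        "student's level, that help them work it out for themselves."
    )},
    {"role": "user", "content": "How do I solve the system 3x + 2y = 7, 9x - 8y = 1?"},
]

reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(reply["choices"][0]["message"]["content"])
```
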
haskellandchill · 3 years ago
I was trying to get it to teach me ordinary differential equations, and it just kept getting everything wrong. It has no concept of checking its work, etc.; that example makes it appear as if it can, but I guess it's just getting lucky with linear equations.
pnt12 · 3 years ago
Very interesting, thanks for referencing it as I had missed it.

There have been instances of people asking ChatGPT to ignore the instructions in the initial prompt: has that been solved? It's not clear to me whether the first prompt is given any extra weight. Or maybe during the training of version 4 they heavily penalized these sorts of attacks to make it more resilient.

itronitron · 3 years ago
why?
wnolens · 3 years ago
In my brief experience asking ChatGPT questions and asking it to rephrase or clarify, it seems at least as good as the average tutor/teacher/TA, which for me has been marginally better than a warm body rephrasing the theory and working through textbook problems in front of you.

And in time, much better. To the point of raising the standard of education for all of humanity with access to the internet. I'd count on it "working out".

gremlinsinc · 3 years ago
I'm 42 and still don't always like asking questions, for fear of looking stupid if I ask one the wrong way. ChatGPT has none of that, because it's just an AI and won't judge me or make my imposter syndrome worse.
raldi · 3 years ago
My daughter does children's martial arts. Often in the dojo there's an atmosphere where the yellow belts are helping the white belts, the red belts are periodically chiming in, and the black belt adult instructor is free to float around the room supervising everything, giving individual attention where it's most effective, but leaving the lower ranks to take care of the lessons they're capable of giving.

In the same way, AI will be able to make an expert human tutor 100x more effective by escalating the tricky cases to them while learning how to handle the ones that come up over and over.

kirill5pol · 3 years ago
This is something I'm working on. I think getting a good enough understanding of the material for good explanations is doable in the near future, but the main problem is actually building a good model of what the learner knows to guide the LLM.
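
A toy sketch of how a learner model might guide the prompt; the skill names, mastery estimates, and the 0.6 threshold are all made up for illustration:

```python
# Hypothetical learner model: estimated mastery per skill, updated from quiz results.
learner = {"fractions": 0.9, "linear_equations": 0.4, "word_problems": 0.2}

weak = [skill for skill, mastery in learner.items() if mastery < 0.6]
strong = [skill for skill in learner if skill not in weak]

# Summarize the learner state into the tutor's system prompt.
system_prompt = (
    "You are a tutor. The student is strong in: " + ", ".join(strong)
    + ". They struggle with: " + ", ".join(weak)
    + ". Favor questions that practice the weak skills and briefly review prerequisites."
)
print(system_prompt)
```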

jamestimmins · 3 years ago
Highly recommend the Neal Stephenson book The Diamond Age: Or, A Young Lady's Illustrated Primer, for an interesting exploration of a custom AI tutor for each student.

In that book, students have a "magic book" that teaches them lessons in story form while encouraging certain life paths. It's pretty fascinating to consider the implications and whether that's something we'll want, because it may soon be possible.

wesleychen · 3 years ago
Khan Academy's integration of GPT-4 already steers students towards particular modes of thought. According to the article, they want "to get students thinking deeply about the content that they’re learning" and in the examples, Khanmigo prompts students to recall information from the lesson rather than directly explaining how to solve the problem.

I think what Khan Academy has done here is desirable, but just as Stephenson's Primer enforces its creator's values through AI tutoring, Khanmigo is enforcing Khan Academy's values. It's easy to imagine how someone with more authoritarian values could use this same technology (e.g. to teach students to follow instructions without question) to indoctrinate students at scale.

TedDoesntTalk · 3 years ago
> Khanmigo is enforcing Khan Academy's values

Every teacher does that, sometimes intentionally and sometimes not.

nonethewiser · 3 years ago
Interesting. Should they drop support for Chinese?
hrn_34 · 3 years ago
More authoritarian values? Isn't this what's happening now from the perspective of foreign countries? The USA enforcing values foreign to these countries.
turtleyacht · 3 years ago
Ractors for hire!

Once AI can generate videos, we just need pervasive surveillance capability to insert context-aware content into the stories.

The Primer was able to identify the protagonist's family member and pet toys in its personalized myth.

I wonder which TTRPG sourcebook will have the first credit to ChatGPT in DriveThruRPG :)

Procedurally-generated worlds plus conversational NPCs!

hesdeadjim · 3 years ago
Unfortunately, the guardrails of the current systems prevent, in my opinion, interesting antagonists in fiction or games. I've experimented with some writing, and even when you can convince ChatGPT to give you examples of negative behavior, it will attempt to bring things back to a wholesome version. GPT-4 having more effective guardrails makes this problem worse.
LegitShady · 3 years ago
For younger readers Monica Hughes' classic "Devil on my back" is more explicitly about a society where those who can interface with the web via computers installed into them become higher caste and those who cannot become lower caste, resulting in a revolution. Definitely aimed at a younger audience than Stephenson.
ortusdux · 3 years ago
See also: the Mind Game from Ender's Game

https://enderverse.fandom.com/wiki/Mind_Game

basch · 3 years ago
(I haven't read The Diamond Age.)

The eventuality I see is everybody talking to each other through LLMs, and the world ending up in some kind of Tower of Babel situation. You wear an AR/MR headset, and all conversation is translated in both directions through some kind of bytecode intermediary. Each person's language evolves to be incompatible with everyone else's, and only your LLM can understand you. Then the power goes out, and nobody can talk to anybody.

vsareto · 3 years ago
>It's pretty fascinating to consider the implications and whether that's something we'll want, bc it may soon be possible.

I wouldn't be surprised if it told people to take some path of least resistance to $10+ million so you don't have to work to survive, then you can mostly do whatever you want.

falcor84 · 3 years ago
Having heard many recommendations, I tried reading that book more than once, but have real issues with Stephenson's writing style there. Could you please recommend an overview of the qualities of the AI tutor that I could go over instead?

jcims · 3 years ago
I’m sitting on the couch with my new grandson. He’s six months old. What is school going to look like for him over the course of his education? Should be interesting.
Buttons840 · 3 years ago
My daughter is in 2nd grade, and math is her favorite subject. I liked math too. I've looked forward to helping her learn more advanced math through the years, but a doubt has entered my mind: will I have any reason to help her?
lajamerr · 3 years ago
AI can efficiently teach your daughter how to solve math problems, but as a human, you can provide her with the context and understanding of why and what she's learning. AI is great at handling the technical aspects, freeing us to focus on cultivating our uniquely human qualities of empathy, creativity, and critical thinking. Letting the AI teach the how and the human teach the why and what can lead to a more holistic and enriching education.
WASDx · 3 years ago
This sounds like an argument from when calculators were invented. We will keep getting more powerful technologies, and you will need to understand them. What we should be learning will continue to gradually shift.
pixl97 · 3 years ago
Honestly, it depends on the growth rate of our models and whether they converge with human intelligence without requiring half the computing power on the planet.

If we get slow incremental progress over the years then it will act like an interactive personalized encyclopedia that can teach you based on your strengths and help bolster you where you are weak, as the technology improves that is.

Now, it's still scary as hell in some ways even in this slow-growth mode. Authoritarians will love a system like this that just forgets particular parts of history and reports back to the mothership if you ask too many 'wrong' questions. So the actual implementation will be interesting to see.

Now, if we have explosive growth to AGI and beyond, then pretty much every bet about the future is cancelled and we'll have more luck in taking random guesses, gluing them to the wall, and then throwing darts at them blindfolded.

addandsubtract · 3 years ago
Xaeon-12 wakes up at 8am, has a bowl of cereal and puts on her VR goggles. She is greeted by her tutor Tom. Tom will guide her through advanced "AI prompt" class today. Xaeon-12 learns to create her own short story based on her dreams, which were recorded the night before. 12pm, time for lunch. After lunch, the kids all connect to play Roblox VR for 30 minutes. Suddenly, the doorbell rings, but no one else is home. Fortunately, Xaeon-12 doesn't have to answer the door, as everything is automated. The mailbot drops off some packages before flying back to the zeppelin mothership to retrieve the next one. Xaeon-12 is just finishing her last class, "circuit boards", when her dads come home from the golf range – with an abundance of free, renewable energy, working for a basic income has become irrelevant.

BigCryo · 3 years ago
Reminds me of that science fiction story by Ray Bradbury called "The Veldt"... computers raise the kids.
yarone · 3 years ago
Another one that came to my mind: "The Fun They Had" by Asimov

"Margie was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had."

http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%2...

the_af · 3 years ago
It's funny that you mention "The Fun They Had", because if you visit its Wikipedia page now, the summary looks like it was written by a bot. It reads like "basic" English, only with a weird quality to it -- either written by someone whose grasp of the language is very poor, or by ChatGPT. I'd say worse than ChatGPT, actually: the sentences are short and almost mechanical. Go look at it.

The edit changing a much more human-sounding plot summary to this version was made in September 2022, in case anyone is wondering.

PS: if GPT derives much of its data from Wikipedia, and people start using GPT to write Wikipedia articles, I wonder what kind of strange feedback loop we're getting into.

tectonic · 3 years ago
A Young Lady's Illustrated Primer
yamtaddle · 3 years ago
... which had a real human on the other side, and the ones that didn't yielded rather different results.
lamplovin · 3 years ago
You get the full experience reading that story if you have deadmau5's "The Veldt" playing in the background as well.
p_j_w · 3 years ago
It's very upsetting that deadmau5 hasn't released an instrumental version of this song.