Two months ago, Claude was great for "here is a specific task I want you to do to this file". Today, they seem to be pivoting towards "I don't know how to code but want this feature" usage. Which might be a good product decision, but makes it worse as a substitute for writing the code myself.
The demo leans heavily on "choose the words for the sentence", which avoids spelling/keyboard issues and maybe sidesteps the problems of N->N language mappings better. The "decoy selection" for multiple-choice answers also isn't great - I am getting full sentences mixed in with numbers as options for the translation of "three".
It also has the Duolingo-esque audio "reward" sounds. I personally hate them, but a lot of people feel otherwise.
There are, however, protections in many jurisdictions against having to honor contracts based on errors that should have been obvious to the other party ("too good to be true"), and other protections against various kinds of fraud - which may also apply here, since this was clearly not done in good faith.
If you have an AI chatbot on your website, I highly recommend communicating clearly to the user that nothing it says constitutes an offer, contract, etc., whatever it may say afterwards. As a company, you could be bound by a contract merely because someone could reasonably believe they entered into one with you. Claiming that it was a mistake or that your employee/chatbot messed up may not help. Do not bury the disclaimer in fine print either.
Or just remove the chatbot. They mostly piss people off rather than being useful.