Also, my laptop has no Windows 11 available for it because 7th-gen Intel isn't good enough anymore. I don't think it's outdated, but ask Microsoft about that.
And voice-to-text is just as broken. I did side-by-side tests, and if you're in perfect conditions, facing the phone directly, with no background noise (i.e., never), then the 7 performed okay, but it would fail horribly in any other case. The 2XL performed quite well, relatively. I even replaced the 7 thinking there was an issue, but the new one did the same thing.
Unfortunately, the lack of updates means the 2 is no longer usable for many purposes.
If Amazon hadn't taken that extreme step in the first place, the stakes wouldn't be so high, and there would be less reason to discipline the employee (for the record, I don't think they should be fired in any case).
And yes, I recanted on the "firing" part, but I still feel that Amazon's "resolution" here was weak sauce compared to the "extreme" action taken in the first place. At this point, I'm guessing they wish they'd offered the customer some token compensation (say, 2 years of free Prime at a minimum) in return for an NDA on the topic. 20/20 hindsight ;-)
This whole calling for punishment is what actually drives a culture in which these kinds of things do happen more frequently, because everyone involved just wants to cover their asses.
- A customer can be mistakenly called a racist
- The home automation systems they bought and paid for can be disabled
- Any digital content (Kindle books, Amazon Prime Video purchases, Audible books, etc.) they bought (sorry, "licensed") can be revoked
- It takes more than a week to resolve after the customer provides clear evidence of the company's mistake. I mean, good grief, at least the manager/executive should have reactivated the customer's account during the review process, but they opted for "guilty until we've taken our sweet time reviewing the evidence and made sure they're innocent".
- After all that, the customer isn't even offered an apology, much less compensation.
This isn't just a "mistakes and failures happen" situation. Failures and mistakes occurred at multiple points in this process and along the decision chain, and apparently no one involved had the common sense to break out of the resulting insanity-loop.
Short summary - Someone had tied most of their home automation into Alexa and found their account suddenly cancelled one day. They went through the automated recovery systems and were told to contact support, which they did. Support ended up transferring them to an Amazon exec (let's assume "manager") who told them their account was disabled because an Amazon delivery driver reported that someone said something racist to them over their video doorbell (which, ironically, wasn't a Ring).
Upon investigating, checking cameras, logs, etc., the owner determined that (a) no one was home at the time of the delivery, (b) the driver was wearing headphones, and (c) the doorbell had given an automated "Hello, how can I help you?" response to the driver as they were walking away (presumably a ring-and-dash or drop-and-dash delivery, as usual).
With the headphones on, the driver had apparently completely misunderstood.
It took over a week to get Amazon to review all the evidence and reactivate the account. No apology at that point (although I believe they have since apologized).
That's a bad look for Amazon, and the YouTuber makes a valid point that it's a bad idea to trust control of your home to a company that will make such boneheaded decisions.
IMHO, the only correct response for Amazon here is firing at least two of the people involved in the debacle, apologizing publicly, and promising to review and adapt their policies in response to the incident. Any halfway decent PR department at anything other than a mega-monopoly would be scurrying to do exactly that, but apparently not Amazon.
But spamming thousands of answers an hour automatically and wanting the community to do all the work is just not sustainable, I feel. It'll also kill the sense of community if half the actors are bots.
I think it's way more likely that poor answers won't mention the use of LLMs to generate the answer, while good answers aided by LLMs will more often mention it.
Punishing honesty just seems incredibly counterproductive.
Automatic detection is downright dystopian... being censored by an algorithm because it mistook my effort and work for an LLM's output.
And while I'm not a moderator, as just a user I've flagged over 1,200 answers on Stack Overflow (and several of the smaller communities like Ask Ubuntu) that were subsequently removed. Automatic detection was never the sole criterion used to determine whether something was AI - it's entirely possible to spot GPT content using multiple methods. I don't publicly talk about most of these, since we do have a group of users (sometimes spammers) who attempt to hide their use and make it more difficult to detect. See some of my additional notes on the topic at https://meta.stackexchange.com/a/389674/902710
For example, here on HN we have a rule: if you see bot content, you don't mention it in the thread. You report it and let the admins decide. Anything else just turns into flamewars.
I saw one user yesterday post 10 lengthy, detailed answers in an hour, in 3 different programming languages. But the Mods aren't allowed by SE to consider that (or pretty much anything) to be an indicator that it's AI-generated.