xnx · 10 months ago
I'm guessing this is a result of these two concepts not being far from each other in vector space? Like a data-driven version of "miserable failure" Google Bombing: https://en.wikipedia.org/wiki/Google_bombing
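A minimal sketch of the vector-space idea above: if two words' embedding vectors point in similar directions, a model can conflate them. The vectors here are invented for illustration and are not real embeddings from any model.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up 3-d "embeddings" purely to illustrate the geometry.
emb = {
    "word_a": [0.9, 0.1, 0.3],   # hypothetical: near word_b in this toy space
    "word_b": [0.8, 0.2, 0.4],
    "word_c": [0.1, 0.9, 0.0],   # unrelated word, pointing elsewhere
}

print(cosine(emb["word_a"], emb["word_b"]))  # high similarity
print(cosine(emb["word_a"], emb["word_c"]))  # low similarity
```

In a real system the confusion would come from high-dimensional learned embeddings, but the failure mode is the same: nearest-neighbor lookups can return a semantically associated word rather than the acoustically correct one.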
cozzyd · 10 months ago
Sounds like a bug where they got it the wrong way around
mirawelner · 10 months ago
People are resorting to protest by causing inconvenience, which is probably one of the more effective forms of protest anyway.
Clubber · 10 months ago
All those people sitting in the middle of the road blocking work traffic aren't endearing anyone to their cause.
itishappy · 10 months ago
You're talking about them. Their goal is not endearment.
piva00 · 10 months ago
Anti-apartheid protesters also didn't try to endear anyone. Why do you think protests need to entertain people and endear them to a cause? I'd venture you've never actively participated in such acts...
Aloisius · 10 months ago
I haven't been able to reproduce this. It correctly transcribes racist as racist.
acdha · 10 months ago
It changed last night. I reproduced it repeatedly[1] but then it stopped happening a bit later. At first I thought it was the on-device recognition but behaviour was identical both with and without a network connection.

1. https://news.ycombinator.com/item?id=43179712

acdha · 10 months ago
This smells like an LLM trying to correct the output of a speech recognition system. I said the word “racist” repeatedly and got this unedited output. You could see it changing the text momentarily after the initial recognition result, and given the way Mamaroneck sounds nothing like either of the other words I’d bet this thing was trained on news stories:

“Racist, Trump, Mamaroneck racist Trump Mamaroneck, racist racist racist racist Trump Mamaroneck”

irusensei · 10 months ago
It happens during speech-to-text, so it's unlikely an LLM is involved.
acdha · 10 months ago
That’s what I was referring to in the first sentence: you can see the raw text from the speech system change afterwards. Normally that’s things like punctuation and ambiguous words like their/they’re. That secondary process felt like a system which operates on text tokens, because “racist” and “Mamaroneck” don’t sound similar at all.
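The two-stage pipeline described above can be sketched as follows. The substitution table is a stand-in for whatever learned rewriting model Apple actually uses (which is unknown); it exists only to show how a purely text-level second pass can swap in a word that sounds nothing like the audio.

```python
# Stage 1 output: hypothetical raw transcript from the acoustic model.
RAW_TRANSCRIPT = "racist trump mamaroneck"

# Stage 2: a hypothetical learned text-level rewrite rule. A real system
# would use a language model, not a lookup table; this is illustrative only.
TEXT_PASS_REWRITES = {"racist": "trump"}

def secondary_pass(tokens):
    # Operates purely on text tokens and never re-consults the audio,
    # which is why acoustic dissimilarity doesn't prevent the swap.
    return [TEXT_PASS_REWRITES.get(t, t) for t in tokens]

raw = RAW_TRANSCRIPT.split()
corrected = secondary_pass(raw)
print(" ".join(corrected))
```

This matches the observed behavior: the initial recognition result appears first, then is momentarily rewritten by the second pass.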
rezaprima · 10 months ago
I would immediately look for alternatives with better accuracy.
irusensei · 10 months ago
Silly, idiotic activism aside, it's concerning: if someone working at Apple managed to slip such a bold change into the OS, could a malicious group do the same?
acdha · 10 months ago
There’s another angle about ML systems: say this is some issue with a model having two terms too close to each other, how would you prove it wasn’t malice or offer assurances that something like that won’t happen again? A lot of our traditional practices around change management and testing are based on a different model.