RobRivera · a month ago
Text to speech has been a technology for a very long time. This is, in my opinion, a whole article about nothing, leaning on the AI label to garner views.

Yes, we may ask whether speculative uses of AI in other areas have negative implications, and those questions should be asked, but that isn't the case here.

It is very much like asking, upon the sight of a new motor carriage driving down a street, whether cars, upon invention, would start driving into random fields with no restraint, off-roading as if any car owner would do this. Important questions to ask of emergent technology, sure, but right now that motor carriage is on the road; let it be.

wahnfrieden · a month ago
Did you read the article?

> "The official could not confirm whether Burns employs the AI to draft full written decisions or only to read his written rulings aloud using text‑to‑speech software..."

He is using AI to read judgments and has not said either way whether AI also wrote them. Using it to read them raises the suspicion of further automation being employed. So it is not accurate to claim that the voice is known to be the full extent of the AI's involvement.

dspillett · a month ago
I can't read the article as it seems hugged to death and not archived elsewhere yet, but if this is the case I'm thinking about, he wrote everything and used a TTS system to read it as him.

Normally I'm relatively anti-generative-AI, but I don't see a big problem with this one. TTS has been used for a long time, just less convincingly so, often in work situations. Many people with disabilities that affect their verbal ability use it so they can communicate in a way that feels less impersonal than written form - if everyone else using TTS normalises this sort of thing, then it'll be a boost for those users.

My only concern here is that TTS systems based on generative tech have been known to hallucinate slight changes to the text they are reading. In legal contexts small changes in wording can have significant impact, so I hope he, or someone else, checks the output in detail after it is produced and before giving it to anyone else…

wahnfrieden · a month ago
> "The official could not confirm whether Burns employs the AI to draft full written decisions or only to read his written rulings aloud using text‑to‑speech software..."

He is using AI to read judgments and has not said either way whether AI also wrote them. Using it to read them raises the suspicion of further automation being employed. So it is not accurate to claim that the voice is known to be the full extent of the AI's involvement.

bitwize · a month ago
So if he's writing the decisions himself and using an AI voice to read them, big deal. It's pretty much a nothingburger, unless the AI voice somehow misread something in a legally relevant way. If he's using AI to generate decision text, that's a more serious issue.
nerevarthelame · a month ago
Generative text to speech models can hallucinate and produce words that are not in the original text. It's not always consequential, but a court setting is absolutely the sort of place where those subtle differences could be impactful.

Lawyers dealing with gen-AI TTS rulings should compare what was spoken to what was in the written order to make sure there aren't any meaningful discrepancies.
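That comparison is mechanical once you have a transcript of the audio. A minimal sketch in Python, assuming a hypothetical `find_discrepancies` helper and that a speech-to-text transcript of the spoken ruling has already been obtained (both are illustrations, not anything from the article):

```python
import difflib

def find_discrepancies(written: str, spoken: str) -> list[str]:
    """Compare the written order against a transcript of the spoken
    version, returning word-level differences. (Hypothetical helper;
    a real check would first need a speech-to-text transcript.)"""
    written_words = written.split()
    spoken_words = spoken.split()
    matcher = difflib.SequenceMatcher(None, written_words, spoken_words)
    diffs = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # report inserts, deletes, and replacements
            diffs.append(
                f"{tag}: written {' '.join(written_words[i1:i2])!r} "
                f"-> spoken {' '.join(spoken_words[j1:j2])!r}"
            )
    return diffs

# Example of exactly the kind of subtle, consequential change described above:
order = "The motion is denied without prejudice."
transcript = "The motion is denied with prejudice."
for d in find_discrepancies(order, transcript):
    print(d)
```

Word-level diffing (rather than character-level) keeps the report readable and surfaces exactly the small substitutions, like "without" becoming "with", that matter in a legal context.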

csallen · a month ago
People can also make mistakes while reading, and I suspect we do so just as frequently as, if not more than, gen AI text-to-speech algos.

It's the AI thinking that makes me wary, not AI text-to-speech.

AngryData · a month ago
I'm not sure I agree if it isn't necessary for a health issue. It depersonalizes the defendant and detaches the judge from the real human consequences of their decisions. It is a whole extra step toward gamifying the judicial process, which helps facilitate even worse abuses of the justice system than we already deal with.
datadrivenangel · a month ago
Except this judge is especially harsh, which suggests that he's very biased, and thus being more productive via AI seems like a bad outcome.

From TFA: "Burns approved just 2 percent of asylum claims between fiscal 2019 and 2025—compared with a national average of 57.7 percent."

silisili · a month ago
While not as bad as AI rendering the decision itself obviously, I wouldn't exactly say it's a nothing burger. It feels completely inauthentic and dystopian.

I can only imagine the hell of being nervous in a big court case waiting for the decision, and hearing that annoying TikTok lady deliver the bad news.

wahnfrieden · a month ago
Did you read the article?

> "The official could not confirm whether Burns employs the AI to draft full written decisions or only to read his written rulings aloud using text‑to‑speech software..."

He is using AI to read judgments and has not said either way whether AI also wrote them. Using it to read them raises the suspicion of further automation being employed. So it is not accurate to claim that the voice is known to be the full extent of the AI's involvement.

monerozcash · a month ago
This feels like a daily mail article for a slightly different audience. Is this what's now referred to as "rage baiting"?
mjw1007 · a month ago
Is this a real judge, or is an "Immigration Judge" one of those not-actually-a-judge decisionmakers employed by the executive?
Terr_ · a month ago
> one of those not-actually-a-judge decisionmakers

With all the hubbub these days of those same decision-makers writing "warrants", I consciously try to reframe them as "memos." (Ex: "I have a memo for your arrest.")

Sure, it may not be a term of art for executive-branch bureaucrats... but it's way less misleading for the public that associates "warrant" with a much weightier process.

It also underscores the absurd recklessness of ICE flunkies ramming cars and pointing guns into people's faces while hunting for what are often civil infractions. Not felonies, not misdemeanors, but the equivalent of parking tickets.

khuey · a month ago
The latter. They're not even real administrative law judges.
