Readit News
Hugsun commented on Court orders restart of all US offshore wind power construction   arstechnica.com/science/2... · Posted by u/ck2
koolba · 9 days ago
> That's the reason why the courts are regularly ruling against the administration -- they're pretending to legal authority they don't have in the first place.

Lower courts. The track record of this administration at the SCOTUS is 90%.

Hugsun · 9 days ago
SCOTUS is indeed compromised.
Hugsun commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
n1b0m · 2 months ago
Ultra-wealthy individuals legally minimise their tax liability by:

Receiving a relatively low official salary (Bezos's Amazon salary was $81,840 for many years).

Not receiving dividends, so the wealth remains in stock that is not taxed annually.

Borrowing money against their stock holdings to fund their lifestyle. Loans are not considered income and are therefore not taxable, and the interest on the loans can sometimes be used as a deduction.

Hugsun · 2 months ago
I heard from a moderately reliable source that this loophole isn't as popular as the zeitgeist implies. I'd love to know how true that is, and if so, how the rich actually finance themselves.
Hugsun commented on The Perplexing Appeal of the Telepathy Tapes   asteriskmag.com/issues/12... · Posted by u/surprisetalk
Hugsun · 3 months ago
I listened to a few episodes on a recommendation. The episodes lead you to believe that the host is an agnostic person who is just curious about these reports. They are not: the original lead came from another podcast for believers in the supernatural.

They also present the cameraman as a token skeptic, who is of course quickly swayed into belief.

They lean heavily on a host of tricks with long histories of failing to reproduce when tested rigorously.

A "scientist" (a known crackpot and woo believer) is brought in to make the experiments appear sound, and their terrible academic reputation is explained away with conspiratorial arguments.

I found TT wholly unconvincing and consider it a scam to get people to pay for the "actual evidence". I won't pay, of course, and based on the publicly available material I confidently assume that evidence is poor.

Hugsun commented on GPT-5 Thinking in ChatGPT (a.k.a. Research Goblin) is good at search   simonwillison.net/2025/Se... · Posted by u/simonw
simonw · 5 months ago
Better source validation is one of the main reasons I'm excited about GPT-5 Thinking for this. It would be interesting to try your Gemini prompts against that and see how the results compare.
Hugsun · 5 months ago
I've found GPT-5 Thinking to perform worse than o3 did on tasks of a similar nature. It makes more bad assumptions that derail its train of thought.
Hugsun commented on Structured Output with LangChain and Llamafile   blog.brakmic.com/structur... · Posted by u/brakmic
Hugsun · 8 months ago
The version of llama.cpp that Llamafile uses supports structured outputs. Don't waste your time with bloat like langchain.

Think about why langchain has dozens of adapters that all target services describing themselves as OAI-compatible, Llamafile included.

I'd bet you could point some of them at Llamafile and get structured outputs.

Note that they can be made 100% reliable when done properly. They're not done properly in this article.
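As a rough sketch (my own, not from the article): you can get schema-constrained JSON straight from Llamafile's OpenAI-compatible endpoint, no langchain involved. This assumes Llamafile is serving on its default port 8080 and that the bundled llama.cpp server accepts a response_format field with a JSON schema; older builds take a GBNF grammar parameter instead, so the exact field names depend on the version Llamafile ships.

    # Sketch: structured output straight from Llamafile's OpenAI-compatible server.
    # Assumes Llamafile is serving on localhost:8080 and its bundled llama.cpp
    # accepts response_format with a JSON schema (older builds take a GBNF
    # "grammar" field instead).
    import json
    import requests

    schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "year_of_birth": {"type": "integer"},
        },
        "required": ["name", "year_of_birth"],
    }

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local",  # Llamafile serves whichever model it was built with
            "messages": [
                {"role": "user", "content": "Who wrote the first computer program? Answer as JSON."},
            ],
            "response_format": {"type": "json_object", "schema": schema},
            "temperature": 0,
        },
        timeout=300,
    )
    resp.raise_for_status()
    answer = json.loads(resp.json()["choices"][0]["message"]["content"])
    print(answer)

When the schema is enforced by the server's grammar-constrained sampling rather than by prompting alone, the output parses every time; that's the "done properly" part.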

Hugsun commented on Structured Output with LangChain and Llamafile   blog.brakmic.com/structur... · Posted by u/brakmic
zingababba · 8 months ago
What should be used instead?
Hugsun · 8 months ago
I gave up on it when it wouldn't let me see the prompt that went to the LLM without using their proprietary service. I'd recommend just using the APIs directly; they're very simple (see the sketch below). There might be some simpler wrapper library if you want all the providers and can't be bothered to implement support for each one. Vercel's ai-sdk seems decent for JS.
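To illustrate the "just use the API directly" point, here is a minimal sketch using the official openai Python client pointed at an OpenAI-compatible endpoint; the base URL, key, and model name are placeholders. The messages list is exactly what the model receives, so nothing is hidden behind a framework.

    # Sketch: calling an OpenAI-compatible API directly, no framework in between.
    # The base_url, api_key, and model below are placeholders for your provider;
    # the messages list is the entire prompt the model sees.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example.com/v1",  # placeholder endpoint
        api_key="YOUR_KEY",
    )

    messages = [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize why direct API calls are simple."},
    ]

    reply = client.chat.completions.create(model="some-model", messages=messages)
    print(reply.choices[0].message.content)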
Hugsun commented on Major sugar substitute found to impair brain blood vessel cell function   medicalxpress.com/news/20... · Posted by u/wglb
hedora · 8 months ago
Studies have shown for decades that artificial (and non-nutritional organic) sweeteners are much worse than sugar.

For instance, they disrupt your metabolism, so equivalently sweet amounts of sweeteners cause more weight gain than sugar. (Due to increased hunger vs. eating nothing, decreased metabolism and decreased calorie burn.)

The study in the article isn’t surprising at all. Links between NutraSweet and migraine headaches have been well understood for a long time. It’s not surprising that other similar chemicals have similar negative side effects.

There’s no valid reason to use artificial sweeteners (other than diabetes, but even then, gaining weight from the sweeteners is a problem if the diabetes is weight related.)

Hugsun · 8 months ago
I was under the impression that this is not the case. Aspartame has been studied a lot and not found to be harmful.
Hugsun commented on AGI is not multimodal   thegradient.pub/agi-is-no... · Posted by u/danielmorozoff
patrickscoleman · 8 months ago
It feels like some of the comments are responding to the title, not the contents of the article.

Maybe a more descriptive but longer title would be: AGI will work with multimodal inputs and outputs embedded in a physical environment rather than as a Frankenstein combination of single-modal models (what today is called multimodal), and throwing more computational resources at the problem (scale maximalism) will be improved upon by thoughtful theoretical approaches to data and training.

Hugsun · 8 months ago
I discovered how common this is when I posted a long article about LLM reasoning. Half the comments spoke of the exact things covered in the article as if they were original ideas.
Hugsun commented on Adventures in Symbolic Algebra with Model Context Protocol   stephendiehl.com/posts/co... · Posted by u/freediver
Hugsun · 9 months ago
I was very pleased to discover that Mistral's Le Chat has built-in support for Python code execution and that sympy is importable.

It uses it regularly, and reliably does so when asked to.
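For a sense of what that looks like (my example, not something Le Chat produced): the kind of sympy snippet such a code-execution tool can run when asked to do symbolic algebra.

    # The kind of sympy snippet a code-execution tool can run for symbolic algebra.
    import sympy as sp

    x = sp.symbols("x")
    print(sp.simplify(sp.sin(x) ** 2 + sp.cos(x) ** 2))       # 1
    print(sp.integrate(sp.exp(-x ** 2), (x, -sp.oo, sp.oo)))  # sqrt(pi)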

Hugsun commented on Ollama violating llama.cpp license for over a year   github.com/ollama/ollama/... · Posted by u/Jabrov
Koshima · 9 months ago
I think it’s fair to push for clear attribution in these cases, but it’s also important to remember that the MIT license is intentionally permissive. It was designed to make sharing code easy without too many hoops. If Ollama is genuinely trying to be part of the open-source community, a little transparency and acknowledgment can avoid a lot of bad blood.
Hugsun · 9 months ago
Consensus seems to be forming around the fact that Ollama is not genuinely trying to be part of the open-source community.

u/Hugsun

Karma: 682 · Cake day: June 9, 2021
About
https://arnaldur.be