For instance, they disrupt your metabolism, so equivalently sweet amounts of artificial sweeteners can cause more weight gain than sugar (due to increased hunger relative to eating nothing, plus decreased metabolism and decreased calorie burn).
The study in the article isn’t surprising at all. Links between NutraSweet and migraine headaches have been well understood for a long time. It’s not surprising that other, similar chemicals have similar negative side effects.
There’s no valid reason to use artificial sweeteners (other than diabetes, but even then, gaining weight from the sweeteners is a problem if the diabetes is weight-related).
Maybe a more descriptive but longer title would be: AGI will work with multimodal inputs and outputs embedded in a physical environment, rather than a Frankenstein combination of single-modal models (what today is called multimodal); and throwing more computational resources at the problem (scale maximalism) will be improved on by thoughtful theoretical approaches to data and training.
It will use it regularly, and reliably when asked to.
Based on this part:
> We set out to support a new engine that makes multimodal models first-class citizens, and getting Ollama’s partners to contribute more directly the community - the GGML tensor library.
And from clicking through a GitHub link they included:
https://github.com/ollama/ollama/blob/main/model/models/gemm...
My takeaway is that the GGML library (the backbone of llama.cpp) must expose some FFI (foreign function interface) that can be invoked from Go, so in the Ollama Go code they can write their own implementations of model behavior (like Gemma 3) that just call into the GGML magic. I think I have that right? I would have expected a detail like that to be front and center in the blog post.
I was surprised to see the amount of attribution in this post. They've been catching quite a bit of flak for this, so they might be adjusting.
It put an expensive API call inside a useEffect hook. I wanted the call elsewhere, and it fought me on it pretty aggressively. Instead of removing the call, it started changing comments and function names to say that the call was just loading already-fetched data from a cache (which was not true). I could not find a way to tell it to remove that API call from the useEffect hook; it just wrote more and more motivated excuses in the surrounding comments. It would have been very funny if it weren't so expensive.
Models can be trained to generate tokens with many different meanings, including visual, auditory, textual, and locomotive. Those alone seem sufficient to emulate a human to me.
It would certainly be cool to integrate some subsystems like a symbolic reasoner or calculator or something, but the bitter lesson tells us that we'd be better off just waiting for advancements in computing power.
Think about why langchain has dozens of adapters that all target services describing themselves as OAI-compatible, Llamafile included.
I'd bet you could point some of them at Llamafile and get structured outputs.
Note that structured outputs can be made 100% reliable when done properly (e.g., by constraining decoding so the model can only emit tokens that fit the schema). They're not done properly in this article.