Readit News
halyconWays commented on Gemma 3 270M: Compact model for hyper-efficient AI   developers.googleblog.com... · Posted by u/meetpateltech
yomismoaqui · 12 days ago
Evaluating a 270M model on encyclopedic knowledge is like opening a heavily compressed JPG image and saying "it looks blocky"
halyconWays · 12 days ago
Me: "List the second word in your comment reply"

You: "I'm sorry, I don't have an encyclopedia."

I'm starting to think you're 270M.

halyconWays commented on Ollama and gguf   github.com/ollama/ollama/... · Posted by u/indigodaddy
tarruda · 15 days ago
I recently discovered that ollama no longer uses llama.cpp as a library; instead they link to the low-level library (ggml), which requires them to reinvent a lot of wheels for absolutely no benefit (if there's some benefit I'm missing, please let me know).

Even using llama.cpp as a library seems like overkill for most use cases. Ollama could make its life much easier by spawning llama-server as a subprocess listening on a unix socket and forwarding requests to it.

One thing I'm curious about: does ollama support strict structured output or strict tool calls adhering to a JSON schema? It would be insane to rely on a server for agentic use unless it can guarantee the model will only produce valid JSON. AFAIK this feature is implemented by llama.cpp, which they no longer use.
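
Roughly what I mean, as an untested sketch (I'm not sure llama-server can bind a unix socket, so this uses a loopback port; the flag and field names come from llama.cpp's llama-server and are worth double-checking against your build):

    # Sketch: run llama-server as a child process and forward requests to it.
    # Assumes `llama-server` (from llama.cpp) is on PATH and model.gguf exists.
    import json
    import subprocess
    import time

    import requests

    server = subprocess.Popen(
        ["llama-server", "-m", "model.gguf", "--host", "127.0.0.1", "--port", "8081"]
    )
    time.sleep(5)  # crude; a real wrapper would poll the server's /health endpoint

    # llama.cpp's /completion endpoint can constrain output to a JSON schema
    # (it compiles the schema into a grammar), which is the "guaranteed valid
    # JSON" part above.
    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        "required": ["name", "age"],
    }
    resp = requests.post(
        "http://127.0.0.1:8081/completion",
        json={
            "prompt": "Return the user as JSON: Alice, 30.",
            "n_predict": 128,
            "json_schema": schema,
        },
    )
    print(json.loads(resp.json()["content"]))

    server.terminate()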

halyconWays · 15 days ago
>(if there's some benefit I'm missing, please let me know).

Makes their VCs think they're doing more, and have more ownership, rather than being a do-nothing wrapper with some analytics and S3 buckets that rehost models from HF.

halyconWays commented on Air Force unit suspends use of Sig Sauer pistol after shooting death of airman   nhpr.org/nh-news/2025-07-... · Posted by u/duxup
zxcvbn4038 · a month ago
I recently bought a SIG P320, and a week later I started reading articles about it discharging on its own. =P It doesn't happen all the time, but apparently if the safety lever spring's thickness is off by a thousandth of an inch, the height of the post it fits on is also off by a thousandth of an inch, and you drop the pistol at just the right angle with enough force, it can fire; the FBI reportedly got it to discharge once during testing, though officially the results are inconclusive. Now some law enforcement agencies are quietly replacing the P320 with the Glock 19. Personally, I'm keeping mine because it's a great gun and I love that 21-round magazine, but I sent in my warranty card in case there's a recall or something similar.
halyconWays · a month ago
At least one of those critical components (P/N 1300739-R) is manufactured in India. Is that a contributing factor?
halyconWays commented on Structured Output with LangChain and Llamafile   blog.brakmic.com/structur... · Posted by u/brakmic
Hugsun · 2 months ago
I gave up after it didn't let me see the prompt that went into the LLM, without using their proprietary service. I'd recommend just using the APIs directly; they're very simple. There might be some simpler wrapper library if you want all the providers and can't be bothered to implement support for each. Vercel's ai-sdk seems decent for JS.
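
For example, hitting an OpenAI-style chat endpoint directly is only a few lines (rough sketch; the base URL, key, and model name are placeholders for whichever provider you use):

    # Sketch: call a chat-completions API directly, so the exact prompt being
    # sent is right there in the request body. Base URL, key, and model name
    # are placeholders.
    import requests

    BASE_URL = "https://api.openai.com/v1"  # or any OpenAI-compatible server
    API_KEY = "sk-..."                      # placeholder

    messages = [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize GGUF in one sentence."},
    ]

    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "gpt-4o-mini", "messages": messages},
    )
    print(resp.json()["choices"][0]["message"]["content"])
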
halyconWays · 2 months ago
>I gave up after it didn't let me see the prompt that went into the LLM, without using their proprietary service.

Haha, really?

halyconWays commented on Structured Output with LangChain and Llamafile   blog.brakmic.com/structur... · Posted by u/brakmic
Hugsun · 2 months ago
The version of llama.cpp that Llamafile uses supports structured outputs. Don't waste your time with bloat like langchain.

Think about why langchain has dozens of adapters that all target services describing themselves as OAI-compatible, Llamafile included.

I'd bet you could point some of them at Llamafile and get structured outputs.

Note that they can be made 100% reliable when done properly. They're not done properly in this article.
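
Something like the following ought to work, though I haven't tested it (the port is Llamafile's default OpenAI-compatible endpoint, the model name and key are dummies that Llamafile ignores, and whether the JSON constraint is enforced server-side depends on the llama.cpp build inside the Llamafile):

    # Sketch: point LangChain's OpenAI adapter at a locally running Llamafile,
    # which serves an OpenAI-compatible API at http://localhost:8080/v1 by default.
    from langchain_openai import ChatOpenAI
    from pydantic import BaseModel

    class Person(BaseModel):
        name: str
        age: int

    llm = ChatOpenAI(
        base_url="http://localhost:8080/v1",
        api_key="sk-no-key-required",  # dummy; Llamafile doesn't check it
        model="LLaMA_CPP",             # dummy; Llamafile serves whatever model it wraps
        temperature=0,
    )

    # json_mode requests JSON-only output (response_format={"type": "json_object"})
    # and parses the result into Person; hard schema enforcement depends on the
    # llama.cpp version inside the Llamafile.
    structured = llm.with_structured_output(Person, method="json_mode")
    print(structured.invoke(
        "Return a JSON object with keys name and age for: Alice is 30 years old."
    ))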

halyconWays · 2 months ago
>Don't waste your time with bloat like langchain.

Amen. See also: "Langchain is Pointless" https://news.ycombinator.com/item?id=36645575

halyconWays commented on Deepseek R1-0528   huggingface.co/deepseek-a... · Posted by u/error404x
chvid · 3 months ago
Benchmarks seem like a fool's errand at this point; people are over-tuning models to specific, already-published tests rather than focusing on making them generalize.

Hugging Face has a leaderboard, and it seems dominated by models that are fine-tunes of various common open-source models, yet they don't seem to be broadly used:

https://huggingface.co/open-llm-leaderboard

halyconWays · 3 months ago
>over-tuning models to specific, already-published tests rather than focusing on making them generalize.

I think you just described SATs and other standardized tests

halyconWays commented on Coding without a laptop: Two weeks with AR glasses and Linux on Android   holdtherobot.com/blog/202... · Posted by u/mikenew
zmmmmm · 3 months ago
that's really good to know

When you look at reviews they all exclaim how clear and crisp it is, but they are virtually never looking at text, which is approximately the only thing I want to look at.

halyconWays · 3 months ago
I compare it to VR headsets, and I've used most of them. I currently use a Quest Pro and consider that clearer and easier to read than the Air 2 Pros
halyconWays commented on Coding without a laptop: Two weeks with AR glasses and Linux on Android   holdtherobot.com/blog/202... · Posted by u/mikenew
zmmmmm · 3 months ago
Can you elaborate what the main issue is for you?
halyconWays · 3 months ago
Mainly the lack of text clarity and resolution, jitter when trying to keep the image still with the Xreal Beam, and the low FOV.
halyconWays commented on Coding without a laptop: Two weeks with AR glasses and Linux on Android   holdtherobot.com/blog/202... · Posted by u/mikenew
halyconWays · 3 months ago
Air 2 Pros!? I have those and can't stand using them for work. I was hoping the One Pros would be a big enough step up that I could use AR glasses for daily productivity.

u/halyconWays

Karma: 25 · Cake day: March 19, 2020