VHRanger commented on I wasted years of my life in crypto   twitter.com/kenchangh/sta... · Posted by u/Anon84
jobs_throwaway · 9 days ago
How has it not been a store of value?
VHRanger · 9 days ago
A store of value is an asset with as close to 0% volatility in price as possible.

Bitcoin is a speculative asset: it has very high price volatility. It is not a store of value in the proper sense of the term.

VHRanger commented on RAM is so expensive, Samsung won't even sell it to Samsung   pcworld.com/article/29989... · Posted by u/sethops1
davely · 13 days ago
About a month ago, the mobo for my 5950x decided to give up the ghost. I decided to just rebuild the whole thing and update from scratch.

So I went crazy and bought a 9800X3D, and purchased a ridiculous amount of DDR5 RAM (96GB, which matches my old machine's DDR4 RAM quantity). At the time, it was about $400 USD or so.

I’ve been living in blissful ignorance since then. Seeing this post, I decided to check Amazon. The same amount of RAM is currently $1200!!!

VHRanger · 13 days ago
Same here. I got 96GB of high-end 6000MHz DDR5 this summer for $600 CAD, and now it's two and a half times that at $1500 CAD.
VHRanger commented on Mistral 3 family of models released   mistral.ai/news/mistral-3... · Posted by u/pember
giancarlostoro · 14 days ago
Maybe give Perplexity a shot? It has Grok, ChatGPT, Gemini, and Kimi K2; I don't think it has Mistral, unfortunately.
VHRanger · 14 days ago
Kagi has Mistral as well
VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
Havoc · 25 days ago
Easier access to the search API in general would be awesome. If I'm paying someone for search, it may as well be Kagi.
VHRanger · 25 days ago
If you send a message in our Discord we can give you a beta key, I think. It costs nothing to ask over there; say Matt R sent you.
VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
spott · a month ago
I want a Kagi MCP server I can use with ChatGPT or Claude.

I don't want to use Kagi Ultimate (I use too many other features of ChatGPT and Claude); I just want to be able to improve the results of my AI models with Kagi.

VHRanger · a month ago
What ChatGPT / Claude features do you use that we don't support?

We have an MCP server I can give you access to for search immediately. Down the line, a search API and a chat completions API to our assistant are in the pipeline.

VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
sometimes_all · a month ago
I tried both of these prompts (along with the SINAC one, as per GP) in Sonnet 4.5 and Gemini 3, and they both answered correctly for all three. Both also provided context on the chess question.
VHRanger · a month ago
All of this will also depend on the model's settings (reasoning effort, temperature, top_k, etc.).

That's why your benchmarks should generally be a bit broader (>10 questions for a personal setup); otherwise you overfit to noise.
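
To make that concrete, here is a minimal sketch of what a slightly broader personal benchmark could look like: a fixed set of questions, pinned sampling settings, and repeated runs so you can see how much of the score is just sampling noise. This is only an illustration, not anything Kagi ships; the OpenAI SDK, the model name, the question list, and the naive substring check are all placeholder assumptions.

```python
# Personal-benchmark sketch: many questions, fixed sampling settings,
# repeated runs so you can tell signal from sampling noise.
# Assumes the OpenAI Python SDK; model name and questions are placeholders.
from statistics import mean, stdev
from openai import OpenAI

client = OpenAI()

# In a real setup this list should have a dozen-plus entries.
QUESTIONS = [
    ("What is the capital of Australia?", "canberra"),
    ("Who wrote 'The Selfish Gene'?", "dawkins"),
    # ...
]

SETTINGS = {"temperature": 0.2, "top_p": 0.9}  # record these alongside the results
RUNS = 3

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        **SETTINGS,
    )
    return (resp.choices[0].message.content or "").lower()

run_scores = []
for _ in range(RUNS):
    # Naive check: does the expected keyword appear in the answer?
    correct = sum(expected in ask(q) for q, expected in QUESTIONS)
    run_scores.append(correct / len(QUESTIONS))

print(f"accuracy: {mean(run_scores):.2f} ± {stdev(run_scores):.2f} over {RUNS} runs")
```

If the spread across runs is comparable to the difference between two models you're comparing, the "winner" is mostly noise.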

VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
grayhatter · a month ago
> LLMs are bullshitters. But that doesn't mean they're not useful

> Note: This is a personal essay by Matt Ranger, Kagi’s head of ML

I appreciate the disclaimer, but never underestimate someone's inability to understand something, when their job depends on them not understanding it.

Bullshit isn't useful to me; I don't appreciate being lied to. You might find use in declaring the two different, but sufficiently advanced ignorance (or incompetence) is indistinguishable from actual malice, and thus they should be treated the same.

Your essay, while well written, doesn't do much to convince me any modern LLM has a net positive effect. If I have to duplicate all of its research to verify none of it is bullshit, which will only be harder after using it given the anchoring and confirmation bias it will introduce... why?

VHRanger · a month ago
That's a fair comment.

Just to give my point of view: I'm head of ML here, but I'm choosing to work here for the impact I believe I can have. I could work somewhere else.

As for the net positive effect, the point of my essay is that the trust relationship you raise (not having to duplicate the research, etc.) is, to me, a product design issue.

LLMs are fundamentally capable of bullshit. So products that leverage them have to keep that in mind and build workflows that don't end up breaking user trust in them.

The way we're currently thinking of doing that is to keep the user in the loop and incentivize the user to check sources by making it as easy as possible to quickly fact check LLM claims.

I'm on the same page as you that a model you can only trust 95% of the time is not useful, because it can't be trusted. So the product has to build an interaction flow that assumes that lack of trust but still makes something that is useful, saves time, respects user preferences, etc.

You're welcome to still think they're not useful for you, but that's the way we currently think about it and our goal is to make useful tools, not lofty promises of replacing humans at tasks.

VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
data-ottawa · a month ago
Search is AI now, so I don’t get what your argument is.

Since 2019, Google and Bing have both used BERT-style encoder-only architectures in search.

I’ve been using Kagi ki (now research assistant) for months and it is a fantastic product that genuinely improves the search experience.

So overall I’m quite happy they made these investments. When you look at Google and Perplexity this is largely the direction the industry is going.

They're building tools on top of other LLMs and basically running OpenRouter or something behind the scenes. They even show you your token use/cost against your allowance/budget on the billing page, so you know what you're paying for. They're not training their own from-scratch LLMs, which I would consider a waste of money at their size/scale.

VHRanger · a month ago
We're not running on OpenRouter; that would break the privacy policy.

We get specific deals with providers and use different ones for production models.

We do train smaller-scale stuff like query classification models (not trained on user queries, since I don't even have access to them!), but that's expected and trivially cheap.

VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
oidar · a month ago
Do you use pinned/deranked sites as an indicator of quality?
VHRanger · a month ago
I don't think we share them across accounts, no, but we do use your personal Kagi search config in assistant searches.
VHRanger commented on Kagi Assistants   blog.kagi.com/kagi-assist... · Posted by u/ingve
hatthew · a month ago
I'm a little confused about what the point of these is compared to the existing features/models that Kagi already has. Are they just supposed to be a one-stop shop where I don't have to choose which model to use? When should I use the Kagi quick/research assistant instead of, e.g., Kimi?

I tried the quick assistant a bit (I don't have Ultimate, so I can't try research), and while the writing style seems slightly different, I don't see much difference in information compared to using existing models through the general Kagi assistant interface.

VHRanger · a month ago
Quick Assistant is a managed experience, so we can add features to it in a controlled way that we can't apply to all the models we otherwise support at once.

For now, Quick Assistant has a "fast path" answer for simple queries. We can't support the upgrades we want to add there across all the models, because they differ in tool calling, citation reliability, context window, ability to not hallucinate, etc.

The responding model is currently Qwen3-235B from Cerebras, but we want to decouple user expectations from that so we can upgrade it to something else down the road. We like Kimi, but we couldn't get a stable experience for Quick on it at launch with current providers (tool-calling unreliability).
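
For readers curious what a "fast path" means in practice, here is a minimal sketch of the general pattern: a cheap check routes simple queries to a single fast completion, while everything else goes through a slower search-then-answer step. This is purely an illustration under my own assumptions, not Kagi's implementation; the OpenAI SDK, the model names, the `looks_simple` heuristic, and the `web_search` stub are all hypothetical stand-ins.

```python
# Illustrative "fast path" router (a generic sketch, not Kagi's code).
from openai import OpenAI

client = OpenAI()
FAST_MODEL = "gpt-4o-mini"   # stand-in for whatever fast model backs the quick path
RESEARCH_MODEL = "gpt-4o"    # stand-in for a stronger tool-calling model

def looks_simple(query: str) -> bool:
    # Placeholder heuristic; a real system would likely use a trained query classifier.
    return len(query.split()) < 12 and "compare" not in query.lower()

def web_search(query: str) -> list[str]:
    # Stub: a real pipeline would call a search backend and return ranked snippets.
    return [f"[snippet about: {query}]"]

def answer(query: str) -> str:
    if looks_simple(query):
        # Fast path: one direct completion, no tools.
        resp = client.chat.completions.create(
            model=FAST_MODEL,
            messages=[{"role": "user", "content": query}],
        )
    else:
        # Slow path: gather search context first, then answer with citations.
        context = "\n".join(web_search(query))
        resp = client.chat.completions.create(
            model=RESEARCH_MODEL,
            messages=[
                {"role": "system", "content": "Answer using the provided snippets and cite them."},
                {"role": "user", "content": f"{query}\n\nSnippets:\n{context}"},
            ],
        )
    return resp.choices[0].message.content or ""

print(answer("What year was the first transistor made?"))
```

Keeping the backing model behind a constant like `FAST_MODEL` is what lets the underlying model be swapped later without changing the user-facing product.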

u/VHRanger
Karma: 6377 · Cake day: March 28, 2017
About: singlelunch.com