Readit News
killthebuddha commented on Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix   netflixtechblog.com/uda-u... · Posted by u/Bogdanp
killthebuddha · 2 months ago
I feel like the Netflix tech blog has officially jumped the shark.
killthebuddha commented on OpenAI releases image generation in the API   openai.com/index/image-ge... · Posted by u/themanmaran
PeterStuer · 4 months ago
My number one ask as an almost-two-year OpenAI-in-production user: Enable Tool Use in the API so I can evaluate OpenAI models in agentic environments without jumping through hoops.
killthebuddha commented on Ask HN: Any insider takes on Yann LeCun's push against current architectures?    · Posted by u/vessenes
killthebuddha · 5 months ago
I've always felt like the argument is super flimsy because "of course we can _in theory_ do error correction". I've never seen even a semi-rigorous argument that error correction is _theoretically_ impossible. Do you have a link to somewhere where such an argument is made?
killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
mike_hearn · 8 months ago
Wouldn't it be the reverse? The word unreasonable is often used as a synonym for volatile, unpredictable, even dangerous. That's because "reason" is viewed as highly predictable. Two people who rationally reason from the same set of known facts would be expected to arrive at similar conclusions.

I think what Ilya is trying to get at here is more like: someone very smart can seem "unpredictable" to someone who is not smart, because the latter can't easily reason at the same speed or quality as the former. It's not that reason itself is unpredictable, it's that if you can reason quickly enough you might reach conclusions nobody saw coming in advance, even if they make sense.

killthebuddha · 8 months ago
Your second paragraph is basically what I'm saying but with the extension that we only actually care about reasoning when we're in these kinds of asymmetric situations. But the asymmetry isn't about the other reasoner, it's about the problem. By definition we only have to reason through something if we can't predict (don't know) the answer.

I think it's important for us to all understand that if we build a machine to do valuable reasoning, we cannot know a priori what it will tell us or what it will do.

killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
stevenhuang · 8 months ago
It's not clear any of that follows at all.

Just look at inductive reasoning. Each step builds from a previous step using established facts and basic heuristics to reach a conclusion.

Such a mechanistic process allows for a great deal of "predictability" at each step or estimating likelihood that a solution is overall correct.

In fact I'd go further and posit that perfect reasoning is 100% deterministic and systematic, and instead it's creativity that is unpredictable.

killthebuddha · 8 months ago
Perfect reasoning, with certain assumptions, is perfectly deterministic, but that does not at all imply that it's predictable. In fact we have extremely strong evidence to the contrary (e.g. we have the halting problem).
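A toy illustration of the deterministic-but-unpredictable point (my example, not from the thread): the Collatz iteration. Every step is perfectly mechanical, yet no known analysis predicts how many steps a given n takes without effectively running the computation.

```typescript
// Fully deterministic, yet the only known way to "predict" the step
// count for an arbitrary n is to execute the iteration itself.
function collatzSteps(n: number): number {
  let steps = 0;
  while (n !== 1) {
    n = n % 2 === 0 ? n / 2 : 3 * n + 1;
    steps++;
  }
  return steps;
}
```

Determinism guarantees the same answer every time; it says nothing about whether you can shortcut the computation that produces it.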
killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
bondarchuk · 8 months ago
Not necessarily true when you think about e.g. finding vs. verifying a solution (in terms of time complexity).
killthebuddha · 8 months ago
IMO verifying a solution is a great example of how reasoning is unpredictable. To say "I need to verify this solution" is to say "I do not know whether the solution is correct or not" or "I cannot predict whether the solution is correct or not without reasoning about it first".
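The find-vs-verify gap can be made concrete with subset sum (a standard example, chosen here for illustration): checking a proposed subset is linear time, while finding one may require searching exponentially many candidates.

```typescript
// Verifying a candidate solution is cheap: one pass over the subset.
function verifySubsetSum(nums: number[], subset: number[], target: number): boolean {
  return (
    subset.reduce((a, b) => a + b, 0) === target &&
    subset.every((x) => nums.includes(x))
  );
}

// Finding a solution, naively, means searching all 2^n subsets.
function findSubsetSum(nums: number[], target: number): number[] | null {
  for (let mask = 0; mask < 1 << nums.length; mask++) {
    const subset = nums.filter((_, i) => mask & (1 << i));
    if (subset.reduce((a, b) => a + b, 0) === target) return subset;
  }
  return null;
}
```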
killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
killthebuddha · 8 months ago
One thing he said I think was a profound understatement, and that's that "more reasoning is more unpredictable". I think we should be thinking about reasoning as in some sense exactly the same thing as unpredictability. Or, more specifically, useful reasoning is by definition unpredictable. This framing is important when it comes to, e.g., alignment.
killthebuddha commented on Model Context Protocol   anthropic.com/news/model-... · Posted by u/benocodes
ineedaj0b · 9 months ago
data security is the reason i'd imagine they're letting others host servers

killthebuddha · 9 months ago
The issue isn’t with who’s hosting; it’s that their SDKs don’t clearly integrate with existing HTTP servers, regardless of who’s hosting them. I mean integrate at the source level; of course they could integrate via HTTP calls.
killthebuddha commented on Model Context Protocol   anthropic.com/news/model-... · Posted by u/benocodes
killthebuddha · 9 months ago
I see a good number of comments that seem skeptical or confused about what's going on here or what the value is.

One thing that some people may not realize is that right now there's a MASSIVE amount of effort duplication around developing something that could maybe end up looking like MCP. Everyone building an LLM agent (or pseudo-agent, or whatever) right now is writing a bunch of boilerplate for mapping between message formats, tool specification formats, prompt templating, etc.
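The duplicated boilerplate looks roughly like this everywhere (a sketch with hypothetical message shapes; real provider schemas differ in the details): the same hand-written adapter, hoisting system messages out and re-wrapping content blocks, appears in project after project.

```typescript
// Hypothetical, simplified message shapes for two providers.
type FlatMessage = { role: "system" | "user" | "assistant"; content: string };
type BlockMessage = {
  role: "user" | "assistant";
  content: { type: "text"; text: string }[];
};

// The kind of adapter every agent project ends up writing by hand:
// pull system messages into a separate field, wrap text in content blocks.
function adaptMessages(messages: FlatMessage[]): {
  system: string;
  messages: BlockMessage[];
} {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  const rest = messages
    .filter((m): m is FlatMessage & { role: "user" | "assistant" } => m.role !== "system")
    .map((m) => ({
      role: m.role,
      content: [{ type: "text" as const, text: m.content }],
    }));
  return { system, messages: rest };
}
```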

Now, having said that, I do feel a little bit like there are a few mistakes being made by Anthropic here. The big one to me is that it seems like they've set the scope too big. For example, why are they shipping standalone clients and servers rather than client/server libraries for all the existing and wildly popular ways to fetch and serve HTTP? When I've seen similar mistakes made (e.g. by LangChain), I assume they're targeting brand new developers who don't realize that they just want to make some HTTP calls.

Another thing that I think adds to the confusion is that, while the boilerplate-ish stuff I mentioned above is annoying, what's REALLY annoying and actually hard is generating a series of contexts using variations of similar prompts in response to errors/anomalies/features detected in generated text. IMO this is how I define "prompt engineering" and it's the actual hard problem we have to solve. By naming the protocol the Model Context Protocol, I assumed they were solving prompt engineering problems (maybe by standardizing common prompting techniques like ReAct, CoT, etc).
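The "actually hard" loop described above can be sketched as follows. Everything here is hypothetical scaffolding: `generate` stands in for any LLM call, `detect` for any check on the output (schema validation, a regex, a judge model), and the repair-prompt wording is just one arbitrary variation strategy.

```typescript
type Generate = (prompt: string) => Promise<string>;
// Returns a description of the detected problem, or null if the output is fine.
type Detect = (output: string) => string | null;

// Regenerate the context with a variation of the prompt each time an
// error/anomaly is detected in the generated text.
async function generateWithRepair(
  generate: Generate,
  detect: Detect,
  basePrompt: string,
  maxAttempts = 3,
): Promise<string> {
  let prompt = basePrompt;
  let output = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    output = await generate(prompt);
    const problem = detect(output);
    if (problem === null) return output;
    // Vary the context: fold the detected problem back into the prompt.
    prompt =
      `${basePrompt}\n\nYour previous attempt had a problem: ${problem}\n` +
      `Previous attempt:\n${output}\nPlease try again.`;
  }
  return output; // still bad after maxAttempts; caller decides what to do
}
```

Standardizing this loop (and the prompting techniques that parameterize it) is what the name "Model Context Protocol" suggested to me, rather than a transport for tools and resources.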

killthebuddha commented on Ask HN: Who wants to be hired? (November 2024)    · Posted by u/whoishiring
killthebuddha · 10 months ago
Location: San Diego, CA, USA

Remote: preferred but not necessary

Willing to relocate: no

Technologies: TypeScript, React, Next.js, Node.js, Postgres, Docker, AWS, GitHub CI, Python, Elixir, Golang, Java

Résumé/CV: https://www.ktb.pub/dev/resume.pdf

Email: achilles@ktb.pub

I'm a full-stack developer with wide-ranging technical experience and strong general problem solving skills. Most recently I co-founded a startup, worked on it for a few years, and then took some time off to recharge, be with my family, and work on hobby projects. I'm most interested in, and in my opinion best suited for, the kind of fast-paced small-team environment you typically find in early-stage startups.

u/killthebuddha

Karma: 427 · Cake day: June 21, 2022
About
https://github.com/killthebuddh4 // https://ktb.pub