Readit News
killthebuddha commented on Phoenix: A modern X server written from scratch in Zig   git.dec05eba.com/phoenix/... · Posted by u/snvzz
manytimesaway · 3 days ago
This is a really really bad comment. I've never heard of the framework you're talking about and I thought you were talking about the Firefox prototype.
killthebuddha · 3 days ago
I don't have an opinion on the matter, but it's pretty popular. According to [1], Phoenix "was used extensively over the past year" by 2.4% of respondents.

[1] https://survey.stackoverflow.co/2025/technology

killthebuddha commented on A definition of AGI   arxiv.org/abs/2510.18212... · Posted by u/pegasus
xnx · 2 months ago
I like François Chollet's definition of AGI as a system that can efficiently acquire new skills outside its training data.
killthebuddha · 2 months ago
I really appreciate his iconoclasm right now, but every time I engage with his ideas I come away feeling short-changed. I’m always like “there is no such thing as outside the training data”. What’s inside and what’s outside the training data is at least as ill-defined as “what is AGI”.
killthebuddha commented on Why study programming languages (2022)   people.csail.mit.edu/rach... · Posted by u/bhasi
killthebuddha · 2 months ago
The answer to (2) is, IMO, "to make computing cheaper". It's interesting to me that this is not the obvious, default answer (it may not be the most actionable answer but IMO it should at least be noted as a way to frame discussions). I think we're at the tail end of computing's artisanal, pre-industrial era where researchers and programmers alike have this latent, tacit view of computing as a kind of arcana.
killthebuddha commented on Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix   netflixtechblog.com/uda-u... · Posted by u/Bogdanp
killthebuddha · 6 months ago
I feel like the Netflix tech blog has officially jumped the shark.
killthebuddha commented on OpenAI releases image generation in the API   openai.com/index/image-ge... · Posted by u/themanmaran
PeterStuer · 8 months ago
My number one ask as an almost-two-year OpenAI-in-production user: enable tool use in the API so I can evaluate OpenAI models in agentic environments without jumping through hoops.
killthebuddha commented on Ask HN: Any insider takes on Yann LeCun's push against current architectures?    · Posted by u/vessenes
killthebuddha · 9 months ago
I've always felt like the argument is super flimsy because "of course we can _in theory_ do error correction". I've never seen even a semi-rigorous argument that error correction is _theoretically_ impossible. Do you have a link to somewhere where such an argument is made?
killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
mike_hearn · a year ago
Wouldn't it be the reverse? The word unreasonable is often used as a synonym for volatile, unpredictable, even dangerous. That's because "reason" is viewed as highly predictable. Two people who rationally reason from the same set of known facts would be expected to arrive at similar conclusions.

I think what Ilya is trying to get at here is more like: someone very smart can seem "unpredictable" to someone who is not smart, because the latter can't easily reason at the same speed or quality as the former. It's not that reason itself is unpredictable, it's that if you can reason quickly enough you might reach conclusions nobody saw coming in advance, even if they make sense.

killthebuddha · a year ago
Your second paragraph is basically what I'm saying but with the extension that we only actually care about reasoning when we're in these kinds of asymmetric situations. But the asymmetry isn't about the other reasoner, it's about the problem. By definition we only have to reason through something if we can't predict (don't know) the answer.

I think it's important for us to all understand that if we build a machine to do valuable reasoning, we cannot know a priori what it will tell us or what it will do.

killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
stevenhuang · a year ago
It's not clear any of that follows at all.

Just look at inductive reasoning. Each step builds from a previous step using established facts and basic heuristics to reach a conclusion.

Such a mechanistic process allows for a great deal of "predictability" at each step or estimating likelihood that a solution is overall correct.

In fact I'd go further and posit that perfect reasoning is 100% deterministic and systematic, and instead it's creativity that is unpredictable.

killthebuddha · a year ago
Perfect reasoning, with certain assumptions, is perfectly deterministic, but that does not at all imply that it's predictable. In fact we have extremely strong evidence to the contrary (e.g. we have the halting problem).
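A toy illustration of the point above (my own example, not from the thread): the Collatz map is as deterministic as a process gets, yet no known shortcut predicts how long a trajectory runs short of actually running it.

```python
def collatz_steps(n: int, limit: int = 10_000) -> int:
    """Count iterations of the fully deterministic Collatz map
    (n -> n/2 if even, else 3n + 1) until reaching 1.

    The rule is trivial and deterministic, but there is no known
    way to predict the step count without executing the process:
    determinism does not imply predictability.
    """
    steps = 0
    while n != 1 and steps < limit:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Neighboring inputs can have wildly different trajectories:
print(collatz_steps(26))  # 10
print(collatz_steps(27))  # 111
```

The halting problem makes the stronger formal claim: no general procedure can predict whether an arbitrary deterministic program even terminates.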
killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
bondarchuk · a year ago
Not necessarily true when you think about e.g. finding vs. verifying a solution (in terms of time complexity).
killthebuddha · a year ago
IMO verifying a solution is a great example of how reasoning is unpredictable. To say "I need to verify this solution" is to say "I do not know whether the solution is correct or not" or "I cannot predict whether the solution is correct or not without reasoning about it first".
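The find-versus-verify asymmetry bondarchuk mentions can be sketched with subset sum (my example, chosen because it is the textbook case): checking a proposed certificate is cheap, while finding one in general means searching an exponential space.

```python
from itertools import combinations

def verify(nums, subset, target):
    """Check a proposed certificate: polynomial time."""
    pool = list(nums)
    for x in subset:          # confirm subset is drawn from nums
        if x not in pool:
            return False
        pool.remove(x)
    return sum(subset) == target

def find(nums, target):
    """Find a certificate: brute force over all 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = find(nums, 9)            # exponential search
print(verify(nums, answer, 9))    # cheap check: True
```

If P ≠ NP, that gap is unavoidable: you genuinely cannot predict the answer without doing the work, which is the sense in which verification presupposes unpredictability.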
killthebuddha commented on Ilya Sutskever NeurIPS talk [video]   youtube.com/watch?v=1yvBq... · Posted by u/mfiguiere
killthebuddha · a year ago
One thing he said I think was a profound understatement, and that's that "more reasoning is more unpredictable". I think we should be thinking about reasoning as in some sense exactly the same thing as unpredictability. Or, more specifically, useful reasoning is by definition unpredictable. This framing is important when it comes to, e.g., alignment.

u/killthebuddha

Karma: 432 · Cake day: June 21, 2022
About
https://github.com/killthebuddh4 // https://ktb.pub