Readit News
adpirz commented on Anthropic acquires Bun   bun.com/blog/bun-joins-an... · Posted by u/ryanvogel
Simran-B · 3 months ago
Classic - brand new blog post:

> We’re hiring engineers.

Careers page:

> Sorry, no job openings at the moment.

adpirz · 3 months ago
It's the Anthropic careers page that you're likely looking for now:

https://www.anthropic.com/jobs?team=4050633008

adpirz commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
simonw · 9 months ago
> The counter-argument as I see it is that going from “not using LLM tooling” to “just as competent with LLM tooling” is…maybe a day? And lessening as the tools evolve.

Very much disagree with that. Getting productive and competent with LLM tooling takes months. I've been deeply invested in this world for a couple of years now and I still feel like I'm only scraping the surface of what's possible with these tools.

adpirz · 9 months ago
Plug for Simon's (very well written) longer form article about this topic: https://simonwillison.net/2025/Mar/11/using-llms-for-code/
adpirz commented on Zod 4   zod.dev/v4... · Posted by u/bpierre
johnfn · 10 months ago
I'm curious if anyone here can answer a question I've wondered about for a long time. I've heard Zod might be in the right ballpark, but from reading the documentation, I'm not sure how I would go about it.

Say I have a type returned by the server that might have more sophisticated types than the server API can represent. For instance, api/:postid/author returns a User, but it could either be a normal User or an anonymous User, in which case fields like `username`, `location`, etc come back null. So in this case I might want to use a discriminated union to represent my User object. And other objects coming back from other endpoints might also need some type alterations done to them as well. For instance, a User might sometimes have Post[] on them, and if the Post is from a moderator, it might have special attributes, etc - another discriminated union.

In the past, I've written functions like normalizeUser() and normalizePost() to solve this, but this quickly becomes really messy. Since different endpoints return different subsets of the User/Post model, I would end up writing like 5 different versions of normalizePost for each endpoint, which seems like a mess.

How do people solve this problem?

adpirz · 10 months ago
It's hard to unpack without knowing more about the use case, but adding discriminant properties (e.g. "user_type") to all the types in the union can make it easier to handle the general and specific case.

E.g.

if (user.user_type === 'authenticated') {
  // do something with user.name because the type system knows we have that now
}
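A minimal sketch of that suggestion in plain TypeScript (the field names here are invented for illustration, not from the thread; Zod's z.discriminatedUnion builds the same shape with runtime validation on top):

```typescript
// Each variant carries a literal discriminant property so the
// compiler can narrow the union. Illustrative types only.
type AuthenticatedUser = {
  user_type: "authenticated";
  username: string;
  location: string;
};

type AnonymousUser = {
  user_type: "anonymous";
  username: null;
  location: null;
};

type User = AuthenticatedUser | AnonymousUser;

function displayName(user: User): string {
  if (user.user_type === "authenticated") {
    // Narrowed to AuthenticatedUser: username is string here, not string | null.
    return user.username;
  }
  return "anonymous";
}
```

Because every endpoint's payload is tagged the same way, one function handles all the variants instead of five per-endpoint normalizers.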

adpirz commented on A Research Preview of Codex   openai.com/index/introduc... · Posted by u/meetpateltech
orliesaurus · 10 months ago
Why hasn't GitHub released this? Why is it OpenAI releasing this?!
adpirz · 10 months ago
It's on their roadmap: https://github.blog/news-insights/product-news/github-copilo...

But they aren't moving nearly as fast as OpenAI. And it remains to be seen if first mover will mean anything.

adpirz commented on Acquisitions, consolidation, and innovation in AI   frontierai.substack.com/p... · Posted by u/pfarago
skeeter2020 · a year ago
I agree with this, but I think it's still an open question whether anyone can build a successful product on top of the tech. There will likely be some, but it feels eerily similar to the dot com boom (and then bust), when the vast majority of new products built on top of this (internet) technology didn't deliver and didn't survive. Most AI products so far are fun toys or interesting proofs, and mediocre when evaluated against other options. They'll need to be applied to a much smaller set of problems (which doesn't support the current level of investment) or find some new miracle set of problems where they change the rules.

Businesses are definitely rearranging themselves structurally around AI - at least to try and get the AI valuation multiplier - and executives have levels of FOMO I've never seen before. I report to a CTO, and the combination of 100,000-foot hype with down-in-the-weeds focus on the protocol du jour (with nothing in between that looks like a strategy) is astounding. I just find it exhausting.

adpirz · a year ago
The dot com boom is an apt analogy: the internet took off, we understood it had potential, but the innovation didn't all come in the first wave. It took time for the internet to bake, and then we saw another boom with the advent of mobile phones, higher bandwidth, and more compute per user.

It is still simply too early to tell exactly what the new steady state is, but I can tell you that where we're at _today_ is already a massive paradigm shift from what my day-to-day looked like 3 years ago, at least as a SWE.

There will be lots of things thrown at the wall and the things that stick will have a big impact.

adpirz commented on Gemini 2.5   blog.google/technology/go... · Posted by u/meetpateltech
malisper · a year ago
I've been using a math puzzle as a way to benchmark the different models. The math puzzle took me ~3 days to solve with a computer. A math major I know took about a day to solve it by hand.

Gemini 2.5 is the first model I tested that was able to solve it and it one-shotted it. I think it's not an exaggeration to say LLMs are now better than 95+% of the population at mathematical reasoning.

For those curious, the riddle is: There are three people in a circle. Each person has a positive integer floating above their heads, such that each person can see the other two numbers but not his own. The sum of two of the numbers is equal to the third. The first person is asked for his number, and he says that he doesn't know. The second person is asked for his number, and he says that he doesn't know. The third person is asked for his number, and he says that he doesn't know. Then, the first person is asked for his number again, and he says: 65. What is the product of the three numbers?
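Since the puzzle is mechanically checkable, here is a brute-force sketch (my own code, not from the thread): enumerate candidate worlds up to a bound and replay the four announcements as common-knowledge filters. The bound M and the M/8 reporting margin are my assumptions: each hypothetical-reasoning step at most doubles the numbers involved, so worlds whose numbers stay under M/8 are evaluated exactly.

```typescript
// A "world" is the triple of numbers above persons 1, 2, 3
// (one number is the sum of the other two).
type World = [number, number, number];

const M = 520; // enumeration bound (assumption); worlds with numbers <= M / 8 are exact
const worlds: World[] = [];
for (let x = 1; x <= M; x++) {
  for (let y = 1; y <= M; y++) {
    // Place the sum x + y in each of the three positions.
    worlds.push([x + y, x, y], [x, x + y, y], [x, y, x + y]);
  }
}

// Key a world by the two numbers person p can see.
const seenBy = (w: World, p: number) => w.filter((_, i) => i !== p).join(",");

// For each visible pair, collect the own-number candidates still consistent with ws.
function candidates(ws: World[], p: number): Map<string, Set<number>> {
  const byPair = new Map<string, Set<number>>();
  for (const w of ws) {
    const k = seenBy(w, p);
    let s = byPair.get(k);
    if (!s) byPair.set(k, (s = new Set()));
    s.add(w[p]);
  }
  return byPair;
}

// "Person p says they don't know": drop every world where p could deduce their number.
function saysDontKnow(ws: World[], p: number): World[] {
  const byPair = candidates(ws, p);
  return ws.filter((w) => byPair.get(seenBy(w, p))!.size > 1);
}

let remaining = worlds;
for (const p of [0, 1, 2]) remaining = saysDontKnow(remaining, p); // the three "I don't know"s

// Fourth announcement: person 1 now knows, and the number is 65. Report only
// worlds well inside the bound, where truncation can't distort the reasoning.
const finalPairs = candidates(remaining, 0);
const solutions = remaining.filter(
  (w) => w[0] === 65 && Math.max(...w) <= M / 8 && finalPairs.get(seenBy(w, 0))!.size === 1
);
const products = solutions.map((w) => w[0] * w[1] * w[2]);
console.log(solutions, products);
```

Run as written, the search recovers the classic answer to this puzzle: the numbers 65, 26, 39, with product 65,910.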

adpirz · a year ago
Interactive playground for the puzzle: https://claude.site/artifacts/832e77d7-5f46-477c-a411-bdad10...

(All state is stored in localStorage so you can come back to it :) ).
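The persistence pattern is roughly this (a sketch; the key name, state shape, and Store interface are assumptions — in the browser you would pass window.localStorage directly):

```typescript
// Store mirrors the subset of the localStorage API we need,
// so the functions also work with an in-memory stand-in.
interface Store {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Hypothetical state shape for a puzzle artifact.
type PuzzleState = { guesses: number[]; solved: boolean };

const KEY = "puzzle-state";

function saveState(store: Store, state: PuzzleState): void {
  store.setItem(KEY, JSON.stringify(state));
}

function loadState(store: Store): PuzzleState {
  const raw = store.getItem(KEY);
  // Fall back to a fresh state when nothing has been saved yet.
  return raw ? (JSON.parse(raw) as PuzzleState) : { guesses: [], solved: false };
}

// In-memory stand-in so the sketch runs outside a browser.
const memory = new Map<string, string>();
const memStore: Store = {
  getItem: (k) => memory.get(k) ?? null,
  setItem: (k, v) => { memory.set(k, v); },
};
saveState(memStore, { guesses: [65], solved: true });
```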

adpirz commented on Generative AI hype peaking?   bjornwestergard.com/gener... · Posted by u/bwestergard
adpirz · a year ago
Having used the latest models regularly, it does feel like we're at diminishing returns in terms of raw performance from GenAI / LLMs.

...but now it'll be exciting to let them bake. We need some time to really explore what we can do with them. We're still mostly operating in back-and-forth chats; I think there's going to be lots of experimentation with different modalities of interaction here.

It's like we've just gotten past the `Pets.com` era of GenAI and are getting ready to transition to the app era.

adpirz commented on A few words about FiveThirtyEight   natesilver.net/p/a-few-wo... · Posted by u/JumpCrisscross
iuyhtgbd · a year ago
I don't know if I'm just dense but I continue to have no idea what the implication is.
adpirz · a year ago
Both "get" and "eat" would be followed by an expletive.

OP doesn't like Nate.

adpirz commented on AI-designed chips are so weird that 'humans cannot understand them'   livescience.com/technolog... · Posted by u/anonymousiam
adpirz · a year ago
I’ve never been able to put it into words, but when we think about engineering in almost any discipline, a significant amount of effort goes into making things buildable by different groups of people. We modularize components or code so that different groups can specialize in isolated segments.

I always imagined if you could have some super mind build an entire complex system, it would find better solutions that got around limitations introduced by the need to make engineering accessible to humans.

adpirz commented on The Generative AI Con   wheresyoured.at/longcon/... · Posted by u/nimbleplum40
simonw · a year ago
> When you put aside the hype and anecdotes, generative AI has languished in the same place, even in my kindest estimations, for several months, though it's really been years. The one "big thing" that they've been able to do is to use "reasoning" to make the Large Language Models "think" [...]

This is missing the most interesting changes in generative AI space over the last 18 months:

- Multi-modal: LLMs can consume images, audio and (to an extent) video now. This is a huge improvement on the text-only models of 2023 - it opens up so many new applications for this tech. I use both image and audio models (ChatGPT Advanced Voice) on a daily basis.

- Context lengths. GPT-4 could handle 8,000 tokens. Today's leading models are almost all 100,000+ and the largest handle 1 or 2 million tokens. Again, this makes them far more useful.

- Cost. The good models today are 100x cheaper than the GPT-3 era models and massively more capable.

adpirz · a year ago
The "iPhone moment" gets used a lot, but maybe it's more analogous to the early internet: we have the basics, but we're still learning what we can do with this new protocol and building the infrastructure around it to be truly useful. And as you've pointed out, our "bandwidth" is increasing exponentially at the same time.

If nothing else, my workflows as a software developer have changed significantly in these past two years with just what's available today, and there is so much work going into making that workflow far more productive.

u/adpirz

Karma: 1071 · Cake day: May 2, 2012