curious_cat_163 commented on My boss fired me over WhatsApp while he was on vacation in Honolulu   ginoz.bearblog.dev/my-bos... · Posted by u/enemyz0r
johnfn · 8 days ago
I empathize with the author of this post, but is this really the type of story we want on HN? Does this inspire intellectual curiosity? I just feel a directionless anger after reading this.
curious_cat_163 · 8 days ago
Give it time, it should [1] go down to where it belongs. :-)

[1] https://news.ycombinator.com/item?id=1781013

curious_cat_163 commented on NSF and Nvidia award Ai2 $152M to support building an open AI ecosystem   allenai.org/blog/nsf-nvid... · Posted by u/_delirium
yalogin · 10 days ago
After reading through the article I couldn't understand what an open AI ecosystem is. Are they talking about hardware or software? If it's software we have opensource models, are they going to create open source vertical integrations?
curious_cat_163 · 10 days ago
We don't really have very many open source models. We have "open weights" models. Ai2 is one of the very few labs that actually makes its entire training/inference code AND datasets AND training run details public. So this investment is a welcome step.

Congratulations to the team at Ai2!

curious_cat_163 commented on Genie 3: A new frontier for world models   deepmind.google/discover/... · Posted by u/bradleyg223
timeattack · 19 days ago
Advances in generative AI are making me progressively more and more depressed.

Creativity is being taken from us at an exponential rate. And I don't buy the argument from people who say they are excited to live in this age. I could accept that if the technology stopped at its current state and remained just a tool for our creative endeavours, but that doesn't seem to be the endgame here. Instead it aims to be a complete replacement.

Granted, you can say "you can still play musical instruments/paint pictures/etc for yourself", but I don't think there was ever a period when creative works were created just for their own sake rather than for sharing them with others en masse.

So what is the final state here for us? A return to menial not-yet-automated work? And when that is eventually automated, what's left? Plug our brains into personalized autogenerated worlds that are tailored to trigger the neuronal circuitry for producing ever-increasing dopamine levels and finally burn our brains out (which is arguably already happening with tiktok-style leisure)? And how are you supposed to pay for that, if all work is automated? How is the economics of that supposed to work?

Looks like a pretty decent explanation of the Fermi paradox. No one would know how the technology works, there are no easily available resources left to make use of simpler tech, and the planet is littered to the point of no return.

How to even find the value in living given all of that?

curious_cat_163 · 19 days ago
> So what is the final state here for us? A return to menial not-yet-automated work? And when that is eventually automated, what's left? Plug our brains into personalized autogenerated worlds that are tailored to trigger the neuronal circuitry for producing ever-increasing dopamine levels and finally burn our brains out (which is arguably already happening with tiktok-style leisure)? And how are you supposed to pay for that, if all work is automated? How is the economics of that supposed to work?

Wow. What a picture! Here's an optimistic take, fwiw: Whenever we have had a paradigm shift in our ability to process information, we have grappled with it by shifting to higher-level tasks.

We tend to "invent" new work as we grapple with the technology. The job of a UX designer did not exist in the 1970s (at least not as a separate category employing thousands of people; now I want to be careful, this is HN, so there might be someone on here who was doing that in the 70s!).

And there is capitalism -- if everyone has access to the best-in-class model, then no one has a true edge in a competition. That is not a state that capitalism likes. The economics _will_ ultimately kick in. We just need this recent S-curve to settle for a bit.

curious_cat_163 commented on Intel CEO Letter to Employees   morethanmoore.substack.co... · Posted by u/fancy_pantser
enraged_camel · a month ago
Intel is circling the drain. At this point I'm of the opinion that the sooner it dies the better. Its death will probably result in a lot of spin-offs and startups, which might be what the US chip industry needs.
curious_cat_163 · a month ago
That's an interesting way to look at it.

Aren't layoffs a version of that? Are we seeing any evidence that folks who have been let go from Intel have gone on to found spin-offs and startups?

I know at least one person who went to work at Nvidia from Intel but that is neither of those things.

curious_cat_163 commented on Intel CEO Letter to Employees   morethanmoore.substack.co... · Posted by u/fancy_pantser
samrus · a month ago
Agentic AI? How exactly does a chip manufacturer focus on agentic AI? That's software. How does riding that hype bubble help them make better chips?
curious_cat_163 · a month ago
Oh, I don't know. Maybe build chips that do things 10x more efficiently and sell them at a lower cost to compete?

It _is_ a hype bubble, but it is also an S-curve. Intel has missed the AI boat so far; if they are trying to catch up, I would encourage them to try. Building marginally better x86 chips might not cut it anymore.

curious_cat_163 commented on The rise of AI as a threat to the S&P 500 [pdf]   autonomy.work/wp-content/... · Posted by u/seangrvs
simonw · a month ago
They used LLMs to look at the risk disclosures in recent 10-K filings, and found that 3/4 of S&P 500 companies mentioned additional AI risks compared to their own previous 10-K - AI-driven cyber attacks, deepfakes, energy demands, regulation (EU AI Act) etc.
curious_cat_163 · a month ago
And, so what?
curious_cat_163 commented on Reflections on OpenAI   calv.info/openai-reflecti... · Posted by u/calvinfo
hinterlands · a month ago
It is fairly rare to see an ex-employee put a positive spin on their work experience.

I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.

This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

curious_cat_163 · a month ago
> It is fairly rare to see an ex-employee put a positive spin on their work experience.

I liked my jobs and bosses!

curious_cat_163 commented on Ask HN: Is it time to fork HN into AI/LLM and "Everything else/other?"    · Posted by u/bookofjoe
toomuchtodo · a month ago
Indeed! Previous:

https://news.ycombinator.com/item?id=44261825

I suppose an extension is the answer, classifying and customizing the user’s view accordingly with a pluggable LLM config.

curious_cat_163 · a month ago
Maybe you'll want to try Techne: https://techne.app.
curious_cat_163 commented on The upcoming GPT-3 moment for RL   mechanize.work/blog/the-u... · Posted by u/jxmorris12
curious_cat_163 · a month ago
> Rather than fine-tuning models on a small number of environments, we expect the field will shift toward massive-scale training across thousands of diverse environments.

This is a great hypothesis for you to test one way or the other.

> Doing this effectively will produce RL models with strong few-shot, task-agnostic abilities capable of quickly adapting to entirely new tasks.

I am not sure if I buy that, frankly. Even if you were to develop radically efficient means to create "effective and comprehensive" test suites that power replication training, it is not at all a given that it will translate to entirely new tasks. Yes, there is the bitter lesson and all that but we don't know if this is _the_ right hill to climb. Again, at best, this is a hypothesis.

> But achieving this will require training environments at a scale and diversity that dwarf anything currently available.

Yes. You should try it. Let us know if it works. All the best!

curious_cat_163 commented on Measuring the impact of AI on experienced open-source developer productivity   metr.org/blog/2025-07-10-... · Posted by u/dheerajvs
noisy_boy · a month ago
It is 80/20 again - it gets you 80% of the way in 20% of the time and then you spend 80% of the time to get the rest of the 20% done. And since it always feels like it is almost there, sunk-cost fallacy comes into play as well and you just don't want to give up.

An approach I tried recently is to use it as a friction remover instead of a solution provider. I do the programming but use it to remove pebbles, such as that small bit of syntax I forgot, basically to keep up the velocity. However, I don't look at the wholesale code it offers. Keeping the active thinking cap on results in code I actually understand while avoiding skill atrophy.

curious_cat_163 · a month ago
100% agreed. It is all about removing friction for me. Case in point: I would not have touched React in my previous career without the assist that LLMs now provide. The barrier to entry just _felt_ too large, and one always has the instinct to stick with what one knows.

However, it is _fun_ to get over the barrier when it's just chatting with a model to get a quick tutorial and produce working code for a prototype (for your specific needs), applying the understanding you just developed. The alternative (without LLMs) is to first do the groundwork of learning via tutorials in text/video form and then do the cognitive mapping of applying that learning to one's prototype. I would make a lot of mistakes on this path that expert/intermediate React developers don't make.

One could argue that it shortcuts some learning and that perhaps the old way results in better retention. But our field changes so fast... and when it remains static for too long, projects die. I think of all this as an accelerant for adopting new ways of thinking about software and diffusing them more quickly across the developer population globally. Code is always fungible, anyway. The job is about all the other things that one needs to do besides coding.

u/curious_cat_163

Karma: 379 · Cake day: February 28, 2021
About
meet.hn/city/41.8755616,-87.6244212/Chicago

Socials: https://bsky.app/profile/curious-cat-163.bsky.social
