Readit News
bakuninsbart commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
jqpabc123 · 3 days ago
He wants educators to instead teach “how do you think and how do you decompose problems”

Amen! I attend this same church.

My favorite professor in engineering school always gave open book tests.

In the real world of work, everyone has full access to all the available data and information.

Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.

Doing this is called "engineering". And this is what this professor taught.

bakuninsbart · 3 days ago
It is tough, though. I'd like to think I learned how to think analytically and critically. But thinking is hard, and oftentimes I catch myself trying to outsource my thinking almost subconsciously. I'll read an article on HN and think, "Let's go to the comment section and see which opinions there are to choose from," or one of my first instincts after encountering a problem is googling, and now asking an LLM.

Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the younger generation lack the experience to distinguish good writing from LLM drivel.

bakuninsbart commented on Ask HN: With all the AI hype, how are software engineers feeling?    · Posted by u/cpt100
ath3nd · 13 days ago
> No, it's just logical, LLM is a useful tool

How open are you to the possibility that it's the other way around? Because the study suggests that it's actually junior code monkeys who benefit from LLMs, while experienced software engineers instead see a decline in their productivity.

At least that's what the only available study so far shows.

That's corroborated by my experience mentoring juniors: the more they struggle with basic things like syntax or expressing their thoughts clearly in code, the more benefit they get from using LLM tools like Claude.

Once they go mid-level and above, the LLMs are a detriment to them. Do you currently get a big benefit from LLMs? Maybe you are earlier in your career?

bakuninsbart · 13 days ago
I think you are making a couple of very good points but getting bogged down in the wrong framing of the discussion. Let me rephrase what I think you are saying:

Once you are very comfortable in a domain, it is detrimental to have to wrangle a junior dev with low IQ, way too much confidence, but encyclopedic knowledge of everything, instead of just doing it yourself.

The dichotomy of Junior vs. Senior is a bit misleading here: every junior is uncomfortable in the domain they are working in, but a senior probably isn't comfortable in all domains either. For example, many people I know with 10+ years of SE experience aren't very good with databases and data engineering, which is becoming an increasingly large part of the job. For someone who has worked 10+ years on Java backends and is now attempting to write Python data pipelines, coding agents might be a useful tool to bridge that gap.

The other thing is creation vs. critique. I often have my code, writing, and planning reviewed by Claude or Gemini, because once I have created something, I know it very well, and I can very quickly go through 20 points of criticism/recommendations/tips and pick out the relevant ones. And honestly, that has been super helpful. Used that way around, Claude has caught a number of bugs, taught me some new tricks, and made me aware of some interesting tech.

bakuninsbart commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
machiaweliczny · 17 days ago
When to short NVIDIA? I guess when the Chinese get their card production going
bakuninsbart · 17 days ago
I think one thing to look out for is "deliberately" slow models. We currently use basically all models as if we needed them in an instant feedback loop, but many of these applications do not have to run that fast.

To tell a second-hand anecdote: a colleague told me how his professor friend was running statistical models overnight because the code was extremely unoptimized and needed 6+ hours to compute. He helped streamline the code and took it down to 30 minutes, which meant the professor could run it before breakfast instead.

We are completely fine with giving a task to a junior dev for a couple of days and seeing what happens. Right now we love the quick feedback of running Claude Max for a hundred bucks, but if we could run it for a buck overnight? That would be quite fine by me as well.

bakuninsbart commented on Genie 3: A new frontier for world models   deepmind.google/discover/... · Posted by u/bradleyg223
ducktective · 19 days ago
Wasn't the model winning gold at the IMO the result of a breakthrough? I doubt a stochastic parrot can solve math at IMO level...
bakuninsbart · 19 days ago
Why wouldn't it? I have yet to hear one convincing argument for how our brain isn't working as a function of probable next best actions. When you look at how amoebas work, then at animals that are somewhere between them and us in intelligence, and then at us, it is a very similar kind of progression to what we see with current LLMs: from almost no model of the world to a pretty solid one.
bakuninsbart commented on Genie 3: A new frontier for world models   deepmind.google/discover/... · Posted by u/bradleyg223
gavinray · 19 days ago

> What's with this insane desire for anthropomorphism?
Devil's advocate: Making the assumption that consciousness is uniquely human, and that humans are "special" is just as ludicrous.

Whether a computational medium is carbon-based or silicon-based seems irrelevant. Call it "carbon-chauvinism".

bakuninsbart · 19 days ago
That's not even devil's advocacy: many other animals clearly have consciousness, at least if we're not solipsists. There have been many very dangerous precedents in medicine where people have been declared "brain dead" only to wake up and remember.

Since consciousness is closely linked to being a moral patient, it is all the more important to err on the side of caution when denying qualia to other beings.

bakuninsbart commented on AI promised efficiency. Instead, it's making us work harder   afterburnout.co/p/ai-prom... · Posted by u/mooreds
SoftTalker · 20 days ago
As someone who doesn't use AI for writing code, why can't you just ask Claude to write up an explanation of each change for code review? Then at least you can look at whether the explanation seems sane.
bakuninsbart · 20 days ago
I've been experimenting with Claude, and I feel like it works quite well if I micromanage it. I will ask it: "Ok, but why this way and not the simpler way?" And it will go "You are absolutely right" and implement the changes exactly how I want them. At least I think it does. Repeatedly, I've looked at a PR I created (and reviewed myself, as I'm not using it in production) and found some pretty useless stuff mixed into otherwise solid PRs. These things are so easily missed.

That said, the models, or to be more precise, the tools surrounding them and the craft of interacting with them, are still improving at a pace where I now believe we will reach a point where "hand-crafted" code is the exception within a matter of years.

bakuninsbart commented on Persona vectors: Monitoring and controlling character traits in language models   anthropic.com/research/pe... · Posted by u/itchyjunk
andsoitis · 21 days ago
> Other personality changes are subtler but still unsettling, like when models start sucking up to users or making up facts.

My understanding is that the former (sucking up) is a personality trait, substantially influenced by the desire to drive engagement. The latter (making up facts), I do not think is correct to ascribe to a personality trait (like being a compulsive liar); instead, it is because the fitness function of LLMs drives them to produce some answer: they do not know what they're talking about, but produce strings of text based on statistics.
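That "strings of text based on statistics" point can be sketched in a few lines. Everything below (the probability table, the `continue_text` function) is a toy illustration I made up, not real LLM internals: the "model" has no concept of truth, it just samples whichever continuation is statistically likely, so a fictional continuation is simply another token with nonzero probability.

```python
import random

# Hypothetical toy "language model": a lookup table mapping the last
# two tokens to a distribution over next tokens. Made up for illustration.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Atlantis": 0.4},  # fiction is just another token
}

def continue_text(tokens, steps, seed=0):
    """Greedily extend `tokens` by sampling from the toy distribution."""
    rng = random.Random(seed)
    out = list(tokens)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(out[-2:]))
        if dist is None:  # unknown context: the toy model just stops
            break
        words, weights = zip(*dist.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(continue_text(["the", "capital"], 2))
```

Nothing in the sampling step checks whether "Atlantis" exists; the model is rewarded for producing *an* answer, which is the point being made above.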

bakuninsbart · 21 days ago
Regarding truth telling, there seems to be some evidence that LLMs at least sometimes "know" when they are lying:

https://arxiv.org/abs/2310.06824

bakuninsbart commented on Palantir gets $10B contract from U.S. Army   washingtonpost.com/techno... · Posted by u/aspenmayer
npteljes · 23 days ago
I think it's not about the truth in that message, but rather how the message is delivered, and how the kernel of truth is planted into what context.

For example, the same message could be told by referring to respect instead of fear.

"I want less war. You only stop war by having the best technology so much that earns the respect of your adversaries. If they don't respect you, if they don’t respect the might that your army can summon, you. Instead of going along with you, they will attack you at the next opportunity"

bakuninsbart · 23 days ago
The issue is that by introducing hyperbole, the meaning changes completely. Take the two statements:

1. I want peace.

2. a) Therefore I need to be strong enough to deter any attack.

2. b) Therefore I need to be so strong that all my enemies fear me.

2. a) is sound. Nobody attacks if they believe the cost is higher than the benefit. ("Believe" is doing heavy lifting here; most wars start when countries' beliefs about cost and benefit are misaligned.)

2. b) is incompatible with 1. Either you believe that a stronger party does not necessarily attack weaker parties, in which case peace could also be maintained without supremacy, or you believe supremacy leads to wars, but then your own pursuit of supremacy cannot be in the name of peace.

Unless, of course, you're a race supremacist, who believes you're so much wiser and more moral than anyone else that only you can be trusted with unchecked power. An idiotic and immoral position to take.

bakuninsbart commented on Figma will IPO on July 31   figma.com/blog/ipo-pricin... · Posted by u/nevir
adastra22 · 24 days ago
It’s a terrible analysis that ignores that LLMs destroy most of the value proposition of Figma, and this is a last chance to find a bigger bag holder.
bakuninsbart · 23 days ago
On the contrary, Figma's value proposition is increased by LLMs. Current coding assistants are like idiot-savant junior devs: they have relatively low reasoning capabilities, way too much courage, lack taste, and need to be micromanaged to be successful.

But they can be successful if you spell out the exact specifications. And what is Figma if not an exact specification of the design you want? Within a couple of years, the frontend developer market might crash pretty hard.

bakuninsbart commented on Programmers aren’t so humble anymore, maybe because nobody codes in Perl   wired.com/story/programme... · Posted by u/Timothee
kqr · 23 days ago
A lot of people in this thread speculate that Raku (formerly "Perl 6") killed Perl. But I have yet to see convincing first-hand accounts confirming that.

I certainly don't believe it. Everyone I talked to at the time who worked with Perl knew it would not go away: humanity had chained too much of the infrastructure of the internet to it. Someone would have to maintain it for many years to come, even if Larry's new experiment became a wild success. (Already back then people seemed skeptical of the experiment and hung back with Perl 5 waiting to see what came out of it before paying too much attention.)

I still struggle to understand why Perl went out of favour[1] but I think what another commenter wrote here might come close: for Unixy folks who know shell, C, awk, sed, Vim, etc. Perl is a natural extension. Then came a generation of programmers brought up on ... I don't know, Visual Basic and Java? and these were more attracted to something like Python, which then became popular enough to become the next generation's first language.

[1]: As someone who knows me might understand: https://entropicthoughts.com/why-perl

bakuninsbart · 23 days ago
There were many daggers that made the Perl community bleed:

1. Enterprise Development

Java et al. led to a generation of developers working further from the kernel and the shell. Professionalization of the field led to increased specialization, and most developers had less to do with the deployment and management of running software.

Tools also got much better, requiring less glue and shifting the glue layer to configs or platform specific languages.

Later on, DevOps came for the SysAdmins, and there's just much less space for Perl in the cloud.

2. The rise of Python

I would put this down mostly to universities. Perl is very expressive by design; in Python, there's supposedly only "one right way to do it." Imagine you're a TA grading a hundred code submissions: in Python, everyone probably does it in one of three ways; in Perl, the possibilities are endless. Perl is a language for writing, not reading.

3. Cybersecurity became a thing

Again, this goes back to readability and testability. Security requirements started becoming a thing, and Perl was not designed with them in mind.

4. The Web was lost to Rails, PHP, then SPAs

I'm less clear on the why of that, but Perl just wasn't able to compete against newer web technologies.
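The "one right way" claim in item 2 can be made concrete with a deliberately mundane toy task. In Python, a hundred student submissions for "sum these numbers" would almost all land on one of a handful of variants like these (the task and variable names are my own example, not from any real assignment):

```python
from functools import reduce

numbers = [3, 1, 4, 1, 5]

# Variant 1: explicit loop, the beginner's way
total = 0
for n in numbers:
    total += n

# Variant 2: the builtin, the "one obvious way"
total2 = sum(numbers)

# Variant 3: functional style, about as exotic as it gets
total3 = reduce(lambda a, b: a + b, numbers, 0)

assert total == total2 == total3 == 14
```

A TA can pattern-match these in seconds. Perl's TIMTOWTDI culture (`map`, implicit `$_`, one-liner idioms, and so on) means the equivalent pile of Perl submissions has no such small set of shapes to check against.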

u/bakuninsbart

Karma: 2176 · Cake day: August 16, 2019
About
Aware of many things but proficient in none.