Readit News
kevin42 commented on Micron to exit consumer memory business amid global supply shortage   reuters.com/business/micr... · Posted by u/djkoolaide
kevin42 · 11 days ago
My wife works for a small distributor of memory (not modules, just the chips), and their manufacturers won't even provide prices or take backorders.

I understand why they did it in the current market, but Micron's exit is going to cause retail prices to go even higher.

I used pcpartsbuilder to spec out a machine I was thinking of building a few months ago, and the DRAM that was $109 for 32GB is now $399. It's crazy.

kevin42 commented on The Death of Arduino?   linkedin.com/posts/adafru... · Posted by u/ChuckMcM
lemonwaterlime · 25 days ago
I was never a fan of the Maker Movement. While it did get people to tinker, there was always this massive gap between lighting up an LED and using EEPROM, JTAG debugging, interrupts, and even designing some of the more intricate circuits needed to pull off intermediate projects. I found that there were people who knew how to do that stuff, and the rest were just trying to get by.

The last time I used Arduino, I ended up just coding the bare metal out of necessity for the things I was trying to do. Some functionality of the chips was literally not accessible unless you broke out of the sandbox. But then I wondered why we didn't just get people set up without shielding them so much from what it actually takes to do embedded development. Ultimately, the failure of the Maker Movement to me is that there is no upgrade path. You start blinking LEDs and then what? Thus, lots of people end up being eternal beginners, which I don't think is helpful.
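For a concrete picture of what "breaking out of the sandbox" means, here is a minimal bare-metal sketch (my illustration, assuming a classic ATmega328P board like the Uno): the blink example written against the port registers that digitalWrite() hides.

```cpp
// Bare-metal blink on an ATmega328P (Arduino Uno): no Arduino core,
// just direct register access. PB5 is the pin wired to the Uno's LED.
#define F_CPU 16000000UL   // 16 MHz clock, needed by _delay_ms()
#include <avr/io.h>
#include <util/delay.h>

int main() {
    DDRB |= (1 << DDB5);        // configure PB5 as an output
    for (;;) {
        PORTB ^= (1 << PORTB5); // toggle the LED with a single XOR on the port
        _delay_ms(500);
    }
}
```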

kevin42 · 25 days ago
That's a pretty condescending take.

To some extent I agree that the upgrade path is lacking. I recently helped a friend move out of the .ino file model into building regular C++ applications because his design was getting pretty complicated. Once he realized that he knew more C++ than he thought he did, it was a game changer for him.
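To illustrate the kind of move I mean (a sketch with hypothetical names, nothing more): the logic leaves the .ino file and becomes an ordinary C++ class, and the sketch shrinks to glue.

```cpp
// Blinker.h: plain C++, no .ino magic, so it can be unit-tested and reused.
#pragma once
#include <Arduino.h>

class Blinker {
public:
    Blinker(uint8_t pin, unsigned long intervalMs)
        : pin_(pin), intervalMs_(intervalMs) {}

    void begin() { pinMode(pin_, OUTPUT); }

    // Non-blocking; call from loop() as often as you like.
    void update() {
        const unsigned long now = millis();
        if (now - lastToggle_ >= intervalMs_) {
            lastToggle_ = now;
            on_ = !on_;
            digitalWrite(pin_, on_ ? HIGH : LOW);
        }
    }

private:
    uint8_t pin_;
    unsigned long intervalMs_;
    unsigned long lastToggle_ = 0;
    bool on_ = false;
};

// The .ino (or main.cpp under PlatformIO) reduces to glue:
// Blinker blinker(LED_BUILTIN, 500);
// void setup() { blinker.begin(); }
// void loop()  { blinker.update(); }
```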

At the same time, people have done some pretty amazing stuff using the Arduino platform without knowing how to use the things you mention. The people you call "eternal beginners" have accomplished a lot. James Bruton does some pretty impressive robotics work using Arduino.

kevin42 commented on The AI coding trap   chrisloy.dev/post/2025/09... · Posted by u/chrisloy
abrichr · 3 months ago
> While the LLMs get to blast through all the fun, easy work at lightning speed, we are then left with all the thankless tasks: testing to ensure existing functionality isn’t broken, clearing out duplicated code, writing documentation, handling deployment and infrastructure, etc.

I’ve found LLMs just as useful for the "thankless" layers (e.g. tests, docs, deployment).

The real failure mode is letting AI flood the repo with half-baked abstractions without a playbook. It's helpful to have the model review the existing code and plan out the approach before writing any new code.

The leverage may be in using LLMs more systematically across the lifecycle, including the grunt work the author says remains human-only.

kevin42 · 3 months ago
That's my experience as well. The LLM is great for what I consider scaffolding. I can describe the architecture I want, put some guidelines in CLAUDE.md, then let it write a bunch of stubbed-out code. It saves me a ton of time typing.

It's also great for things that aren't creative, like 'implement a unit test framework using google test and cmake, but don't actually write the tests yet'. That type of thing saves me hours and hours. It's something I rarely do, so it's not like I can just start editing my CMake and test files from memory; I'd be looking up documentation and writing a lot of code that's necessary but takes a lot of time.
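The output of a prompt like that is mostly boilerplate along these lines (a hypothetical sketch; the file and test names are made up), plus the CMake wiring to fetch and link GoogleTest:

```cpp
// tests/test_scaffold.cpp: stubbed GoogleTest suite; structure first, tests later.
#include <gtest/gtest.h>

TEST(ConfigParserTest, HandlesEmptyInput) {
    GTEST_SKIP() << "stub, not written yet";
}

TEST(ConfigParserTest, RejectsMalformedInput) {
    GTEST_SKIP() << "stub, not written yet";
}

// main() comes from linking GTest::gtest_main, so nothing more is needed here.
```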

With LLMs, I usually get what I want quickly. If it's not what I want, a bit of time reviewing what it did and where it went wrong usually tells me what I need to give it a better prompt.

kevin42 commented on All AI models might be the same   blog.jxmo.io/p/there-is-o... · Posted by u/jxmorris12
bbarnett · 5 months ago
LLMs don't think, nor are they intelligent or exhibiting intelligence.

Language does have constraints, yet it evolves via its users to encompass new meanings.

Thus those constraints are artificial, unless you artificially enforce static language use. And of course, for an LLM to use those new concepts, it needs to be retokenized by being trained on new data.

For example, if we trained an LLM only on books, encyclopedias, newspapers, and personal letters from 1850, it would have zero capacity to speak comprehensibly or even seem cogent on much of the modern world.

And it would forever remain in that disconnected position.

LLMs do not think, understand anything, or learn. Should you wish to call tokenization "learning", then you'd better call a clock "learning" from the gears and cogs that enable its function.

LLMs do not think, learn, or exhibit intelligence. (I feel this is not said enough).

We will never, ever get AGI from an LLM. Ever.

I am sympathetic to the wonder of LLMs. To seeing them as such. But I see some art as wondrous too. Some machinery is beautiful in execution and to use.

But that doesn't change truths.

kevin42 · 5 months ago
You made a lot of bold assertions there. It's as if you have a complete and definitive theory of human intelligence to compare it against, which, if true, would be incredible, because there isn't a scientifically accepted theory, nor is there consensus from a philosophical standpoint.

I can't say that you are wrong, you might be right, especially about AGI. And I think it's unlikely that LLMs are the direct path to AGI. But, just looking at how human brains work, it seems unlikely that we would be intelligent either if we used your same reductionist logic.

An individual neuron doesn't "think" or "understand" anything. It's a biological cell that simply fires an electrochemical signal when its input threshold is met. It has no understanding of language or context. By your logic, since the fundamental components are just simple signal processors, the brain cannot possibly learn or be intelligent. Yet, from the complex interaction of ~86 billion of these simple biochemical machines, the emergent properties of thought, understanding, and consciousness arise.
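To put the point in code (a toy threshold unit, nowhere near a real neuron, just to show how little the component-level rule involves):

```cpp
// A "neuron" reduced to its caricature: weight the inputs, sum them,
// fire if the sum crosses a threshold. No language, no context, no thought.
#include <cstddef>
#include <vector>

struct Neuron {
    std::vector<double> weights;
    double threshold = 1.0;

    bool fires(const std::vector<double>& inputs) const {
        double sum = 0.0;
        for (std::size_t i = 0; i < weights.size() && i < inputs.size(); ++i)
            sum += weights[i] * inputs[i];
        return sum >= threshold;  // all-or-nothing output
    }
};
```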

Dismissing an LLM's capabilities because its underlying operations are basically just math operating on tokenized data is like dismissing human consciousness because it's "just electrochemistry" in a network of cells. Both arguments mistake the low-level mechanism for the high-level emergent phenomenon.

kevin42 commented on Writing Code Was Never the Bottleneck   ordep.dev/posts/writing-c... · Posted by u/thunderbong
kevin42 · 5 months ago
The same thing came up in the early 2000s when businesses tried offshoring software development. It turns out that translating business requirements into technical design and implementation is one of the hard parts of software development. In a pure waterfall model this works OK, but you rarely know what you want until the end users see it. This is why agile is so important.

I hate to say it, but with over 30 years experience as a software developer in various roles, writing code has never been the hard part of delivering a successful project.

kevin42 commented on RP2350pc Open Source Hardware all in one computer   olimex.wordpress.com/2025... · Posted by u/AlexeyBrin
kevin42 · 5 months ago
I think this is a really cool project, but the problem with putting so many peripherals on such a small processor is that it's really tough to write firmware that uses all of those things at once and still fits in the memory footprint.
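A back-of-the-envelope example of what I mean (my numbers, not from the post): the RP2350 has 520 KB of on-chip SRAM, and a single 640x480 framebuffer at 16 bpp already blows past that before any other peripheral gets a byte.

```cpp
// Quick budget check: one VGA-ish framebuffer vs. the RP2350's SRAM.
#include <cstdio>

int main() {
    constexpr unsigned sramBytes = 520u * 1024u;      // RP2350 on-chip SRAM
    constexpr unsigned fbBytes   = 640u * 480u * 2u;  // 640x480 at 16 bpp
    std::printf("framebuffer needs %u KB of the %u KB available\n",
                fbBytes / 1024u, sramBytes / 1024u);  // 600 KB vs 520 KB
    return 0;
}
```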

I can see this as a great platform for prototyping though.

kevin42 commented on AGI is Mathematically Impossible 2: When Entropy Returns   philarchive.org/archive/S... · Posted by u/ICBTheory
somenameforme · 6 months ago
I don't think science and consciousness go together quite well at this point. I'll claim consciousness doesn't exist. Try to prove me wrong. Of course I know I'm wrong because I am conscious, but that's literally impossible to prove, and it may very well be that way forever. You have no way of knowing I'm conscious - you could very well be the only conscious entity in existence. This is not the case because I can strongly assure you I'm conscious as well, but a philosophical zombie would say the same thing, so that assurance means nothing.
kevin42 · 6 months ago
There is more than one theory, as well as some evidence, that consciousness may not exist in the way we'd like to think.

It may be a trick our mind plays on us. The Global Workspace Theory addresses this, and some of the predictions this theory made have been supported by multiple experiments. If GWT is correct, it's very plausible, likely even, that an artificial intelligence could have the same type of consciousness.

kevin42 commented on AGI is Mathematically Impossible 2: When Entropy Returns   philarchive.org/archive/S... · Posted by u/ICBTheory
somenameforme · 6 months ago
Consciousness is an issue. If you write a program to add 2+2, you probably do not believe some entity poofs into existence, perceives itself as independently adding 2+2, and then poofs out of existence. Yet somehow, the idea of an emergent consciousness is that if you instead get it to do 100 basic operations, or perhaps 2^100, then suddenly this becomes true? The reason one might believe this is not because it's logical or reasonable, or even supported in any way, but because people assume their own conclusion. In particular, if one takes a physicalist view of the universe, then consciousness must be a physical process and so it simply must emerge at some sufficient degree of complexity.

But if you don't simply assume physicalism then this logic falls flat. And the more we discover about the universe, the weirder things become. How insane would you sound not that long ago to suggest that time itself would move at different rates for different people at the same "time", just to maintain a perceived constancy of the speed of light? It's nonsense, but it's real. So I'm quite reluctant to assume my own conclusion on anything with regards to the nature of the universe. Even relatively 'simple' things like quantum entanglement are already posing very difficult issues for a physicalist view of the universe.

kevin42 · 6 months ago
My issue is that from a scientific point of view, physicalism is all we have. Everything else is belief, or some form of faith.

Your example about relativity is good. It might have sounded insane at some point, but it turns out, it is physics, which nicely falls into the physicalism concept.

If there is a falsifiable scientific theory that there is something other than a physical mechanism behind consciousness and intelligence, I haven't seen it.

kevin42 commented on AGI is Mathematically Impossible 2: When Entropy Returns   philarchive.org/archive/S... · Posted by u/ICBTheory
rusk · 6 months ago
It can be avoided, certainly, but can it be avoided with the current or near-term technology about which many are saying "it's only a matter of time"?
kevin42 · 6 months ago
I like the distinction you made there. My observation is that when it comes to AGI, there are those who are saying "Not possible with the current technology." and those saying "Not possible at all, because humans have [insert some characteristic here about self-awareness, true creativity, etc.] and machines don't."

I can respect the first argument. I personally don't see any reason to believe AGI is impossible, but I also don't see evidence that it is possible with the current (very impressive) technology. We may not build an AGI in my lifetime, maybe not ever, but that doesn't mean it's not possible.

But the second argument, that humans do something machines aren't capable of, always falls flat to me for lack of evidence. If we're going to dismiss the possibility of something, we shouldn't do it without evidence. We don't have a full model of human intelligence, so I think it's premature to assume we know what isn't possible. All the evidence we have is that humans are biological machines and everything follows the laws of physics, and yet, here we are. There isn't evidence that anything else is going on other than physical phenomena, and there isn't any physical evidence that a biological machine can't be emulated.

kevin42 commented on AGI is Mathematically Impossible 2: When Entropy Returns   philarchive.org/archive/S... · Posted by u/ICBTheory
ICBTheory · 6 months ago
Sure I can (and thanks for writing)

Well, given the specific way you asked that question, I confirm your self-assertion, and am quite certain that your level of Artificiality converges to zero, which would make you a GI without A...

- You stated that you "feel" generally intelligent (A's don't feel and don't have an "I" that can feel)
- Your nuanced, subtly ironic, and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity

A "précis" as you wished: Artificial — in the sense used here (apart from the usual "planfully built/programmed system" etc.) — algorithmic, formal, symbol-bound.

Humans as "cognitive systems" have some similar traits of course - but obviously, there seems to be more than that.

kevin42 · 6 months ago
>but obviously, there seems to be more than that.

I don't see how that's obvious. I'm not trying to be argumentative here, but it seems like these arguments always come down to qualia, or the insistence that humans have some sort of 'spark' that machines don't have, and therefore AGI is not possible since machines don't have it.

I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?

What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of?

u/kevin42

Karma: 1016 · Cake day: February 12, 2014