Readit News
eaglelamp commented on Why we migrated from Python to Node.js   blog.yakkomajuri.com/blog... · Posted by u/yakkomajuri
never_inline · 2 months ago
Can someone explain to a plebeian C/python/java/go programmer with a good idea about languages & runtimes:

==> what makes Erlang runtimes so special that you don't get it from common solutions for retries etc.?

eaglelamp · 2 months ago
Normal Erlang code has a fixed number of reductions (function calls) before it must yield to a scheduler. Processes also have their own stacks and heaps and run garbage collection independently. The result is that no single process can stop the whole system by monopolizing CPU or managing shared memory.

The Erlang runtime can start a scheduler for every core on a machine and, since processes are independent, concurrency can be achieved by spawning additional processes. Processes communicate by passing messages which are copied from the sender into the mailbox of the receiver.

As an application programmer all of your code will run within a process and passively benefit from these properties. The tradeoff is that concurrency is on by default and single-threaded performance can suffer. There are escape hatches to run native code, but it is more painful than writing concurrent code in a single-threaded-by-default language. The fundamental assumption of Erlang is that it is much more likely that you will need concurrency and fault tolerance than maximum single-thread performance.
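The message-passing model described above can be sketched, very loosely, in Rust using OS threads and channels. This is only an analogy I'm adding for illustration (Rust threads are far heavier than BEAM processes and have no preemptive reduction counting), but it shows the key property: the worker only ever sees copies of data delivered to its mailbox, with no shared mutable state.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // The receiving end of the channel plays the role of an Erlang
    // process's mailbox; the spawned thread plays the process.
    let (tx, rx) = mpsc::channel::<String>();

    let worker = thread::spawn(move || {
        // The worker only sees values sent into its mailbox; there is
        // no shared mutable state to corrupt.
        let mut received = Vec::new();
        for msg in rx {
            received.push(msg);
        }
        received
    });

    for word in ["hello", "from", "the", "mailbox"] {
        tx.send(word.to_string()).unwrap();
    }
    drop(tx); // closing the channel ends the worker's receive loop

    let result = worker.join().unwrap();
    assert_eq!(result.join(" "), "hello from the mailbox");
}
```

What Rust can't give you here is the BEAM's cheap per-process heaps and fair preemption; a busy loop in one Rust thread can still starve others of a core.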

eaglelamp commented on Just talk to it – A way of agentic engineering   steipete.me/posts/just-ta... · Posted by u/freediver
CGMthrowaway · 2 months ago
I'm picturing COBOL developers in the 80s saying the same thing about modern developers today (without even bringing in AI).
eaglelamp · 2 months ago
Higher level abstractions are built on rational foundations, that is the distinction. I may not understand byte code generated by a compiler, but I could research the compiler and understand how it is generated. No matter how much I study a language model I will never understand how it chose to generate any particular output.
eaglelamp commented on I Am An AI Hater   anthonymoser.github.io/wr... · Posted by u/BallsInIt
petralithic · 4 months ago
When did I say I believe AI to be intelligent or emotional? Of course I use it for economic factors, but I'm honest about it, not wrapping it up in some intellectual, solipsizing arguments. I'm not even sure what non-economic arguments you're talking about, my point is that at the end of the day most people care about the economic impact it might have on them, not anything about the technology itself.
eaglelamp · 4 months ago
I don’t think the author is hiding his economic anxiety behind solipsism. He states plainly he doesn’t like the deskilling of work.

My point is why are your economic motivations valid while his aren’t?

eaglelamp commented on I Am An AI Hater   anthonymoser.github.io/wr... · Posted by u/BallsInIt
petralithic · 4 months ago
I've talked to people like this and when you dig deep enough, it's a fear of the economic effects of it, not actually any strongly held belief of AI inherently not being intelligent or emotional. Similarly, and I'm speaking generally here, ask artists about coding AI and they won't care, and ask programmers about media generation AI and they also won't care. That's because AI outside their domain does not (ostensibly) threaten their livelihood.
eaglelamp · 4 months ago
If you dig deep enough isn’t the same thing true of people like yourself? Do you truly believe that the large language models we currently have, not some fantasy AI of the distant future, are emotional and intellectual beings? Or, are you more interested in the short term economic gains of using them? Does this invalidate your beliefs? I don’t think so, most everyday beliefs are related to economic conditions.

How could a practical LLM enthusiast make a non-economic argument in favor of their use? They’re opaque usually secretive jumbles of linear algebra, how could you make a reasonable non-economic argument about something you don’t, and perhaps can’t, reason about?

eaglelamp commented on Is the A.I. Boom Turning Into an A.I. Bubble?   newyorker.com/news/the-fi... · Posted by u/FinnLobsien
it_citizen · 4 months ago
> The global pandemic will tank the stock market

> War in Ukraine will tank the stock market

> High interest rates will tank the stock market

> Tariffs will tank the stock market

> AI will tank the stock market <- We are here

All those statements made sense to me at the time. And I have no doubt that one of these days, someone will make a correct prediction. But who the hell knows what and when.

Diversify, be reasonable and be prepared for it to happen someday. But freaking out with any new prediction of doom is not the winning strategy.

eaglelamp · 4 months ago
The primary harm of a bubble is *not* a crash in equity values, it is the misallocation of capital. The worst outcome would be for the misallocation to continue due to the intervention of asset owners with the most to lose who are also in control of the state.

All of the events you listed have had significant economic effects and required massive intervention from the state to buoy asset prices. The longer this continues the more our economy becomes geared to producing "value" for this small, and shrinking, group of owners at the expense of everyone else.

eaglelamp commented on Meta invests $14.3B in Scale AI to kick-start superintelligence lab   nytimes.com/2025/06/12/te... · Posted by u/RyanShook
krosaen · 6 months ago
Anyone know what scale does these days beyond labeling tools that would make them this interesting to Meta? Data labeling tools seem more of a traditional software application and not much to do with AI models themselves that would be somewhat easily replicated, but guessing my impression is out of date. Also now apparently their CEO is leaving [1], so the idea that they were super impressed with him doesn't seem to be the explanation.

[1] https://techcrunch.com/2025/06/13/scale-ai-confirms-signific...

eaglelamp · 6 months ago
It looks like a security/surveillance play more than anything. Scale has strong relationships with the US MIC, the current administration (predating Zuck's rebranding), and gulf states.

Their Wikipedia history section lists accomplishments that align closely with DoD's vision for GenAI. The current admin, and the western political elite generally, are anxious about GenAI developments and social unrest, the pairing of Meta and Scale addresses their anxieties directly.

eaglelamp commented on I am disappointed in the AI discourse   steveklabnik.com/writing/... · Posted by u/steveklabnik
eaglelamp · 7 months ago
There is no meaningful discourse because there is no meaningful decision at stake. The owning class has decided that “AI” will be shoved into any plausible orifice and the “discourse” online is just a reaction to a decision that has already been made.

Frankly the noise being made online about AI boils down to social posturing in nearly all cases. Even the author is striking a pose of a nuanced intellectual, but this pose, like the ones he opposes, will have no impact on events.

eaglelamp commented on An image of an archeologist adventurer who wears a hat and uses a bullwhip   theaiunderwriter.substack... · Posted by u/participant3
mlsu · 9 months ago
I was really hoping that the conversation around AI art would at least be partially centered on the perhaps now dated "2008 pirate party" idea that intellectual property, the royalty system, the draconian copyright laws that we have today are deeply silly, rooted in a fiction, and used over and over again, primarily by the rich and powerful, to stifle original ideas and hold back cultural innovation.

Unfortunately, it's just the opposite. It seems most people have fully assimilated the idea that information itself must be entirely subsumed into an oppressive, proprietary, commercial apparatus. That Disney Corp can prevent you from viewing some collection of pixels, because THEY own it, and they know better than you do about the culture and communication that you are and are not allowed to experience.

It's just baffling. If they could, Disney would scan your brain to charge you a nickel every time you thought of Mickey Mouse.

eaglelamp · 9 months ago
If we are going to have a general discussion about copyright reform at a national level, I'm all for it. If we are going to let billion dollar corporations break the law to make even more money and invent legal fictions after the fact to protect them, I'm completely against it.

Training a model is not equivalent to training a human. Freedom of information for a mountain of graphics cards in a privately owned data center is not the same as freedom of information for flesh and blood human beings.

eaglelamp commented on Visualize Ownership and Lifetimes in Rust   github.com/cordx56/rustow... · Posted by u/ljahier
gpm · 10 months ago
The borrow checker and lifetimes aren't simply a matter of performance, they are a matter of correctness. Languages without them (go, java, etc) allow for bugs that they prevent - dataraces, ConcurrentModificationException, etc. The fact that you can only write through pointers that guarantee they have unique access is what lets the language statically guarantee the absence of a wide category of bugs, and what makes it so easy to reason about rust code as a human (experienced with rust). You can't have that without the borrow checker, and without that you lose what makes rust different (it would still be a fine language, but not a particularly special one).

You could simplify rust slightly by sacrificing performance. For example you could box everything by default (like java) and get rid of `Box` the type as a concept. You could even make everything a reference counted pointer (but only allow mutation when the compiler can guarantee that the reference count is 1). You could ditch the concept of unsized types. Things like that. Rust doesn't strive to be the simplest language that it could be - instead it prefers performance. None of this is really what people complain about with the language though.

eaglelamp · 10 months ago
Couldn't the same guarantees be achieved with immutability? Of course this would be setting aside concerns with performance/resource usage, but the parent is describing an environment where these concerns are not primary.

Personally I find it much easier to grok immutable data, not just understand it while concentrating on it, than ownership rules.
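The uniqueness guarantee gpm describes can be seen in a small sketch of my own (not from either comment): shared `&` borrows may alias freely, but a `&mut` borrow is statically exclusive, so a write through it can never race with a reader.

```rust
fn main() {
    let mut scores = vec![10, 20, 30];

    // Shared (&) borrows may alias: two readers coexist happily.
    let first = &scores[0];
    let total: i32 = scores.iter().sum();
    assert_eq!((*first, total), (10, 60));

    // A mutable (&mut) borrow is guaranteed unique. Uncommenting the
    // reader below while `unique` is still live is rejected at
    // compile time (error[E0502]), which is the static guarantee.
    let unique = &mut scores;
    // let reader = &scores[0]; // error: cannot borrow `scores` as immutable
    unique.push(40);

    assert_eq!(scores, vec![10, 20, 30, 40]);
}
```

Immutability gives you the "no aliased writes" half of this for free, but the borrow checker also lets you mutate in place safely, which is where the performance argument comes in.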

eaglelamp commented on Musk-led group makes $97B bid for control of OpenAI   reuters.com/markets/deals... · Posted by u/jdoliner
eaglelamp · 10 months ago
ChatGPT gov launched in January. Musk is using DOGE to hoover up tons of government data and reportedly using 'AI' technology to analyze it. There seems to be a rush to insert 'AI' into government processes, and with the government, unlike the consumer market, being the first to market will build a significant moat.

Of course this will lead to conflict between Altman and Musk as they rush to entrench themselves within the current administration. This buyout offer could be an effective tactic to delay the pending funding from SoftBank, and in turn the kickoff of Stargate, while DOGE gets up to speed. Even a short delay could be impactful in the early days of an aggressive and fickle administration.

u/eaglelamp · Karma: 44 · Cake day: August 13, 2021