Readit News
obirunda commented on Polymarket gamblers threaten to kill me over Iran missile story   timesofisrael.com/gambler... · Posted by u/defly
waffletower · 15 hours ago
[flagged]
obirunda · 9 hours ago
The problem is that you are identifying a symptom of an ongoing societal moral decline with an economic system. Spend some time with Dostoevsky or Kant. When morality is based solely on our current nationalism/hedonism hybrid in the West, you end up with these extreme, morally dubious exploits, whether in a free or an autocratic society. I don't claim to know what can help society become more ethical. What do you propose will improve society's ethical foundations? Staunch critics of one system or another usually don't have much to offer in terms of ethics. They pontificate for or against a given system without exploring whether society's ethics rest on shaky foundations, and they spend a lot of time on the technicalities and merits of this or that implementation, when in reality no society whose ethical foundations crater recovers its ethics through policy reform alone.

There are so many examples of this disconnect. Prohibition did not make for a society that wanted less alcohol, or that felt consuming alcohol was unethical; it actually had the opposite effect, opening the door to a considerably larger problem of crime-supported production. The war on drugs was/has been no different.

And to see this as the top comment here, where you pick Adam Smith as your straw man (it could have been Karl Marx or any other thinker) and say it all boils down to this or that simple mistake... are you serious? There are much deeper issues driving this thirst for gambling we have embraced of late, among other dubious things that have been normalized. Pick whatever economic system you want, install it anywhere, and the ethical issues you currently have will still be there.

obirunda commented on Software factories and the agentic moment   factory.strongdm.ai/... · Posted by u/mellosouls
kaffekaka · a month ago
If the output is (dis)proportionally larger, the cost trade off might be the right thing to do.

And it might be the tokens will become cheaper.

obirunda · a month ago
Tokens will actually become significantly more expensive in the short term. This is not stemming from some sort of anti-AI sentiment. Two ramps are going to drive this: 1. increased demand, growing linearly at the very least, though it is likely already exponential; 2. scaling laws, which demand, well, more scale.

Future, better models will demand both more compute AND more energy. We should not underestimate how slowly energy production grows, nor the supply chains required for simply hooking things up. Some labs are commissioning their own power plants on site, but this does not truly raise the grid's growth limits: you're drawing on the same supply chain to build your own power plant.

If inference cost is not dramatically reduced, and models don't start meaningfully contributing innovations that make energy production faster and inference/training less power-hungry, the only way to control demand is to raise prices. Current inference revenue does not cover training costs. Labs can probably keep doing that on funding alone, but once the demand curve hits the limits of power production, only one thing can slow demand, and that's raising the cost of use.

obirunda commented on Claude Opus 4.6   anthropic.com/news/claude... · Posted by u/HellsMaddy
NiloCK · a month ago
I'm getting astrology when I search for this. Any links on this?
obirunda commented on Claude Opus 4.6   anthropic.com/news/claude... · Posted by u/HellsMaddy
ck_one · a month ago
It didn't use web search, but it surely has some internal knowledge already. It's not a perfect needle-in-the-haystack problem, but Gemini Flash was much worse when I tested it last time.
obirunda · a month ago
This underestimates how much of the Internet is actually compressed into the model's weights as an integral part of them. Gemini 2.5 can recite over 75% of the first Harry Potter book verbatim.
obirunda commented on Software is mostly all you need   softwarefordays.com/post/... · Posted by u/jbmilgrom
perfmode · 2 months ago
how’s your reasoning different from LLM reasoning?
obirunda · 2 months ago
What humans are known to do, apparently without limit, is anthropomorphize. I don't think there's been a single one of these discussions where someone hasn't said LLMs don't do X as well as a human, only for someone else to interject in cult-like fashion.
obirunda commented on Ask HN: Is it still worth pursuing a software startup?    · Posted by u/newbebee
obirunda · 2 months ago
Software moats were never really moats in and of themselves; you always had to be a first mover. It's true that there are fewer and fewer first-mover opportunities, but that has less to do with recent LLM advancements and more to do with the fact that we have already solved a lot of software problems from first principles. It's partially why LLMs work so well: they pull the "widgets" from the distribution and synthesize them into your requirements. Before, we probably thought we were writing something novel when it had literally been solved 1000x over.

If you aren't a first mover, your success was always dependent on other skills, great execution across multiple disciplines, and a lot of stubbornness. The software layer has always been important, but in a support role to successful enterprises. Start-ups have always been hard to pull off for a lot of reasons unrelated to code.

If you find a disruptive algorithm (like PageRank), there is little evidence that LLMs will infer your solution by looking at your app. Everything else is just design choices, which have never been moats either. But if you have a qualitative edge, you'll make the choices that can create a recognizable brand, where someone vibing a copycat may not care as much. Nothing has changed here. Your chance of succeeding rests on your ability to reach your users and iterate in a crowded space, which is what you always had to do anyway.

There are things, however, that aren't worth working on anymore with the advent of LLMs. Some can be dismissed outright; sentiment analysis, for example. A single API call to the cheapest (even local) LLM vendor will give you SOTA classification. There are many more examples, but they are just as obvious. Essentially, the "build me a 1 billion dollar app" prompt will never work, so if you have a burning desire to build something, do it. Just remember, there never was and never will be a promise of unlimited fortunes, whatever you do.
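As a minimal sketch of the kind of single-call classification described above: the prompt construction and label parsing are shown here with the actual LLM call stubbed out, since any OpenAI-compatible chat endpoint (cloud or local) could fill that slot. The `fake_llm` stand-in and all names here are hypothetical, for illustration only.

```python
# Sketch: sentiment classification via one LLM call (hypothetical setup).
# The model call is abstracted as `complete`; wire in any chat endpoint.
from typing import Callable

LABELS = ("positive", "negative", "neutral")

def build_prompt(text: str) -> str:
    """One-shot classification prompt; the model replies with a single label."""
    return (
        "Classify the sentiment of the following text as exactly one of "
        f"{', '.join(LABELS)}. Reply with the label only.\n\nText: {text}"
    )

def parse_label(reply: str) -> str:
    """Normalize the model's reply to a known label, defaulting to neutral."""
    word = reply.strip().lower().rstrip(".")
    return word if word in LABELS else "neutral"

def classify(text: str, complete: Callable[[str], str]) -> str:
    """Classify `text` using the supplied completion function."""
    return parse_label(complete(build_prompt(text)))

# Stubbed "model" so the sketch runs offline; a real vendor call goes here.
def fake_llm(prompt: str) -> str:
    return "Positive." if "love" in prompt else "neutral"

print(classify("I love this product", fake_llm))  # -> positive
```

The point is that all the task-specific machinery fits in a prompt and a one-line parser; the model does the rest, which is why purpose-built sentiment pipelines stopped being a product.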

obirunda commented on Don't fall into the anti-AI hype   antirez.com/news/158... · Posted by u/todsacerdoti
obirunda · 2 months ago
The claim that users who don't adopt AI now will pay for it later, or some such notion, contradicts the bulls' own position; people who are bullish on AI should agree with what follows wholesale. Opus 4.5 is easier to use than GPT-3.5; it can actually one-shot a full toy project where you couldn't have dreamed of that before. Opus 4.5 isn't perfect, so people do a lot of things for a competitive advantage. But anything you build with prompt alchemy, .md rules, or whatever will be useless and futile on Opus 10: every "really good practice" is instantly absorbed by the labs, so when something great is in the wild, everyone eventually benefits through the base .md files or system prompts. Even if you feel like you have a competitive advantage right now, it will evaporate, either because the labs improve their tools or because it becomes unnecessary in future versions of the models.

The goal of the labs is for these leaps to get even bigger with every generation. The notion that some portion of the craft will be left unexplored by the labs, or that the things that are still relatively borked now will never be worked on or fixed, seems silly to me. Future versions will be easier to prompt, and the tools will do more of the heavy lifting of following up and re-rolling misinterpretations. I'd argue that a user who sleeps through all of this is likely to use a future version better than someone obsessing over their assumptions about how to coerce today's models to work; current-version hyper-users will likely carry unnecessary baggage, imo.

For now, even with Opus 4.5, the time horizon for delivering a full-stack project is not significantly different than before; it's still limited by how hard you can push it. I'd argue that someone without an understanding of how things work is unlikely to get production-grade outcomes from the current versions. The point is, if you choose to keep learning and getting better at understanding and building things that work (with AI or otherwise), you'll be just fine using the versions that have fully or mostly automated the entire process. Nobody will be left behind, only those who stop building altogether.

obirunda commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
positron26 · 8 months ago
I'm going to hold onto the Segway as an actual instance of hype the next time someone calls LLMs "hype".

LLMs have hundreds of millions of users. I just can't stress how insane this is. This wasn't built on the back of Facebook or Instagram's distribution, like Threads. Internet consumers have never embraced something so readily, so fast.

Calling LLMs "hype" is an example of cope, judging facts based on what is hoped to be true even in the face of overwhelming evidence or even self-evident imminence to the contrary.

I know people calling "hype" are motivated by something. Maybe it is a desire to contain the inevitable harm of any huge rollout, or to slow down the disruption. Maybe it's simply the egotistical instinct to be contrarian and harvest karma while we can still pretend to be debating shadows on the wall. I just want to be up front: it's not hype. Few of the people calling "hype" can actually believe it, and anyone who does simply isn't credible. That won't stop people from jockeying to protect their interests, hoping that some intersubjective truth we manufacture together will work in their favor, but my lord, is the "hype" bandwagon being dishonest these days.

obirunda · 8 months ago
It's an interesting comparison, because the Segway really didn't have any real users or explosive growth, so it was certainly hype. It was also hardware with a large cost. LLMs are indeed more akin to Google Search, where adoption is relatively frictionless.

I think the core issue is separating the perception of value from actual value. There have been a couple of studies to this effect, pointing to a misalignment toward overestimating value and productivity gains.

One reason this happens, imo, is that we push a good portion of the cognitive load of our thinking to the latter parts of the process. When we evaluate the solution, we are primed to think we have saved time if the solution is sufficiently correct; and if we have to edit or reposition it by re-rolling, we don't account for the time spent, because we may feel we didn't do anything.

I feel like this type of discussion is effectively a top topic every day. To me, the hype is not in the utility these tools do have but in their future utility. The hype rests on the premise that these tools and their next iterations can and will make all knowledge-based work obsolete, but, crucially, will also yield value in areas of real need: cancer, aging, farming, climate, energy, etc.

If these tools stop short of those outcomes, then all the investment SV has committed to this point will have been over-investment and

u/obirunda

Karma: 33 · Cake day: July 1, 2021