Readit News

prohobo commented on The Adolescence of Technology   darioamodei.com/essay/the... · Posted by u/jasondavies
Lerc · 14 days ago
One of my formative impressions of AI came from the depiction of the Colligatarch from Alan Dean Foster's The I Inside.

The AI in the book almost feels like the main message masquerading as a subplot.

Asimov knew the risks, and I had assumed until fairly recently that the lessons and explorations he imparted into the Robot books had provided a level of cultural knowledge of what we were about to face. Perhaps the movie I, Robot was a warning of how much the signal had decayed.

I worry that we are sociologically unprepared, and sometimes it seems wilfully so.

People discussed this potential in great detail decades ago. Indeed, the Sagan reference at the start of this post points to one of the significant contributors to the conversation, but it seems that by the time it started happening, everyone had forgotten.

People are talking in terms of who to blame, what will be taken from me, and inevitability.

Any talk of a future we might want is dismissed as idealistic or hype. Any depiction of a utopian future is met with derision far too often. Even worse, the depiction can be warped into an evil caricature of "what they really meant".

How do we know what course to take if we can't talk about where we want to end up?

prohobo · 12 days ago
I'll take a swing.

Dario's essay carefully avoids its own conclusion. He argues that AI will democratize mass casualty weapons (biology especially), that human coordination at civilizational scale is impossible, and that human-run surveillance states inevitably corrupt. But he stops short of the obvious synthesis: the only survivable path is an AI-administered panopticon.

That sounds dystopian until you think it through:

- The panopticon is coming regardless. The question is who runs it.

- Human-run = corruption, abuse, "rules for thee but not for me."

- AI-run = potentially incorruptible, no ego, no blackmail, no bullshit.

- An AI doesn't need to watch you in any meaningful sense. It processes, flags genuine threats, and ignores everything else. No human ever sees your data.

- Crucially: it watches the powerful too. Politicians, corporations, billionaires finally become actually accountable.

This is the Helios ending from Deus Ex, and it's the Culture series' premise. Benevolent AI sovereignty isn't necessarily dystopia, and it might be the only path to something like Star Trek.

The reason we can't talk about this is that it's unspeakable from inside the system. Dario can't say it (he's an AI company CEO). Politicians can't say it because it sounds insanely radical. So the discourse stays stuck on half-measures that everyone knows won't work.

I honestly believe this might be the future to work toward, because the alternatives are basically hell.

prohobo commented on Ask HN: Is the rise of AI tools going to be the next 'dot com' bust?    · Posted by u/Dicey84
prohobo · 6 months ago
Yes and no, for the respective reasons:

- Yes, I think people are already wary and have "AI fatigue" from all the chatbots and AI add-ons people have released over the past 2 years. It's already hard to convince people that your AI project isn't a wrapper flip.

- No, it's fundamentally useful in almost every industry, context, and environment.

Very similar to the 2000s internet on the one hand, and orders of magnitude more useful on the other. The difference I see coming is "AI literacy" taking over and thousands of failed startups getting immediately overrun by thousands of viable ones.

prohobo commented on Ask HN: How are you scaling AI agents reliably in production?    · Posted by u/nivedit-jain
prohobo · 6 months ago
I'm using LangGraph for my app, an AI ecommerce analyst with multiple modes (report builder and chatbot). It consumes API data and visitor sessions to build a giant report, then compresses it back down to actionable insights for online store owners. The report runs for each customer once a day, queued up with BullMQ.

It's not super complex; in fact, that seems to be the only way to get a more or less reliable agent right now. Keep the graph small, the prompts concise, the nodes and tools atomic in function, etc.

* Orchestrator choice and why: LangGraph because it seems the most robust and well established from my research at the time (about 6 months ago). It has decent documentation, and includes community-built graphs and nodes. People complain a lot about LangChain, but the general vibe around LangGraph is that it's a maturely designed framework.

* State and checkpointing: I'm using a memory checkpointer after every state change. Why? Reports can just re-run at negligible cost. For chats, my users' requirements just don't need persistent thread storage. Persistence is better managed through RAG entries.

* Concurrency control: I don't use parallel tool calling for most of my agents because it adds too much instability to graph execution. This is actually fine for chatbots and my app's reporting system (which doesn't need many tools), but I can see this being an issue for more complex agents.

* Autoscaling and cost: Well I use foundation models, not local ones. I swap out models for various tasks and customer subscription levels (e.g., gpt-5-nano with low reasoning effort for trial users, and gpt-5-mini for paying customers).

* Memory and retrieval: Vector DB for RAG tooling, normal DB for everything else. Sometimes I use the same Postgres database for both vector and normal data, to simplify architecture. I load raw contextual data into prompts (JSON dump). In my app's case, I use a 30-day rolling window of store data so I never keep anything longer than 30 days. I instead keep distilled information as permanent context, which I let the AI control the lifecycle of (create, update, delete).

* Observability: The only thing I would use evals for is prompts, but I haven't found a good tool for that yet. I use sentiment analysis for chats the AI deems "interesting" just to see if people are complaining about something.

* Safety and isolation: For reports, I filter out PII before giving data to the AI. For chats, memory checkpointing makes threads ephemeral anyway - and I just add a rate limit + message length limit. The sentiment analysis doesn't include their original messages, only a thematic summary by the AI.

* A war story: I spent weeks trying to fine-tune a prompt for the reporting agent, in which one node was tasked with A) analyzing multiple 30-day ecommerce reports, B) generating findings, C) comparing the findings to existing insights and mutating them, and finally D) creating short and punchy copy for new insights (title, description). I rewrote it like 100 times, and every time I ran it, it would screw up in a new way or in a way that had occurred 5 revisions ago. Sometimes it would work perfectly, then the next run it would screw up again, with the same data and temperature set to 0.

This, honestly, is the main problem with modern AI. My solution was to decompose the node into 4 separate ones that each handle a single task - and they still manage to screw it up quite often. It's much better, but not 100% reliable.
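A rough sketch of that decomposition, framework-agnostic; `call_llm` is a hypothetical stand-in for the real model call, and the state keys are made up for illustration, not my actual prompts:

```python
# Splitting one overloaded node into four atomic ones.
# `call_llm` is a hypothetical stand-in for a real model call.

def call_llm(prompt):
    return f"<llm output for: {prompt[:30]}>"  # stub

def analyze_reports(state):
    state["analysis"] = call_llm(f"Analyze these 30-day reports: {state['reports']}")
    return state

def generate_findings(state):
    state["findings"] = call_llm(f"List findings from: {state['analysis']}")
    return state

def reconcile_insights(state):
    state["insights"] = call_llm(
        f"Merge findings {state['findings']} into existing insights {state['existing']}"
    )
    return state

def write_copy(state):
    state["copy"] = call_llm(f"Write a short title and description for: {state['insights']}")
    return state

def run_pipeline(reports, existing):
    # one task per node; failures are easy to localize and retry
    state = {"reports": reports, "existing": existing}
    for node in (analyze_reports, generate_findings, reconcile_insights, write_copy):
        state = node(state)
    return state
```

The point of the split is blast radius: when step C goes wrong, you debug and retry one small prompt instead of one giant one.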

prohobo commented on Are a few people ruining the internet for the rest of us?   theguardian.com/books/202... · Posted by u/pseudolus
tolerance · 7 months ago
This article is rife with wishful thinking and honestly I don't know if society has ever been as harmonious as I feel like it alleges. As if there was a time where everyone just got along and things were great and you could get an egg cream for a quarter after the sock hop.

I seriously question whether there was ever a time where the masses weren't influenced by "a few people", for better or worse.

The numbers don't move me and can't be the sole arbiter of truth when the direction of humanity is involved.

So while I'm not surprised that people report feeling less inclined toward inflammatory media after disengaging it, I just don't believe that there is a grand collective that we can return to that is free from the influential few.

The issue is that there are many masses and many fews at odds to find their pair and wont to view the others as the outrageous ones.

People can hardly curate outfits at their own discretion. They're going to defer to people who are deferring to what amounts to a cell of 3-4 guys linked to a larger apparatus of taste to find out what to wear, what to watch and what to think.

That's just the way it is.

The average person is well-meaning and reasonable up until this eerie point in their life where they feel existentially threatened and thrust onto the stage of public opinion for the criticism of others.

So I think that suggesting society isn't toxic in its current form, and that we're just viewing the world through a funhouse lens because of a few bad guys on social media, is a conceited perspective. The world really is a carnival of ideas surrounding the marketplace, and the internet is its pavilion, not its public square.

And daring to suggest that there is in fact one single true direction for people to choose demands contending with all the goofy ways people are turning, and admitting that things are as bad as they appear, in spite of whatever ways we can come up with to assume the good faith of the common man.

The irony is that this same outlet will unapologetically make its bones off the incessant reporting on all the ways that society is under peril. Sometimes obscuring these reports with solicitations to fund this effort.

prohobo · 7 months ago
> The average person is well-meaning and reasonable up until this eerie point in their life where they feel existentially threatened and thrust onto the stage of public opinion for the criticism of others.

This is how I've felt for the past 8 years or so: like I've been forced to become more and more deranged, because it seems like everyone either fully supports or tacitly agrees with an insane narrative that one way or another paints me, or people like me, as an enemy.

I can't just take what anyone says at face value anymore, or give the benefit of the doubt. I know that as soon as they say a key word, behave in a specific way, or even just dress in a specific way, I'm dealing with some kind of narrative that is openly hostile. It may not even be that I disagree, just that I don't want to signal myself that way. I just want to form my own opinions, but that's usually difficult and often insulting to other people. People flip like a switch as soon as they sense you're not going to fully agree with them.

The postmodern bent of our discourse is really hard to deal with because you get immediately deconstructed into one of maybe a dozen categories when you say/do anything: lib, grifter, shill, racist, snowflake, bootlicker, chud, commie, fascist, creep, etc.

I can't even cut my hair without someone categorizing me based off of it.

I mostly consume media through an RSS feed nowadays, and it hasn't helped at all, although I now don't have as much "content" to deal with emotionally.

With RSS I don't have to relitigate arguments and ideas in my own head in order to feel secure as much as before, but the way I interact with people is still deeply warped by the entire discourse.

prohobo commented on Ask HN: Go deep into AI/LLMs or just use them as tools?    · Posted by u/pella_may
antirez · 9 months ago
My 2 cents:

1. Learn basic NNs at a simple level: build from scratch (no frameworks) a feed-forward neural network with backpropagation and train it against MNIST or something similarly simple. Understand every part of it. Just use your favorite programming language.

2. Learn (without having to implement the code, or to understand the finer parts of the implementations) how the NN architectures work and why they work. What is an encoder-decoder? Why does the first part produce an embedding? How does a transformer work? What are the logits in the output of an LLM, and how does sampling work? Why is attention quadratic? What are Reinforcement Learning and ResNets, and how do they work? Basically: you need a solid qualitative understanding of all that.

3. Learn the higher-level layer, both from the POV of open source models (how to interface with llama.cpp / ollama / ..., how to set the context window, what quantization is and how it will affect performance/quality of output) and of popular provider APIs like DeepSeek, OpenAI, Anthropic, ..., and what model is good for what.

4. Learn prompt engineering techniques that influence the quality of the output when using LLMs programmatically (as a bag of algorithms). This takes patience and practice.

5. Learn how to use AI effectively for coding. This is absolutely non-trivial, and a lot of good programmers are terrible LLM users (and end up believing LLMs are not useful for coding).

6. Don't get trapped into the idea that the news of the day (RAG, MCP, ...) is what you should spend all your energy on. This is just some useful technology surrounded by a lot of hype from people who want to get rich with AI and understand they can't compete with the LLMs themselves, so they pump the part that can be kinda "productized". Never forget that the product is the neural network itself, for the most part.
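A minimal version of point 1, as a sketch: a 2-2-1 network trained on XOR instead of MNIST so it stays self-contained. No frameworks, just sigmoids and hand-written backpropagation; the hyperparameters are arbitrary.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden units -> 1 output
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = loss()
lr = 0.5
for _ in range(2000):  # plain per-sample SGD
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)  # error times sigmoid derivative at the output
        for j in range(2):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # chain rule into the hidden layer
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

print(f"loss before: {before:.3f}, after: {loss():.3f}")
```

Once every term in that inner loop makes sense, frameworks stop being magic.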

prohobo · 9 months ago
Agreed with most of this except the last point. You are never going to make a foundational model, although you may contribute to one. Those foundational models are the product, yes, but if I may use an analogy: foundational models are like the state-of-the-art 3D renderers in games. You still need to build the game. Some 3D renderers are used/licensed for many games.

Even the basic chat UI is a structure built around a foundational model; the model itself has no capability to maintain a chat thread. The model takes context and outputs a response, every time.
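To make that concrete: the "thread" is just the wrapper re-sending the accumulated history on every call. A sketch, with `complete` as a hypothetical stand-in for any stateless provider API:

```python
# The "chat thread" lives entirely in the wrapper, not in the model.
# `complete` is a hypothetical stand-in for a stateless provider call.

def complete(messages):
    return f"reply #{sum(1 for m in messages if m['role'] == 'user')}"  # stub

def chat_turn(history, user_text):
    history.append({"role": "user", "content": user_text})
    reply = complete(history)  # the full context goes in every single time
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "hi")
chat_turn(history, "and again")
# history now holds 4 messages; the model itself remembered nothing
```

Drop the history and the model has no idea a conversation ever happened.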

For more complex processes, you need to carefully curate what context to give the model and when. There are many applications where you can say "oh, chatgpt can analyze your business data and tell you how to optimize different processes", but good luck actually doing that. That requires complex prompts and sequences of LLM calls (or other ML models), mixed with well-defined tools that enable the AI to return a useful result.

This forms the basis of AI engineering - which is different from developing AI models - and this is what most software engineers will be doing in the next 5-10 years. This isn't some kind of hype that will die down as soon as the money gets spent, a la crypto. People will create agents that automate many processes, even within software development itself. This kind of utility is a no-brainer for anyone running a business, and hits deeply in consumer markets as well. Much of what OpenAI is currently working on is building agents around their own models to break into consumer markets.
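For the "sequences of LLM calls mixed with well-defined tools" part, a minimal sketch of the control flow; `pick_tool`, the tool names, and `store-1` are all made up, and the stubs stand in for model calls:

```python
# The model only names a tool; the wrapper runs it and feeds the result back.

def get_revenue(store_id):
    return f"revenue({store_id})=1234"  # stub tool

def get_traffic(store_id):
    return f"traffic({store_id})=567"  # stub tool

TOOLS = {"get_revenue": get_revenue, "get_traffic": get_traffic}

def pick_tool(question):
    # a real LLM call would choose here; this stub routes on a keyword
    name = "get_revenue" if "revenue" in question else "get_traffic"
    return name, "store-1"

def answer(question):
    name, arg = pick_tool(question)
    result = TOOLS[name](arg)          # the tool returns ground truth
    return f"Based on {result}: ..."   # a second LLM call would phrase this
```

The design choice is that the model never touches the data directly: well-defined tools return ground truth, and the wrapper owns the loop.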

I recommend anyone interested in this to read this book: https://www.amazon.com/AI-Engineering-Building-Applications-...

prohobo commented on 4chan Sharty Hack And Janitor Email Leak   knowyourmeme.com/memes/ev... · Posted by u/LookAtThatBacon
brigandish · 10 months ago
That's not what the paradox of tolerance says, nor is it relevant. Popper gave two explicit standards for working out who is intolerant:

- they shun debate ("begin by denouncing all argument", "forbid their followers to listen to rational argument")

- they use violence instead ("answer arguments by the use of their fists or pistols")

I, for one, prefer having peaceful Nazis to the other sort, and to - as Popper puts it - "counter them by rational argument and keep them in check by public opinion". Unless 4chan officials or the Nazis on 4chan were meeting both standards then I fail to see a connection.

Were 4chan or the 4chan Nazis doing so?

prohobo · 10 months ago
i.e. if you're shunning debate and deplatforming people based on ideological disputes, you're also a Nazi.

prohobo commented on Regime Change in the West   lrb.co.uk/the-paper/v47/n... · Posted by u/PaulHoule
zombiwoof · 10 months ago
Which side should we be on?

prohobo · 10 months ago
Reform of the economic system, I presume.
