Readit News
sonink commented on When does MCP make sense vs CLI?   ejholmes.github.io/2026/0... · Posted by u/ejholmes
sonink · 12 days ago
Agree. MCP isn't really required. Skills/CLI/API are good enough.

At the AI startup I work at, we never bothered building MCPs; it just never made sense.

And we were using skills before Claude started calling them skills, so they're kind of supported by default. Skills, a CLI, and curl API requests: that's pretty much all you need.
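As a sketch of what that looks like in practice, a "skill" can be nothing more than a small shell function that wraps a curl call. Everything here (the endpoint, the token variable, the function name) is invented for illustration; the function only prints the command it would run, so the sketch works without a network:

```shell
#!/bin/sh
# Hypothetical sketch: a "skill" as a tiny script that shells out to curl,
# instead of standing up an MCP server. API_TOKEN and api.example.com are
# placeholders, not a real API.
build_request() {
  # Print the curl command the agent would execute for a plain HTTP API call.
  printf 'curl -s -H "Authorization: Bearer %s" "https://api.example.com/search?q=%s"\n' \
    "$API_TOKEN" "$1"
}

API_TOKEN=demo
build_request "open+bugs"
```

The point of the pattern is that the model only needs to know how to run a command line; there is no protocol layer to implement or maintain.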

sonink commented on A Model of a Mind   tylerneylon.com/a/mind_mo... · Posted by u/adamesque
mylastattempt · 2 years ago
I very much agree with this line of thought. It seems that for humans the default mode of operation is to think only of what is possible within the foreseeable future, rather than of a reality that includes what seems impossible at the time.

In my opinion, this is easily noticeable when you try to discuss any system, be it political or economical, that spans multiple countries and interests. People will just revert to whatever is closest to them, rather than being able to foresee a larger cascading result from some random event.

Perhaps this is more of a rant than a comment (apologies), but I suppose it would be interesting to have an online space to discuss where things are headed on a logical level, without emotion, ideals, or the ridiculous idea that humanity must persevere. Just thinking out what could happen in the next 5, 10, and 99 years.

sonink · 2 years ago
> I suppose it would be interesting to have an online space to discuss where things are headed on a logical level, without emotion and ideals and the ridiculous idea that humanity must persevere.

Absolutely. Happy to be part of it if you are able to set it up.

sonink commented on A Model of a Mind   tylerneylon.com/a/mind_mo... · Posted by u/adamesque
tsimionescu · 2 years ago
Model training seems to me to be much closer to simulating the evolution of the human mind starting from single-cell bacteria, rather than the development of the mind of a baby up to a fully functional human. If so, then sensory inputs and interaction with the physical world through them were absolutely a crucial part of how minds evolved, so I find your approach a priori very unlikely to have a chance at success.

To be clear, my reasoning is that this is the only plausible explanation for the extreme difference between how much data an individual human needs to learn language and how much data an LLM needs to reach its level of simulation. Humanity collectively probably needed similar amounts of data as LLMs do to get here, but it was spread across a billion years of evolution from simple animals to Homo sapiens.

sonink · 2 years ago
> If so, then sensory inputs and interaction with the physical through them were absolutely a crucial part of how minds evolved, so I find your approach a priori very unlikely to have a chance at success.

If that were the case, people who were born blind would demonstrate markedly reduced intelligence. I don't think that is the case, but you can correct me if I am wrong. A blind person might take longer to truly 'understand' and 'abstract' something, but there is little evidence that their capacity for abstraction isn't as good as that of people who can see.

Agree that sensory inputs and interaction were absolutely critical to how minds evolved, but when we talk about AI, model training replaces that part as well, not just the evolution.

Evolution made us express emotions when we are hungry, for example. But your laptop will also let you know when its battery is out of juice. Human design inspired by evolution can create systems that mimic its behaviour and function.

sonink commented on A Model of a Mind   tylerneylon.com/a/mind_mo... · Posted by u/adamesque
ilaksh · 2 years ago
It's a really fascinating topic, but I wonder if this article could benefit from any of the extensive prior work in some way. There is actually quite a lot of work on AGI and cognitive architecture out there. For a more recent and popular take centered around LLMs, see David Shapiro.

Before that, you can look into the AGI conference people, like Ben Goertzel and Pei Wang. And actually the whole history of decades of AI research before it became about narrow AI.

I'd also like to suggest that creating something that truly closely simulates a living intelligent digital person is incredibly dangerous, stupid, and totally unnecessary. The reason I say that is because we already have superhuman capabilities in some ways, and the hardware, software and models are being improved rapidly. We are on track to have AI that is dozens if not hundreds of times faster than humans at thinking and much more capable.

If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

Don't get me wrong, I love AI and my whole life is planned around agents and AI. But I no longer believe it is wise to try to go all the way and create a "real" living digital species. And I know it's not necessary: we can create effective AI agents without actually emulating life. We certainly don't need full autonomy, self-preservation, real suffering, reproductive instincts, etc. But that seems to be the path he is going down in this article. I suggest leaving some of that out very deliberately.

sonink · 2 years ago
> If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.

I believe it is almost certain that we will make something like this and that it will out-compete us. The bigger problem here is that too few people believe this to be a possibility. And by the time this certainty becomes apparent to a larger set of people, it might be too late to tone this down.

AI isn't like the atom bomb. The bomb didn't have agency. Once it was built, we still had time to think about how to deploy it, or not. We had time to work toward a global consensus on limiting its use. But once AI manifests as AGI, it might be too late to shut it down.

sonink commented on A Model of a Mind   tylerneylon.com/a/mind_mo... · Posted by u/adamesque
sonink · 2 years ago
The model is interesting. This is similar in parts to what we are building at nonbios. For example, sensory inputs are not required to simulate a model of a mind: if a human cannot see, the human mind is still clearly human.
sonink commented on Q&A: Douglas Hofstadter on why AI is far from intelligent   qz.com/1088714/qa-douglas... · Posted by u/pilingual
sonink · 8 years ago
> What frightens me is the scenario of human thought being overwhelmed and left in the dust. Not being aided or abetted by computers, but being completely overwhelmed, and we are to computers as cockroaches or fleas are to us. That would be scary.

I suspect our expectation of GAI is unreasonable and we will sooner or later have to reconcile it with a different, less anthropomorphic expression of intelligence and consciousness. It might not be required for AI to be (anthropomorphically) intelligent or conscious for it to 'take over'. In fact, it might be a huge advance over mankind that it is not.

sonink commented on GM Apples That Don’t Brown to Reach U.S. Shelves This Fall   technologyreview.com/s/60... · Posted by u/rbanffy
sonink · 8 years ago
It was a bit shocking to me when I first walked into an American supermarket to see tomatoes all the same large size and shiny red color. This is in sharp contrast to what we get in India, where they come in all sizes and in different shades of orange, green, and red.

Even though the American ones were instantly attractive, it slowly dawned on me that perhaps something was wrong. Now I appreciate the Indian vegetables a lot more.

sonink commented on Bitcoin is fiat money, too   economist.com/blogs/freee... · Posted by u/davidw
8973417983461 · 8 years ago
> But there are countries in Africa who can already do better by simply leapfrogging to bitcoin and ditching their national currencies.

I doubt this would do any good for them.

* Their currency would be totally exposed to 3rd parties.

* They would lose control over exchange rates, which, if stable and well managed, are an important tool for attracting investment.

* AFAIK some Chinese private companies control a large part of the mining network. Basically, the central bank would be in private, and foreign, hands.

* The slow transactions would make it totally infeasible for everyday use, especially as people there have limited access to the necessary technologies (stable network connections across the country, stable electric power everywhere), so daily transactions of ordinary people would either fall back to barter or use some fiat paper money, e.g. USD.

I totally don't get how you could reach this conclusion; your whole post is SV-bubble wishful thinking with some trendy buzzwords, e.g. software eating the Fed, the Fed being replaced by code, Bitcoin doing better than central banks. If a currency loses 30% of its value in a single day, that is not a sign of health, and this happened this very week with bitcoin. Actually, Bitcoin does its job worse than an African dictatorship's currency, if its job is being a fiat currency that is useful for people in daily life.

I doubt its job is that, so it may do its job well, but for this task it is unsuitable.

sonink · 8 years ago
Everything that you mentioned is most likely a shortcoming of the current version of Bitcoin. That said, it is not a big leap of faith to expect that each of these will be rectified in due course, if not with updates to Bitcoin, then with another coin.

My post wasn't just about Bitcoin specifically, but about the entire blockchain ecosystem.

u/sonink

Karma: 299 · Cake day: November 23, 2007

About: nonbios.ai