

lukeplato commented on Stargate Project: SoftBank, OpenAI, Oracle, MGX to build data centers   apnews.com/article/trump-... · Posted by u/tedsanders
non- · 7 months ago
Any clues as to how they plan to invest $500 billion? What infrastructure are they planning that will cost that much?
lukeplato · 7 months ago
hopefully nuclear power plants
lukeplato commented on Norepinephrine-mediated slow vasomotion drives glymphatic clearance during sleep   cell.com/cell/abstract/S0... · Posted by u/Jimmc414
lukeplato · 7 months ago
Michael Edward Johnson has an interesting theory called vasocomputation with the core hypothesis:

> vasomuscular tension stabilizes local neural patterns. A sustained thought is a pattern of vascular clenching that reduces dynamic range in nearby neurons. The thought (congealed pattern) persists until the muscle relaxes.

https://opentheory.net/2023/07/principles-of-vasocomputation...

https://x.com/johnsonmxe/status/1863603206649208983

lukeplato commented on Agents Are Not Enough   arxiv.org/abs/2412.16241... · Posted by u/awaxman11
tonetegeatinst · 8 months ago
Somewhat related, but here's my take on superintelligence and AGI. I have worked with CNNs, GNNs, and other old-school AI methods, but I don't have the resources to build a real SOTA LLM, though I do use and tinker with LLMs occasionally.

If AGI or SI (superintelligence) is possible, and that is an if, I don't think LLMs are going to be a silver-bullet solution. Just as in the real world we have people dedicated to a single task in their field, like lawyers, construction workers, doctors, and brain surgeons, I see the current best path forward as a "mixture of experts". We know LLMs are pretty good at what I've seen some refer to as NLP problems, where the model input is a tokenized string. However, I would argue an LLM will never build a trained model like Stockfish or DeepSeek. Certain model types seem suited to certain kinds of problems or inputs. True AGI or SI would stop trying to be a grandmaster of everything and would instead know which method or model should be applied to a given problem. We still do not know if it is possible to combine the knowledge of different kinds of neural networks, like LLMs and convolutional networks, and while it's certainly worth exploring, it is foolish to pin all hope on a single approach. I think the first step would be to create a new type of model that, given a problem of any type, knows the best method to solve it, and that doesn't rely on itself but rather on a mixture of agents or experts. They don't even have to be LLMs; they could be anything.
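A minimal sketch of the kind of router described above, assuming each "expert" is just a callable and the task classifier is a stub; all names here (EXPERTS, classify_task, route) are illustrative, not any existing library:

```python
# Hypothetical router that dispatches each problem to a specialist "expert"
# instead of asking one model to handle everything. Purely illustrative.
from typing import Callable, Dict

# Each expert is a callable that knows how to handle one kind of problem.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "nlp":    lambda text: f"[LLM answer for] {text}",
    "chess":  lambda fen:  f"[engine move for] {fen}",
    "vision": lambda path: f"[CNN label for] {path}",
}

def classify_task(problem: str) -> str:
    """Toy task classifier; a real router could itself be a learned model."""
    if problem.endswith((".png", ".jpg")):
        return "vision"
    if "/" in problem and " " not in problem:  # crude FEN-like board string check
        return "chess"
    return "nlp"

def route(problem: str) -> str:
    expert = EXPERTS[classify_task(problem)]
    return expert(problem)

print(route("Summarize this paragraph about agents."))
print(route("r1bqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"))
```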

Where this would really take off is if the AI were able to identify a problem it can't solve and invent a new approach, or multiple approaches, on its own, because then we wouldn't have to be the ones who develop every expert.

lukeplato · 8 months ago
I don't see why a mixture of experts couldn't be distilled into a single model with a unified latent space
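A rough sketch of what that distillation could look like, assuming PyTorch and that the experts share an output space; the function and variable names are illustrative only:

```python
# Hedged sketch: distill several teacher "experts" into one student model by
# matching the student's distribution to the averaged expert outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_step(student, experts, x, optimizer, T=2.0):
    """One training step of ensemble-to-student knowledge distillation."""
    with torch.no_grad():
        # Average the experts' temperature-softened predictions into one teacher signal.
        teacher_probs = torch.stack(
            [F.softmax(e(x) / T, dim=-1) for e in experts]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student(x) / T, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: two random "experts" distilled into one student on dummy data.
experts = [nn.Linear(16, 4), nn.Linear(16, 4)]
student = nn.Linear(16, 4)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
print(distill_step(student, experts, torch.randn(8, 16), opt))
```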


lukeplato commented on Mysterious New Jersey drone sightings prompt call for 'state of emergency'   theguardian.com/us-news/2... · Posted by u/anigbrowl
lukeplato · 9 months ago
reading this thread was a good reminder that being intelligent but closed off to alternative hypotheses is the same thing as being ignorant

u/lukeplato

Karma: 1469 · Cake day: March 8, 2014
About
https://github.com/lukepereira