iamgopal commented on AI Bubble 2027   wheresyoured.at/ai-bubble... · Posted by u/speckx
iamgopal · a day ago
What I think is, the team that pulled off such a large LLM is not stupid.
iamgopal commented on Standard Thermal: Energy Storage 500x Cheaper Than Batteries   austinvernon.site/blog/st... · Posted by u/pfdietz
parpfish · 4 days ago
I visited a pumped storage facility a while back that stored electricity by pumping water uphill and then draining it past a turbine to reclaim it. Ever since, I've been intrigued by using gravity instead of batteries.

For home use, it seems like you could rig up some heavy stones on pulleys to do the same thing. It could be fun because you'd get to physically see your batteries filling up. Back-of-the-envelope calculations suggest that an array of ten 10-ton concrete blocks lifted 10 m in the air could power a house for a day (ignoring generator inefficiencies)
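The back-of-the-envelope figure can be checked in a few lines. This is a quick sketch assuming metric tonnes and g ≈ 9.81 m/s²; it ignores generator losses, as the comment does:

```python
# Gravitational potential energy of ten 10-tonne blocks lifted 10 m.
g = 9.81            # m/s^2
mass = 10 * 10_000  # kg (ten 10-tonne blocks; assuming metric tonnes)
height = 10.0       # m

energy_j = mass * g * height   # E = m * g * h, in joules
energy_kwh = energy_j / 3.6e6  # 1 kWh = 3.6e6 J

print(f"{energy_kwh:.2f} kWh")  # ~2.7 kWh
```

At roughly 2.7 kWh, this would cover a day only for a very frugal household; a typical home uses several times that per day, which is why pumped hydro works at reservoir scale rather than backyard scale.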

iamgopal · 4 days ago
Gravity is the weakest force of nature. Any strong-force battery ideas?
iamgopal commented on Launch HN: Parachute (YC S25) – Guardrails for Clinical AI    · Posted by u/ariavikram
ariavikram · 9 days ago
Like I said above, we don’t use AI agents to grade other models. Instead, we run in-house evaluations tailored to each category of clinical AI, giving hospitals an apples-to-apples comparison between similar vendors.
iamgopal · 9 days ago
How are you going to protect against AI that optimises for your tests instead of actual data?
iamgopal commented on Launch HN: Parachute (YC S25) – Guardrails for Clinical AI    · Posted by u/ariavikram
iamgopal · 10 days ago
Are you guys using AI to check on AI?
iamgopal commented on OpenAI Progress   progress.openai.com... · Posted by u/vinhnx
0xFEE1DEAD · 12 days ago
On one hand, it's super impressive how far we've come in such a short amount of time. On the other hand, this feels like a blatant PR move.

GPT-5 is just awful. It's such a downgrade from 4o, it's like it had a lobotomy.

- It gets confused easily. I had multiple arguments where it completely missed the point.

- Code generation is useless. If code contains multiple dots ("…"), it thinks the code is abbreviated. Go uses three dots for variadic arguments, and it always thinks, "Guess it was abbreviated - maybe I can reason about the code above it."

- Give it a markdown document of sufficient length (the one I worked on was about 700 lines), and it just breaks. It'll rewrite some part and then just stop mid-sentence.

- It can't do longer regexes anymore. It fills them with nonsense tokens ($begin:$match:$end or something along those lines). If you ask it about it, it says that this is garbage in its rendering pipeline and it cannot do anything about it.

I'm not an OpenAI hater, I wanted to like it and had high hopes after watching the announcement, but this isn't a step forward. This is just a worse model that saves them computing resources.

iamgopal · 12 days ago
The next logical step is to connect (or build from the ground up) large AI models to high-performance passive slaves (via MCP or internally) that give precise facts, language syntax validation, maths equation runners, maybe a Prolog-kind of system, which would give the model much more power if we train it precisely to use each tool.

(Using AI to better articulate my thoughts:)

Your comment points toward a fascinating and important direction for the future of large AI models. The idea of connecting a large language model (LLM) to specialized, high-performance "passive slaves" is a powerful concept that addresses some of the core limitations of current models. Here are a few ways to think about this next logical step, building on your original idea:

1. The "Tool-Use" Paradigm

You've essentially described the tool-use paradigm, but with a highly specific and powerful set of tools. Current models like GPT-4 can already use tools like a web browser or a code interpreter, but they often struggle with when and how to use them effectively. Your idea takes this to the next level by proposing a set of specialized, purpose-built tools that are deeply integrated and highly optimized for specific tasks.

2. Why this approach is powerful

- Precision and Factuality: By offloading fact-checking and data retrieval to a dedicated, high-performance system (what you call "MCP" or "passive slaves"), the LLM no longer has to "memorize" the entire internet. Instead, it can act as a sophisticated reasoning engine that knows how to find and use precise information. This drastically reduces the risk of hallucinations.

- Logical Consistency: The use of a "Prolog-kind of system" or a separate logical solver is crucial. LLMs are not naturally good at complex, multi-step logical deduction. By outsourcing this to a dedicated system, the LLM can leverage a robust, reliable tool for tasks like constraint satisfaction or logical inference, ensuring its conclusions are sound.

- Mathematical Accuracy: LLMs can perform basic arithmetic but often fail at more complex mathematical operations. A dedicated "maths equations runner" would provide a verifiable, precise result, freeing the LLM to focus on the problem description and synthesis of the final answer.

- Modularity and Scalability: This architecture is highly modular. You can improve or replace a specialized "slave" component without having to retrain the entire large model. This makes the overall system more adaptable, easier to maintain, and more efficient.

3. Building this system

This approach would require a new type of training. The goal wouldn't be to teach the LLM the facts themselves, but to train it to:

- Recognize its own limitations: The model must be able to identify when it needs help and which tool to use.

- Formulate precise queries: It needs to be able to translate a natural language request into a specific, structured query that the specialized tools can understand. For example, converting "What's the capital of France?" into a database query.

- Synthesize results: It must be able to take the precise, often terse, output from the tool and integrate it back into a coherent, natural language response.

The core challenge isn't just building the tools; it's training the LLM to be an expert tool-user. Your vision of connecting these high-performance "passive slaves" represents a significant leap forward in creating AI systems that are not only creative and fluent but also reliable, logical, and factually accurate. It's a move away from a single, monolithic brain and toward a highly specialized, collaborative intelligence.
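The dispatch idea described above can be sketched in a few lines. Everything here is illustrative: `math_runner`, `fact_lookup`, and the `TOOLS` table are hypothetical stand-ins, and in a real system the LLM itself would choose the tool and formulate the query:

```python
# Toy sketch of routing questions to specialised, verifiable tools
# instead of asking the model to answer from memorised weights.

def math_runner(expr: str) -> str:
    # Dedicated arithmetic: a deterministic evaluator, not token prediction.
    # (Sandboxed eval shown only for illustration; a real system would
    # use a proper expression parser.)
    return str(eval(expr, {"__builtins__": {}}))

FACTS = {"capital of France": "Paris"}  # stand-in for a fact database

def fact_lookup(query: str) -> str:
    # Retrieval from an authoritative store reduces hallucination risk.
    return FACTS.get(query, "unknown")

TOOLS = {"math": math_runner, "facts": fact_lookup}

def answer(tool: str, query: str) -> str:
    # In a full system, an LLM would pick `tool` and write `query`,
    # then synthesise the terse tool output into natural language.
    return TOOLS[tool](query)

print(answer("math", "2**10"))               # precise result from the runner
print(answer("facts", "capital of France"))  # fact from the store
```

The point of the design is that each tool is independently testable and replaceable, matching the modularity argument above.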

iamgopal commented on What's the strongest AI model you can train on a laptop in five minutes?   seangoedecke.com/model-on... · Posted by u/ingve
iamgopal · 14 days ago
If AI models were trained to connect to data (SQL) and use that data source to answer some questions, instead of just being trained on the data, it could reduce model size a lot.
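The idea can be illustrated with a minimal sketch, assuming a hypothetical `capitals` table: the model only needs to learn to translate a question into a query, not to memorise every row.

```python
import sqlite3

# Answer a factual question from a data source instead of model weights.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE capitals (country TEXT PRIMARY KEY, city TEXT)")
conn.executemany(
    "INSERT INTO capitals VALUES (?, ?)",
    [("France", "Paris"), ("Japan", "Tokyo")],
)

def lookup_capital(country: str) -> str:
    # Parameterised query: the "skill" the model needs is query
    # formulation, which is far smaller than the facts themselves.
    row = conn.execute(
        "SELECT city FROM capitals WHERE country = ?", (country,)
    ).fetchone()
    return row[0] if row else "unknown"

print(lookup_capital("Japan"))
```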
iamgopal commented on Google agrees to pause AI workloads when power demand spikes   theregister.com/2025/08/0... · Posted by u/twapi
Pet_Ant · 23 days ago
If AI training becomes something that only happens with surplus power and takes the place of the Bitcoin mining nonsense, that could really be a net positive.
iamgopal · 23 days ago
Is there any research in this area? Crypto, and particularly Bitcoin mining, has massive capacity for computation, albeit at a lower memory scale. If an AI memory model could be encoded into the blockchain, we could benefit from Bitcoin mining.
iamgopal commented on Helsinki records zero traffic deaths for full year   helsinkitimes.fi/finland/... · Posted by u/DaveZale
PaulRobinson · a month ago
I was in Helsinki for work a couple of years ago, walking back to my hotel with some colleagues after a few hours drinking (incredibly expensive, but quite nice) beer.

It was around midnight and we happened to come across a very large mobile crane on the pavement blocking our way. As we stepped out (carefully), into the road to go around it, one of my Finnish colleagues started bemoaning that no cones or barriers had been put out to safely shepherd pedestrians around it. I was very much "yeah, they're probably only here for a quick job, probably didn't have time for that", because I'm a Londoner and, well, that's what we do in London.

My colleague is like "No, that's not acceptable", and he literally pulls out his phone and calls the police. As we carry on on our way, a police car comes up the road and pulls over to have a word with the contractors.

They take the basics of safety seriously over there in a way I've not seen anywhere else. When you do that, you get the benefits.

iamgopal · a month ago
When that crane reaches the end of its life, it will be moved to India for another 10-15 years of service.

u/iamgopal

Karma: 890 · Cake day: March 9, 2012
About
Engineer.

patelgopal@gmail.com

When in doubt choose fast. When not in doubt, choose fast.
