noiv commented on The Singularity will occur on a Tuesday   campedersen.com/singulari... · Posted by u/ecto
stego-tech · 2 days ago
This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens matters less than whether enough people believe it will happen and act accordingly.

And, yep! A lot of people absolutely believe it will and are acting accordingly.

It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.

noiv · 2 days ago
"If men define situations as real, they are real in their consequences."

The Thomas theorem is a sociological theory formulated in 1928 by William Isaac Thomas and Dorothy Swaine Thomas.

https://en.wikipedia.org/wiki/Thomas_theorem

noiv commented on Learning from context is harder than we thought   hy.tencent.com/research/1... · Posted by u/limoce
cs702 · 6 days ago
The problem is even more fundamental: Today's models stop learning once they're deployed to production.

There's pretraining, training, and finetuning, during which model parameters are updated.

Then there's inference, during which the model is frozen. "In-context learning" doesn't update the model.

We need models that keep on learning (updating their parameters) forever, online, all the time.

noiv · 6 days ago
> models that keep on learning

These will just drown in their own data; the real task is consolidating and pruning learned information. So, basically, they need to 'sleep' from time to time. However, it's hard to sort out irrelevant information without a filter. Our brains have learned over millennia to filter because survival in an environment gives purpose.

Current models do not care whether they survive or not. They lack grounded relevance.

noiv commented on Show HN: Zero – Serverless ECMWF weather visualization (WebGPU)   zero.hypatia.earth/... · Posted by u/noiv
noiv · 18 days ago
Zero is a serverless weather globe rendering ECMWF forecast data directly in your browser using WebGPU.

Zero backend. Zero servers. Zero cost.

As climate extremes become more frequent, understanding forecast hazards becomes survival literacy. Zero makes professional ECMWF IFS data accessible without commercial infrastructure — forkable, self-hostable, resilient. Inspired by Cameron Beccario's earth.nullschool.net, which pioneered browser atmospheric visualization.

Happy to discuss implementation details.

Technical highlights:

- No backend - runs entirely client-side

- Native O1280 grid (6.6M points) sampled directly in fragment shaders - no regridding to textures

- HTTP Range requests fetch ~500KB slices from 4-8MB forecast files on S3 (see the sketch after this list)

- Works offline after first load (Service Worker caching)

- Animated LOD transitions for graticule grid - line density adapts to zoom level
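The range fetch is roughly the following (a minimal TypeScript sketch; the URL and byte offsets are illustrative placeholders, not Zero's actual file layout):

```typescript
// Fetch one field's byte slice from a large forecast file via HTTP Range.
// S3 answers with 206 Partial Content and only the requested bytes.
async function fetchSlice(
  url: string,
  start: number,
  end: number, // inclusive, per the Range header spec
): Promise<ArrayBuffer> {
  const res = await fetch(url, {
    headers: { Range: `bytes=${start}-${end}` },
  });
  if (res.status !== 206) {
    // A 200 here would mean the server ignored the Range header
    // and is sending the whole multi-MB file.
    throw new Error(`expected 206 Partial Content, got ${res.status}`);
  }
  return res.arrayBuffer();
}

// e.g. a ~500 KB slice: await fetchSlice(forecastUrl, offset, offset + 512_000 - 1);
```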

GPU pipeline:

- Binary search in WGSL for irregular Gaussian grid lookup (precomputed LUTs for latitude positions and ring offsets)

- Marching squares compute shader for isobar contours

- Streamline tracing with Rodrigues rotation for wind flow animation

- Fibonacci sphere for uniform seed point distribution (8K-32K wind lines; see the sketch after this list)

- Globe rendered via fullscreen triangle (ray-sphere intersection in fragment shader)

- Sub-3ms frame times on M1
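The Fibonacci-sphere seeding is a standard technique; in TypeScript it amounts to the following (a sketch of the textbook golden-angle spiral, not Zero's exact code):

```typescript
// Place n seed points near-uniformly on the unit sphere by stepping
// the golden angle in longitude while sweeping latitude linearly.
function fibonacciSphere(n: number): Float32Array {
  const pts = new Float32Array(n * 3); // packed xyz triples
  const golden = Math.PI * (3 - Math.sqrt(5)); // golden angle, ~2.39996 rad
  for (let i = 0; i < n; i++) {
    const y = 1 - (2 * (i + 0.5)) / n; // sweep y from ~1 down to ~-1
    const r = Math.sqrt(1 - y * y);    // radius of that latitude circle
    const theta = golden * i;          // longitude advances by the golden angle
    pts[i * 3] = r * Math.cos(theta);
    pts[i * 3 + 1] = y;
    pts[i * 3 + 2] = r * Math.sin(theta);
  }
  return pts;
}

// e.g. 16K wind-line seeds: const seeds = fibonacciSphere(16_384);
```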

What didn't work:

- Regridding to textures first - too slow for 6.6M points, quality loss from interpolation

- Geometry-based globe mesh - vertex count explosion at high detail

- CPU-side contour generation - latency killed interactivity

Storage: Caches weather data locally for offline use. Can grow to several GB with extended exploration. Use the "nuke" option in settings to clear everything.
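If you're wondering what "nuke" does under the hood, presumably something like the standard Cache Storage API calls below; this is an assumption on my part, not a quote from the repo:

```typescript
// Delete every Cache Storage bucket this origin has created.
// Assumed sketch of a "clear everything" option; caches.keys()
// and caches.delete() are the standard browser Cache Storage API.
async function nukeCaches(): Promise<void> {
  const names = await caches.keys();
  await Promise.all(names.map((name) => caches.delete(name)));
}
```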

Data hosted by Open-Meteo via the AWS Open Data Sponsorship Program — bandwidth is free for everyone.

Stack: TypeScript, WebGPU, Mithril, Zod, Immer

Mirror: https://hypatia-earth.github.io/zero

Source: https://github.com/hypatia-earth/zero

noiv commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
noiv · 2 months ago
I learned the hard way that when Claude has two conflicting pieces of information in Claude.md, it tends to ignore both. So precise language is key: don't use terms like 'object', which may have different meanings in different fields.
noiv commented on Bronze Age mega-settlement in Kazakhstan has advanced urban planning, metallurgy   archaeologymag.com/2025/1... · Posted by u/CGMthrowaway
noiv · 2 months ago
Looking at the properly aligned buildings, I realized school never prepared me to think of city planner as a Bronze Age job. How come we call mobile phones progress?
noiv commented on The evolution of rationality: How chimps process conflicting evidence   arstechnica.com/science/2... · Posted by u/rbanffy
noiv · 3 months ago
I hope we never find out how chimps discuss the last paragraph:

... Sometimes, at least in humans, social interactions can also increase our irrationality instead. But chimps don’t seem to have this problem. Engelmann’s team is currently running a study focused on whether the choices chimps make are influenced by the choices of their fellow chimps. “The chimps only followed the other chimp’s decision when the other chimp had better evidence,” Engelmann says. “In this sense, chimps seem to be more rational than humans.”

noiv commented on The Case That A.I. Is Thinking   newyorker.com/magazine/20... · Posted by u/ascertain
brabel · 3 months ago
This argument comes up often but can be easily dismissed. Make up a language and explain it to the LLM like you would to a person. Tell it to only use that language now to communicate. Even earlier AI was really good at this. You will probably move the goal posts and say that this is just pattern recognition, but it still fits nicely within your request for something that no one ever came up with.
noiv · 3 months ago
Ask ChatGPT about ConLang. It knows. Inventing languages was solved a hundred years ago with Esperanto.
noiv commented on The Case That A.I. Is Thinking   newyorker.com/magazine/20... · Posted by u/ascertain
tkz1312 · 3 months ago
Having seen LLMs so many times produce coherent, sensible and valid chains of reasoning to diagnose issues and bugs in software I work on, I am at this point in absolutely no doubt that they are thinking.

Consciousness or self-awareness is of course a different question, and one whose answer seems less clear right now.

Knee-jerk dismissal of the evidence in front of your eyes because you find it unbelievable that we can achieve true reasoning via scaled matrix multiplication is understandable, but it also betrays a lack of imagination and flexibility of thought. The world is full of bizarre wonders, and this is just one more to add to the list.

noiv · 3 months ago
Different PoV: You have a local bug and ask the digital hive mind for a solution, but someone already solved the issue and their solution was incorporated... LLMs are just very efficient at compressing billions of solutions into a few GB.

Try asking something no one has ever come up with a solution for.

u/noiv

Karma: 511 · Cake day: November 27, 2010