Readit News
2snakes commented on 1D Conway's Life glider found, 3.7B cells long   conwaylife.com/forums/vie... · Posted by u/nooks
dkural · 15 days ago
Only about 1.5% of the human genome is protein coding. The human genome is about 3 billion base pairs long.
2snakes · 14 days ago
Also share about 60 percent with bananas.
2snakes commented on Reverse math shows why hard problems are hard   quantamagazine.org/revers... · Posted by u/gsf_emergency_6
shevy-java · 16 days ago
I recently had, for various reasons, to improve my math skills.

I was surprised at how difficult I found math. Now, I was never really great at math; logic and mental calculation I could do fairly well (above average), but foundational knowledge was hard and mathematical theory even harder. Yet now I even had trouble with integration and differentiation, and even with understanding a problem well enough to put it down as a formula. I am far from the youngest anymore, but I was surprised at how shockingly bad I have become over the last 25+ years. So I decided to change this in the coming months. I think in a way computers actually made our brains worse: many problems can be auto-solved (python numpy, sympy etc.), and the computers work better than hand-held calculators, but math is actually surprisingly difficult without a computer. (Here I also include algorithms by the way, or rather the theory behind algorithms. And of course I also forgot a lot of the mathematical notation - somehow programming is a lot easier than higher math.)
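For a concrete taste of the auto-solving the commenter mentions, sympy handles symbolic differentiation and integration in a couple of lines (the function here is an arbitrary example, not one from the comment):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.sin(x)

# Symbolic derivative and antiderivative.
df = sp.diff(f, x)          # product rule, done for you
F = sp.integrate(f, x)      # integration by parts, done for you

# Sanity check: differentiating the antiderivative recovers f.
assert sp.simplify(sp.diff(F, x) - f) == 0
print(df)
```

Which is exactly the point being made: the machine does the calculus flawlessly, and the skill of doing it by hand quietly atrophies.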

2snakes · 16 days ago
Neuroscience suggests that after 40 the changes are in global connectivity rather than in specialized areas. Overall declines don't start until the late 40s, though.
2snakes commented on Solar energy is now the cheapest source of power, study   surrey.ac.uk/news/solar-e... · Posted by u/giuliomagnifico
2snakes · 2 months ago
Does this include the lifetime cost of needing to replace the panels? Batteries too, though I'd imagine those are counted separately. Panel replacement would seem to be a direct additional cost.
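For context, levelized-cost (LCOE) studies normally do fold replacement into lifetime cost: LCOE is discounted lifetime cost divided by discounted lifetime energy. A toy sketch (every number below is a hypothetical placeholder, not a figure from the Surrey study):

```python
def lcoe(capex, annual_opex, annual_kwh, years, discount=0.05,
         replacements=None):
    """Levelized cost of energy: discounted cost / discounted energy."""
    replacements = replacements or {}   # {year: cost}, e.g. a panel swap-out
    cost, energy = float(capex), 0.0
    for y in range(1, years + 1):
        d = (1 + discount) ** y
        cost += (annual_opex + replacements.get(y, 0.0)) / d
        energy += annual_kwh / d
    return cost / energy

# Hypothetical 5 kW array: $8,000 upfront, $100/yr upkeep, ~7,000 kWh/yr,
# with an assumed $3,000 panel replacement in year 15.
print(round(lcoe(8000, 100, 7000, 30, replacements={15: 3000}), 3))
```

Adding the year-15 replacement raises the result, so the question is really whether a given study's assumed panel lifetime matches reality.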
2snakes commented on How has mathematics gotten so abstract?   lcamtuf.substack.com/p/ho... · Posted by u/thadt
Tazerenix · 3 months ago
The practical experience of doing mathematics is actually quite close to a natural science, even if the subject is technically a "formal science" according to the conventional meanings of the terms.

Mathematicians actually do the same thing as scientists: hypothesis building by extensive investigation of examples - looking for examples that probe the boundary of established knowledge, trying to break existing assumptions, etc. The difference comes after that, in the nature of the concluding argument. A scientist performs experiments to validate or refute the hypothesis, establishing scientific proof (a kind of conditional or statistical truth required only to hold up under certain conditions, those upon which the claim was tested). A mathematician finds and writes a proof, or constructs a counterexample.

The failure of logical positivism and the rise of Popperian philosophy show, obviously correctly, that we can't approach that final step in the natural sciences the way we do in maths, but the practical distinction between the subjects is not so clear.

This is all without mentioning the much tighter coupling between the two modes of investigation at the boundary between maths and science, in subjects like theoretical physics. There the line blurs almost completely, and a major tool used by genuine physicists is literally pursuing mathematical consistency in their theories. This has been used to tremendous success (GR, Yang-Mills, the weak force) and with some difficulties (string theory).

————

Einstein understood all this:

> If, then, it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? Nay, more, has this right way any existence outside our illusions? Can we hope to be guided safely by experience at all when there exist theories (such as classical mechanics) which to a large extent do justice to experience, without getting to the root of the matter? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. Our experience hitherto justifies us in believing that nature is the realisation of the simplest conceivable mathematical ideas. I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed. - Albert Einstein

2snakes · 3 months ago
An alternative to abstraction is to use iconic forms and boundary math (containerization and void-based reasoning). See Laws of Form and, more recently, William Bricken's books. Using a unary operator instead of a binary (Boolean) one does indeed seem simpler, in keeping with Nature. Introduction: https://www.frontiersin.org/journals/psychology/articles/10....
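For a flavor of what boundary math looks like executable, here is a minimal sketch (my own illustration, not code from Bricken's books) of the Laws of Form arithmetic: the empty container `()` is the mark, the empty string is the void, and juxtaposed forms share a space:

```python
def parse(s):
    """Parse a string of balanced parens into nested lists of forms."""
    stack, top = [], []
    for ch in s:
        if ch == '(':
            stack.append(top)
            top = []
        elif ch == ')':
            inner, top = top, stack.pop()
            top.append(inner)
    return top

def marked(forms):
    """A space is marked iff it holds a container whose contents are unmarked
    (a container 'crosses' the value of what it contains)."""
    return any(not marked(f) for f in forms)

# Both LoF arithmetic axioms fall out of that single unary rule:
assert marked(parse("()()")) == marked(parse("()"))  # calling:  ()() = ()
assert marked(parse("(())")) == marked(parse(""))    # crossing: (()) = void
```

The whole two-valued arithmetic reduces to one recursive rule on containers, which is the sense in which the unary/containment approach is simpler than a Boolean algebra with separate AND, OR, and NOT.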
2snakes commented on Cache of devices capable of crashing cell network is found in NYC   nytimes.com/2025/09/23/us... · Posted by u/adriand
jacquesm · 3 months ago
I've already narrowed it down to four buildings for you, so we can consider that all of those methods worked. What is your next move?

I'm not saying it can't be done, clearly it can be done otherwise this article wouldn't exist. But it is not quite as easy as pointing a magic wand (aka an antenna) at a highrise and saying '14th floor, apartment on the North-West corner', though that would obviously make for good cinema.

2snakes · 3 months ago
There used to be a thing called Waterwitch in the NSA ANT catalog. Would that help?
2snakes commented on How to become a pure mathematician or statistician (2008)   hbpms.blogspot.com/... · Posted by u/ipnon
pgustafs · 3 months ago
Nah, just study linear algebra (Shilov or Hoffman & Kunze) and baby Rudin. Then read the most famous books in geometry, analysis, and algebra (do proofs + get a mentor). All these roadmap things are meaningless. It’s like “how to join the NBA.” Lift weights, condition, and practice fundamentals. Nothing else matters.
2snakes commented on Fartscroll-Lid: An app that plays fart sounds when opening or closing a MacBook   github.com/iannuttall/far... · Posted by u/gaws
2snakes · 3 months ago
What's next, pr0n sounds? lulz
2snakes commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
orbital-decay · 5 months ago
>I am baffled by seriously intelligent people imbuing almost magical human-like powers to something that - in my mind - is just MatMul with interspersed nonlinearities.

I am baffled by seriously intelligent people imbuing almost magical powers that can never be replicated to something that - in my mind - is just a biological robot driven by an SNN with a bunch of hardwired stuff. Let alone attributing "human intelligence" to a single individual, when it's clearly distributed between biological evolution, social processes, and individuals.

>something that - in my mind - is just MatMul with interspersed nonlinearities

Processes in all huge models (not necessarily LLMs) can be described using very different formalisms, just like Newtonian and Lagrangian mechanics describe the same stuff in physics. You can say that an autoregressive model is a stochastic parrot that learned the input distribution, next token predictor, or that it does progressive pathfinding in a hugely multidimensional space, or pattern matching, or implicit planning, or, or, or... All of these definitions are true, but only some are useful to predict their behavior.
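The "MatMul with interspersed nonlinearities" framing from the quote is literal. Stripped of attention, embeddings, and scale, the core of a next-token predictor can be caricatured in a few lines (all shapes and weights here are toy placeholders, not a real model):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, d_ff = 50, 16, 32

# Toy random weights; in a real LLM these are learned parameters.
W1 = rng.normal(size=(d_ff, d_model))
W2 = rng.normal(size=(vocab, d_ff))

def next_token_logits(h):
    """MatMul, nonlinearity, MatMul - the whole caricature."""
    return W2 @ np.maximum(0.0, W1 @ h)   # ReLU between the two matmuls

h = rng.normal(size=d_model)              # hidden state at current position
logits = next_token_logits(h)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax -> next-token distribution
```

The same few lines support every description in the paragraph above - "next token predictor," "learned input distribution," and so on - which is the point: the formalisms differ in usefulness, not in truth.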

Given all that, I see absolutely no problem with anthropomorphizing an LLM to a certain degree, if it makes it easier to convey the meaning, and do not understand the nitpicking. Yeah, it's not an exact copy of a single Homo Sapiens specimen. Who cares.

2snakes · 5 months ago
There is this thing called Brahman in Hinduism that is interesting to juxtapose when it comes to sentience, and monism.
2snakes commented on Bohemians at the Gate?   inferencemagazine.substac... · Posted by u/surprisetalk
niemandhier · 7 months ago
I live in walking distance from the place the brothers Grimm sourced their version of Snow White.

AI image generators frequently refuse to create illustrations featuring the character; everybody is afraid of Disney.

Similarly, Disney’s Winnie the Puh just looks like Margarete Steiff’s plush bear with a red shirt.

Very often, those who claim to have created an original work themselves just produced derivatives; at the very least, those should not be protected to the detriment of humankind.

2snakes · 7 months ago
Pooh. Winnie the Pooh. <3
2snakes commented on The hidden cost of AI coding   terriblesoftware.org/2025... · Posted by u/Sharpie4679
iamleppert · 8 months ago
There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is the boilerplate, cargo-culted Dockerfile, build-system, and bash-environment-variable-passing circle of hell that I really couldn't care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.

Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, where before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable, and you don't have to bother some grey beard who probably hates you anyway.

2snakes · 8 months ago
I read one characterization of it: LLMs don't give new information (except to the user who is learning), but they do reorganize old information.

u/2snakes

Karma: 92 · Cake day: October 26, 2014