wnmurphy commented on Best Buy and Target CEOs say prices are about to go up because of tariffs   theverge.com/news/624254/... · Posted by u/aaronbrethorst
MrMcCall · 6 months ago
"It used to be simple: vote for the guy you liked the most. Then it became vote against the guy you disliked the most. Now it's vote for who you dislike the least." --A Whitney Brown

I remember seeing that on SNL in the 80s; it has always stuck with me.

wnmurphy · 6 months ago
I heard he used to be _The_ Whitney Brown.
wnmurphy commented on Time Warp: Delayed-choice quantum erasure   drgblackwell.substack.com... · Posted by u/Gnarl
wnmurphy · 6 months ago
Our understanding of the world is overfit to the macro level, where we project concepts onto experience to create the illusion of discrete objects, which is evolutionarily beneficial.

However, at the quantum level, identity is not bound to space or time. When you split a photon into an entangled pair, those "two" photons are still identical. It's a bit like slicing a flatworm into two parts, which then yields (we think) two separate new flatworms... but they're actually still the same flatworm.

Experiments like this are surprising precisely because they break our assumption that identity is bound to a discrete object, which is located at a single space, at a single time.

wnmurphy commented on A new proposal for how mind emerges from matter   noemamag.com/a-radical-ne... · Posted by u/Hooke
h0l0cube · 6 months ago
You could probably s/LLM/human/ in your comment. Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs. In between there may be thoughts, like 'I've no more to say', or 'Shut your mouth before they figure you out'. The question is, how is it that humans are not deterministic computers? And if the answer is that actually they are, then what differs between LLMs and actual intelligence?
wnmurphy · 6 months ago
> Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs.

This metaphor of the pachinko machine (or Plinko board) is exactly how I explain LLMs/ML to laypeople. Training is the process of discovering, through trial and error, the right setting for each peg on the board so that the ball consistently lands in roughly the right spot.
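
A toy sketch of that framing in TypeScript (nothing from a real ML library; the names and the trial-and-error rule are just illustrative): each "peg" is a weight, and training keeps whichever random nudges make the ball land closer to the target slot.

  // Each "peg" deflects the input a little; the prediction is where the ball lands.
  function predict(pegs: number[], input: number[]): number {
    return pegs.reduce((sum, w, i) => sum + w * input[i], 0);
  }

  // Trial and error: nudge one peg at random, keep the nudge only if the
  // balls land closer to the right slots (i.e. squared error drops).
  function train(inputs: number[][], targets: number[], steps = 10_000): number[] {
    let pegs = inputs[0].map(() => Math.random());
    let bestError = Infinity;
    for (let s = 0; s < steps; s++) {
      const candidate = [...pegs];
      const i = Math.floor(Math.random() * candidate.length);
      candidate[i] += (Math.random() - 0.5) * 0.1;
      const error = inputs.reduce(
        (e, x, k) => e + (predict(candidate, x) - targets[k]) ** 2,
        0,
      );
      if (error < bestError) {
        bestError = error;
        pegs = candidate;
      }
    }
    return pegs;
  }

  // e.g. train([[1, 0], [0, 1]], [2, 3]) converges toward pegs ≈ [2, 3]

Real training uses gradients to adjust all the pegs at once rather than random nudges, but the trial-and-error framing captures the spirit of it.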

wnmurphy commented on Norepinephrine-mediated slow vasomotion drives glymphatic clearance during sleep   cell.com/cell/abstract/S0... · Posted by u/Jimmc414
no-dr-onboard · 8 months ago
Does anyone else read the title and only recognize ~6 of the 9 words? I've got no clue what this is about before clicking on it.
wnmurphy · 8 months ago
I recognized the word "glymphatic" from recent articles about the discovery of the brain's self-cleaning system, and understood from the headline that the authors had identified norepinephrine as the driver of that clearance mechanism during sleep.
wnmurphy commented on AI Engineer Reading List   latent.space/p/2025-paper... · Posted by u/ingve
lolinder · 8 months ago
> I don't know what an "AI Engineer" is, but, is reading research papers actually necessary

Let's put it this way: if even half the people who call themselves "AI Engineers" would read the research in the field, we'd have a lot less hype and a lot more success in finding the actual useful applications of this technology. As is, most "AI Engineers" assume the same thing you do and consider "AI Engineering" to be "I know how to plug this black box into this other black box and return the result as JSON! Pay me!". Meanwhile most AI startups are doomed from the start because what they set out to do is known to be a bad fit.

wnmurphy · 8 months ago
> I know how to plug this black box into this other black box and return the result as JSON!

To be fair, most of software engineering is this.

wnmurphy commented on Meta Wants More AI Bots on Facebook and Instagram   nymag.com/intelligencer/a... · Posted by u/thm
wnmurphy · 8 months ago
At some point, the majority of the commenters we're interacting with will actually just be generative AI, and we'll have no idea.

Related: I've found that the internet becomes significantly better when I use a Chrome extension to hide all comment sections. Comments are by far the most significant source of toxicity.

wnmurphy commented on Cognitive load is what matters   minds.md/zakirullin/cogni... · Posted by u/zdw
wnmurphy · 8 months ago
> Introduce intermediate variables with meaningful names

Abstracting chunks of compound conditionals into easy-to-read variables is one of my favorite techniques. Underrated.

> isValid = val > someConstant

> isAllowed = condition2 || condition3

> isSecure = condition4 && !condition5

> if isValid && isAllowed && isSecure { //...
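
A small sketch of the same pattern with concrete names (all hypothetical), so the final branch reads like a sentence:

  const MIN_WORDS = 100; // hypothetical threshold

  function canPublish(wordCount: number, role: string, hasToken: boolean, isBanned: boolean): boolean {
    // each intermediate variable names one chunk of the compound conditional
    const isValid = wordCount > MIN_WORDS;
    const isAllowed = role === "editor" || role === "admin";
    const isSecure = hasToken && !isBanned;
    return isValid && isAllowed && isSecure;
  }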

wnmurphy commented on Tesla shuts down Cybertruck production for days at critical time for the company   electrek.co/2024/12/02/te... · Posted by u/tedd4u
wnmurphy · 9 months ago
The very first sentence is highly biased and draws an invalid conclusion. They attribute the shutdown to weak vehicle demand, without any basis.

It's likely due to retooling for a planned lower-cost trim.

Electrek mostly writes misleading articles like this.

wnmurphy commented on Language is not essential for the cognitive processes that underlie thought   scientificamerican.com/ar... · Posted by u/orcul
wnmurphy · a year ago
I think the argument is over whether "thought" only applies to conscious articulation or whether non-linguistic, non-symbolic processes also qualify.

We only consciously "know" something when we represent it with symbols. There are also unconscious processes that some would consider "thought", like driving a car safely without thinking about what you're doing, but I wouldn't consider those thoughts.

I find an interesting parallel to Chain of Thought techniques with LLMs. I personally don't (consciously) know what I think until I articulate it.

To me this is similar to giving an LLM space to print out intermediate thoughts, like a JSON array of strings. Language is our programming language, in a sense. Until we represent something with a word/concept, it doesn't exist for us.

"Ich vermute, dass wir nur sehen, was wir kennen." - Nietzsche, where "know" refers to labeling something by projecting a concept/word onto it.

wnmurphy commented on Unit tests as documentation   thecoder.cafe/p/unit-test... · Posted by u/thunderbong
gus_leonel · a year ago
Test names should be sentences: https://bitfieldconsulting.com/posts/test-names
wnmurphy · a year ago
100%. Test names should include the words "should" and "when". Then you get a description of the expected behavior.
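
For example, in Jest-style syntax (applyDiscount is a hypothetical function, defined inline so the snippet stands alone), each test name is a sentence describing the expected behavior and its condition:

  type Coupon = { code: string; expired: boolean };

  // Hypothetical function under test, included so the example is self-contained.
  function applyDiscount(price: number, coupon: Coupon): number {
    return coupon.expired ? price : price * 0.9;
  }

  describe("applyDiscount", () => {
    it("should return the original price when the coupon is expired", () => {
      expect(applyDiscount(100, { code: "SAVE10", expired: true })).toBe(100);
    });

    it("should reduce the price by 10% when the coupon is valid", () => {
      expect(applyDiscount(100, { code: "SAVE10", expired: false })).toBeCloseTo(90);
    });
  });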
