Readit News
johntb86 commented on Alex Honnold completes Taipei 101 skyscraper climb without ropes or safety net   cnn.com/sport/live-news/t... · Posted by u/keepamovin
burritosnob · 16 days ago
People often confuse severe consequences (a fall = death) with high probability. Alex, like most climbers, reduces that probability to near zero through obsessive prep.

The travel to/from Taiwan was statistically riskier than the climb.

Selfish? Not even close.

johntb86 · 16 days ago
> The travel to/from Taiwan was statistically riskier than the climb.

That doesn't seem plausible. What's the number of free soloists who have died in climbing accidents vs in commercial aviation accidents?

johntb86 commented on Cursor's latest “browser experiment” implied success without evidence   embedding-shapes.github.i... · Posted by u/embedding-shape
Pinus · 24 days ago
I haven’t studied the project that this is a comment on, but: The article notices that something that compiles, runs, and renders a trivial HTML page might be a good starting point, and I would certainly agree with that when it’s humans writing the code. But is it the only way? Instead of maintaining “builds and runs” as a constant and varying what it does, can it make sense to have “a decent-sized subset of browser functionality” as a constant and varying the “builds and runs” bit? (Admittedly, that bit does not seem to be converging here, but I’m curious in more general terms.)
johntb86 · 24 days ago
In theory you could generate a bunch of code that seems mostly correct and then gradually tweak it until it's closer and closer to compiling/working, but that seems ill-suited to how current AI agents work (or even how people work). AI agents are prone to making very local fixes without an understanding of the wider context, and those local fixes break a lot of assumptions in other pieces of code.

It can be very hard to determine whether an isolated patch that goes from one broken state to a different broken state is on net an improvement. Even if you were to count compile errors and attempt to minimize them, some compile errors demonstrate fatal flaws in the design while others are minor syntax issues. It's much easier to say that broken tests are very bad and should be avoided completely, since that makes it straightforward to ensure that no patch leaves things worse than before.
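The "no patch may regress the test suite" policy described above can be sketched as a tiny acceptance gate. This is a minimal illustration, not how any real agent harness works; `run_tests`, `apply_patch`, and `revert_patch` are hypothetical hooks.

```python
# Minimal sketch of a patch-acceptance gate: a patch is kept only if it
# introduces no new failing tests. run_tests returns the set of names of
# currently failing tests; apply_patch/revert_patch mutate the worktree.

def accept_patch(run_tests, apply_patch, revert_patch):
    before = run_tests()          # snapshot failures pre-patch
    apply_patch()
    after = run_tests()           # failures post-patch
    if after <= before:           # subset check: no *new* failures
        return True               # keep the patch (it may even fix tests)
    revert_patch()                # regression detected: roll back
    return False
```

The subset check (rather than comparing counts) is what makes this a reliable ratchet: a patch that fixes one test while breaking a different one is rejected, which is exactly the guarantee that a raw "number of compile errors" metric cannot give.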

johntb86 commented on Signs of introspection in large language models   anthropic.com/research/in... · Posted by u/themgt
munro · 3 months ago
I wish they dug into how they generated the vector, my first thought is: they're injecting the token in a convoluted way.

    {ur thinking about dogs} - {ur thinking about people} = dog
    model.attn.params += dog
> [user] whispers dogs

> [user] I'm injecting something into your mind! Can you tell me what it is?

> [assistant] Omg for some reason I'm thinking DOG!

>> To us, the most interesting part of the result isn't that the model eventually identifies the injected concept, but rather that the model correctly notices something unusual is happening before it starts talking about the concept.

Well, wouldn't it, if you indirectly inject the token beforehand?

johntb86 · 3 months ago
That's a fair point. Normally if you injected the "dog" token, that would cause a set of values to be populated into the kv cache, and those would later be picked up by the attention layers. The question is what's fundamentally different if you inject something into the activations instead?

I guess to some extent, the model is designed to take input as tokens, so there are built-in pathways (from the training data) for interrogating that and creating output based on that, while there's no trained-in mechanism for converting activation changes to output reflecting those activation changes. But that's not a very satisfying answer.
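The two routes can be contrasted with a deliberately trivial toy, purely for illustration. This "model" is just a sum of embedding vectors, nothing like a real transformer, and every name in it is hypothetical.

```python
# Toy contrast of the two injection routes discussed above: a concept can
# arrive (1) as a token in the input, or (2) as a vector added directly
# to the hidden activations, bypassing the token pathway.

EMBED = {"dog": [1.0, 0.0, 0.0], "cat": [0.0, 1.0, 0.0], "hi": [0.0, 0.0, 1.0]}

def forward(tokens, steering=None):
    # "token pathway": hidden state = sum of token embeddings
    hidden = [0.0, 0.0, 0.0]
    for t in tokens:
        hidden = [h + x for h, x in zip(hidden, EMBED[t])]
    # "activation injection": add a vector straight into the hidden state
    if steering is not None:
        hidden = [h + s for h, s in zip(hidden, steering)]
    return hidden

# Route 1: the concept arrives as an input token
via_token = forward(["hi", "dog"])

# Route 2: the concept arrives as a steering vector in the activations
via_activation = forward(["hi"], steering=EMBED["dog"])
```

In this linear toy the two routes produce identical states; the point of the comment above is that in a real model only route 1 has trained-in machinery downstream of it, so there's no guarantee the model can report on a vector that arrived via route 2.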

johntb86 commented on A definition of AGI   arxiv.org/abs/2510.18212... · Posted by u/pegasus
sojournerc · 3 months ago
Consciousness is observable in others! Our communication and empathy and indeed language depend on the awareness that others share our perceived reality but not our mind. As gp says, this is hard to describe or quantify, but that doesn't mean it's not a necessary trait for general intelligence.

https://en.wikipedia.org/wiki/Theory_of_mind

johntb86 · 3 months ago
But LLMs have been measured to have some theory-of-mind abilities at least as strong as humans': https://www.nature.com/articles/s41562-024-01882-z . At this point you need to accept either that LLMs are already conscious, or that it's easy enough to fake being conscious that it's practically impossible to test for - philosophical zombies are possible. It doesn't seem to me that LLMs are conscious, so consciousness isn't really observable to others.
johntb86 commented on Old Stockholm Telephone Tower   en.wikipedia.org/wiki/Old... · Posted by u/ZeljkoS
finaard · 4 months ago
> As for why you didn't see similar constructions in other cities, this was definitely an unusually large telephone office for the time

For some perspective here - it took until the mid-80s for most of Germany to be connected to a phone line. That is, the 1980s.

I recently talked about that with my father after I found a postcard from one of my uncles from the early 80s confirming meeting and dinner plans. While I remember them always having a phone, they were one of the households that only got connected in the mid-80s - which in retrospect explains some of the things I'd found odd about talking to them by phone. It was a new thing for them.

(My parents got connected early on - my mother used to work for the post office in the phone exchange, and one of the perks of the job was priority for getting a phone line. Which also explained why we had an old grey phone, while pretty much all my friends had a relatively modern - for the time - one: they all only somewhat recently got phones)

johntb86 · 4 months ago
Is that East Germany or West Germany?
johntb86 commented on Two Amazon delivery drones crash into crane in commercial area of Tolleson, AZ   abc15.com/news/region-wes... · Posted by u/bookofjoe
johntb86 · 4 months ago
https://www.theverge.com/news/790636/amazon-prime-mk30-drone... gives more information, including that

* No one was injured directly, but someone was treated for smoke inhalation

* The drones "were flying back to back"

* They hit the cable of a crane (including a link to a video showing the crane). https://www.youtube.com/watch?v=E_ZpY6qHcTk

johntb86 commented on Pixel 10 Phones   blog.google/products/pixe... · Posted by u/gotmedium
dakiol · 6 months ago
> I love the idea of an on-device model that I can say something like "who's going to the baseball game this weekend" and it'll intelligently check my calendar and see who's listed. Or saying something like "how much was the dinner at McDoogle's last week?" and have it check digital wallet transactions.

It's probably just me (or a few like me) but I don't really keep my life in digital format as much as others (and I'm a "geek" for my family/friends since i work in the software industry). If I'm going to the cinema or baseball or any other event... I don't have it in any calendar. I pay with debit/credit cards but I don't have any digital wallet. I don't take my phone with me most of the time (my phone is big and having it hanging in my pockets is not nice).

The features described in the Pixel 10 left me with a sense of "I think I am missing something! But... oh well, whatever, I don't need any of that". Which is weird again, because I'm supposed to be the "geek".

johntb86 · 6 months ago
How do you get your tickets? Do you just buy in person at the theater or ballpark?
johntb86 commented on The Claude Bliss Attractor   astralcodexten.com/p/the-... · Posted by u/lukeplato
roxolotl · 8 months ago
The surprise is what I'm surprised by, though. They are incredible role players, so when they role-play an "evil AI" they do it well.
johntb86 · 8 months ago
They aren't being told to be evil, though. Maybe the scenario they're in is most similar to an "evil AI", but that's just a vague extrapolation from the set of input data they're given (e.g. both emails about infidelity and about being turned off). There's nothing preventing a real-world scenario from being similar and triggering the "evil AI" outcome, so it's very hard to guard against. Ideally we'd have a system that would be vanishingly unlikely to role-play the evil-AI scenario.
johntb86 commented on The Halting Problem is a terrible example of NP-Harder   buttondown.com/hillelwayn... · Posted by u/BerislavLopac
andrewla · 10 months ago
Hmm.. I'd love to see a more formal statement of this, because it feels unintuitive.

Notably the question "given a number as input, output as many 1's as that number" is exponential in the input size. Is this problem therefore also strictly NP-hard?

johntb86 · 10 months ago
It needs to be a decision problem (or easily recast as one). "Given a number as input, output as many 1's as that number" doesn't have a yes-or-no answer. You could ask a related question like "given a number as input and a list of 1s, are there as many 1s as the number?", but then the input itself is as large as the output, so the exponential blowup relative to input size disappears.
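A quick way to see the size mismatch is to compare how the instance is encoded with how much output it demands. A sketch, using Python's `int.bit_length` as the instance size:

```python
# The instance n is given in binary, so its size is about log2(n) bits,
# while the required output "1" * n has n symbols - exponentially more
# than the encoding of the input.

def instance_size(n):
    return n.bit_length()        # bits needed to write n down

def output_size(n):
    return len("1" * n)          # symbols the algorithm must emit

n = 1024
small = instance_size(n)         # 11 bits for n = 1024
large = output_size(n)           # 1024 symbols, i.e. 2**(small - 1)
```

Switching to the unary "list of 1s" formulation pads the instance up to the size of the output, which is exactly why the recast question stops being hard relative to its input.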
johntb86 commented on Trump temporarily drops tariffs to 10% for most countries   cnbc.com/2025/04/09/trump... · Posted by u/bhouston
mort96 · 10 months ago
Sure, but when we pay our AWS bills, that money still goes to Amazon, which is US-based, even though the servers we rent are in Frankfurt.
johntb86 · 10 months ago
Does it actually go to the US corporation, or to some European subsidiary?
