Readit News
movpasd commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
jqpabc123 · 2 days ago
He wants educators to instead teach “how do you think and how do you decompose problems”

Amen! I attend this same church.

My favorite professor in engineering school always gave open book tests.

In the real world of work, everyone has full access to all the available data and information.

Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.

Doing this is called "engineering". And this is what this professor taught.

movpasd · 2 days ago
I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.

(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)

movpasd commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
kleyd · 9 days ago
Current LLMs look a lot like a very advanced 'old brain' to me, while context engineering looks like optimizing the working memory.

What's missing is a part with more plasticity that can work in parallel and bi-directionally interact with the current static models in real-time.

This would mean individually trained models based on their experience so that knowledge is not translated to context, but to weight adjustments.

movpasd · 9 days ago
That's also my view. It's clear that these models are more than pure language algorithms. Somewhere within the hidden layers are real, effective working models of how the world works. But the power of real humans is the ability to learn on-the-fly.

Disclaimer: These are my not-terribly-informed layperson's thoughts :^)

The attention mechanism does seem to give us a certain adaptability (especially in the context of research showing chain-of-thought "hidden reasoning") but I'm not sure that it's enough.

Thing is, earlier language models used recurrent units that could store intermediate data, which would give more of a foothold for these kinds of on-the-fly adjustments. And here is where the theory hits the brick wall of engineering. Transformers are not just a pure machine learning innovation; the key is that they are massively scalable, and my understanding is that part of this comes from the _lack_ of recurrence.

I guess this is where the interest in foundation models comes from. If you could take a codebase as a whole and turn it into effective training data to adjust the weights of an existing, more broadly-trained model, you'd get some of that plasticity back. But is this possible with a single codebase's worth of data?
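
I don't know if it's feasible, but as a toy sketch of what I mean by "knowledge as weight adjustments rather than as context" (plain PyTorch; the TinyLM model and the adapt_to_codebase helper are invented purely for illustration): keep training an existing model's parameters on one codebase's raw bytes, instead of pasting the codebase into its prompt.

    # Toy sketch only: a tiny causal model whose weights we keep adjusting
    # on a single codebase, rather than feeding the codebase in as context.
    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab_size: int = 256, dim: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.head = nn.Linear(dim, vocab_size)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # Causal mask so each position only attends to earlier bytes
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            return self.head(self.layer(self.embed(tokens), src_mask=mask))

    def adapt_to_codebase(model: TinyLM, files: list[bytes], steps: int = 100) -> None:
        """Continue training an already-trained model on one codebase's bytes."""
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
        tokens = torch.tensor(list(b"\n".join(files)), dtype=torch.long).unsqueeze(0)
        for _ in range(steps):
            logits = model(tokens[:, :-1])  # predict each next byte
            loss = nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
            )
            opt.zero_grad()
            loss.backward()
            opt.step()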

Here again we see the power of human intelligence at work: the ability to quite consciously develop new mental models even given very little data. I imagine this is made possible by leaning on very general internal world-models that let us predict the outcomes of even quite complex unseen ("out-of-distribution") situations, and that gives us extra data. It's what we experience as the frustrations and difficulties of the learning process.

movpasd commented on Use Your Type System   dzombak.com/blog/2025/07/... · Posted by u/ingve
presz · a month ago
In TypeScript you can enable this by using branded types, like this:

  type UserId = string & { readonly __tag: unique symbol };

In Python you can use `NewType` from the typing module:

  from typing import NewType
  from uuid import UUID

  UserId = NewType("UserId", UUID)

movpasd · a month ago
In Python 3.12 syntax, you can use

    type UserId = UUID
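
For what it's worth, a quick sketch of how that reads in use (get_user is just a made-up example). One caveat: the 3.12 `type` statement creates an alias, so a checker treats UserId as interchangeable with UUID, unlike the NewType above.

    from uuid import UUID

    type UserId = UUID  # Python 3.12 type-alias statement

    def get_user(user_id: UserId) -> str:
        # To a type checker this is just UUID; the alias is for readability
        return f"looking up {user_id}"

    get_user(UUID("12345678-1234-5678-1234-567812345678"))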

movpasd commented on Parse, Don’t Validate – Some C Safety Tips   lelanthran.com/chap13/con... · Posted by u/lelanthran
myaccountonhn · a month ago
From experience, parsing input into data structures that fit the problem domain once at the "edge" is a good idea. The code becomes a lot more maintainable without a bunch of validation checks scattered all over the place, picking a data structure for the problem at hand usually leads to cleaner solutions, and errors usually show up much earlier and are easier to debug.

From experience, though, I've found that wrapping all data in newtypes adds too much ceremony and boilerplate. If the data can reasonably be expressed as a primitive type, then you might as well express it that way. I can't think of a time when newtype wrapping would have saved me from accidentally skipping validation or passing the wrong data as a parameter. The email example especially is quite weak, with ~30 lines of code just being ceremony around wrapping a string, and most likely the value is just going to be fed as-is to various CRUD operations that cast it back to a string immediately.

Interacting with Haskell/elm libraries that have pervasive use of newtypes everywhere can be painful, especially if they don't give you a way to access the internal data. If a use-case comes up that the library developer didn't account for, then you might have no way of modifying the data and you end up needing to patch the library upstream.

movpasd · a month ago
I think it can be useful to think of the parsing and logic parts both as modules, with the parsing part interfacing with the outside world via unstructured data, and the parsing and logic parts interfacing with each other via structured data, i.e. the validated types.

From that perspective, there is a clear trade-off on the size of the parsing–logic interface. Introducing more granular, safer validated types may give you better functionality, but it forces you to expand that interface and create coupling.

I think there is a middle ground, which is that these safe types should be chunked into larger structures that enforce a range of related invariants and hopefully have some kind of domain meaning. That way, you shrink the conceptual surface area of the interface so that working with it is less painful.
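
To sketch what I mean (Contact and parse_contact are invented names, not from the article): parse once at the edge into one structure that carries a few related invariants, rather than a newtype per field.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Contact:
        """What the logic layer receives; invariants hold by construction."""
        name: str
        email: str  # non-empty and contains "@", guaranteed by parse_contact

    def parse_contact(raw: dict) -> Contact:
        """Runs once at the edge; logic code never re-validates."""
        name = str(raw.get("name", "")).strip()
        email = str(raw.get("email", "")).strip()
        if not name:
            raise ValueError("missing name")
        if "@" not in email:
            raise ValueError(f"not an email address: {email!r}")
        return Contact(name=name, email=email)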

movpasd commented on Caching is an abstraction, not an optimization   buttondown.com/jaffray/ar... · Posted by u/samuel246
necovek · 2 months ago
On top of the other things mentioned (caching always introduces complexity with lifetime tracking, and thus can't make things simple), the article's got it the wrong way around.

When code has abstract interfaces for data access, introducing caching can be simpler (but not simple) by localizing it in the implementation of that abstraction, which may or may not cache.

But it is not an abstraction (you can perfectly well do caching without any abstractions, and it's frequently done exactly that way).

movpasd · 2 months ago
I think you and the article are referring to abstractions over different concerns.

The concern you're talking about is the actual access to the data. My understanding of the article is that it's about how caching algorithms can abstract the concern of minimising retrieval cost.

So in some ways you and the author are coming at it from opposite directions. You're talking about a prior of "disk by default" and saying that a good abstraction lets you insert cache layers above that, whereas for the author the base case is "manually managing the layers of storage".
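
For example (a hypothetical sketch of that first framing, with all the names invented): data access goes through one small interface, and caching lives entirely inside one implementation that wraps another.

    from typing import Protocol

    class Store(Protocol):
        def get(self, key: str) -> bytes: ...

    class DiskStore:
        def __init__(self, root: str) -> None:
            self.root = root

        def get(self, key: str) -> bytes:
            with open(f"{self.root}/{key}", "rb") as f:
                return f.read()

    class CachedStore:
        """Same interface; callers never know whether a cache sits underneath."""
        def __init__(self, inner: Store) -> None:
            self.inner = inner
            self._cache: dict[str, bytes] = {}

        def get(self, key: str) -> bytes:
            if key not in self._cache:
                self._cache[key] = self.inner.get(key)
            return self._cache[key]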

movpasd commented on Is TfL losing the battle against heat on the Victoria line?   swlondoner.co.uk/news/160... · Posted by u/zeristor
OJFord · 3 months ago
You could say things like that with anything in percentages? 100% increase in your pension from 100k to 200k is only 10% (increase, to 20% total) of your target 1M, or whatever.
movpasd · 3 months ago
100k to 200k is a 100% increase in absolute terms, but a 10 percentage point increase relative to your target of 1M. The difference between the example you give and the one in the article is that 0 in the case of your pension meaningfully refers to its emptiness, but in the case of Celsius, 0 has no "emptiness" interpretation.

The equivalent would be saying that going from 600k to 700k was a 100% increase... compared to 500k.

It's not completely meaningless, to be fair. Saying 10°C to 20°C is a 100% increase carries the meaning "it's twice as far from freezing" (kind of like saying Everest is twice as high as Mont Blanc, which really means "its summit is twice as far from sea level").
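
A quick numeric check of that analogy (pct_increase is just a throwaway helper; numbers as above):

    # Percent increase measured relative to an arbitrary "zero" point
    def pct_increase(old: float, new: float, zero: float = 0.0) -> float:
        return 100 * (new - old) / (old - zero)

    print(pct_increase(10, 20))                          # 100.0: "10°C to 20°C is +100%"
    print(pct_increase(600_000, 700_000, zero=500_000))  # 100.0: the same trick with a 500k baseline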

movpasd commented on Is TfL losing the battle against heat on the Victoria line?   swlondoner.co.uk/news/160... · Posted by u/zeristor
strken · 3 months ago
In both it makes a sort of intuitive sense. 7% of the way from freezing to boiling is a meaningful way to visualise temperature; 7% of the way from ice melting in a bath of salt to slightly above Mrs Fahrenheit's armpit temperature is also meaningful, although perhaps a little idiosyncratic.

Edit: this comment was deeply stupid for obvious reasons and I regret trying to interact with other people when I should be asleep.

movpasd · 3 months ago
The issue is that a percentage of a Celsius value is not that. For example, an increase from 1°C to 2°C is a "100% increase", but it covers only 1 percentage point of the span from freezing to boiling.
movpasd commented on Why I no longer have an old-school cert on my HTTPS site   rachelbythebay.com/w/2025... · Posted by u/mcbain
zubspace · 3 months ago
Wouldn't it solve a whole lot of problems if we could just add optional type declarations to JSON? It seems so simple and obvious that I'm kinda dumbfounded that this is not a thing yet. Most of the time you would not need it, but it would prevent the parser from making a wrong guess in all those edge cases.

Probably there are types not every parser/language can accept, but at least it could throw a meaningful error instead of guessing or even truncating the value.

movpasd · 3 months ago
This is actually a deliberate design choice, which the breathtakingly short JSON standard explains quite well [0]. The designers deliberately didn't introduce any semantics and pushed all that to the implementors. I think this is a defensible design goal: if you introduce semantics, you're sure to annoy someone.

There's an element of "worse is better" here [1]. JSON overtook XML exactly because it's so simple: it solves for the social element of communication between disparate projects with wildly different philosophies, much like UNIX byte-oriented I/O streams or C calling conventions do.
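
You can see this in how each parser picks its own number semantics. A small illustration with Python's standard library (the document values are made up):

    import json

    # The JSON grammar only defines a "number"; what it becomes is up to the parser.
    doc = '{"big": 12345678901234567890, "frac": 0.1}'
    print(json.loads(doc))                   # Python keeps the big integer exact
    print(json.loads(doc, parse_float=str))  # and lets callers override how floats are decoded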

---

[0] https://ecma-international.org/publications-and-standards/st...

[1] https://en.wikipedia.org/wiki/Worse_is_better

movpasd commented on Too Much Go Misdirection   flak.tedunangst.com/post/... · Posted by u/todsacerdoti
jchw · 3 months ago
The biggest issue here IMO is the interaction between two things:

- "Upcasting" either to a concrete type or to an interface that implements a specific additional function; e.g. in this case Bytes() would probably be useful

- Wrapper types, like bufio.Reader, that wrap an underlying type.

In isolation, either practice works great and I think they're nice ideas. However, over and over, they're proving to work together poorly. A wrapper type can't easily forward the type it is wrapping for the sake of accessing upcasts, and even if it did, depending on the type of wrapper it might be bad to expose the underlying type, so it has to be done carefully.

So instead this winds up needing to be handled basically for each type hierarchy that needs it, leading to awkward constructions like the Unwrap function for error types (which is very effective but weirder than it sounds, especially because there are two Unwraps) and the ResponseController for ResponseWriter wrappers.

Seems like the language or standard library needs a way to express this situation so that a wrapper can choose to be opaque or transparent and there can be an idiomatic way of exposing this.

movpasd · 3 months ago
I'm not sure I fully understand the issue as I don't know Go, but is this something that a language-level delegation feature could help with?
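
Roughly the kind of thing I have in mind, sketched in Python since I don't know Go (BufferedReader here is a made-up wrapper that forwards anything it doesn't override, so optional capabilities of the wrapped object stay reachable):

    class BufferedReader:
        """Hypothetical wrapper: overrides read() but otherwise delegates."""

        def __init__(self, inner):
            self._inner = inner
            self._buffer = b""

        def read(self, n: int) -> bytes:
            # (real buffering logic elided; just forward for the sketch)
            return self._inner.read(n)

        def __getattr__(self, name):
            # Delegation: anything the wrapper doesn't define falls through,
            # so e.g. an inner object's bytes() method stays accessible.
            return getattr(self._inner, name)
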
movpasd commented on Thoughts on thinking   dcurt.is/thinking... · Posted by u/bradgessler
curl-up · 3 months ago
> The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will.

So the fun, all along, was not in the process of creation itself, but in the fact that the creator could somehow feel superior to others not being able to create? I find this to be a very unhealthy relationship to creativity.

My mixer can mix dough better than I can, but I still enjoy kneading it by hand. The incredibly good artisanal bakery down the street did not reduce my enjoyment of baking, even though I cannot compete with them in quality by any measure. Modern slip casting can make superior pottery by many different quality measures, but potters enjoy throwing it on a wheel and producing unique pieces.

But if your idea of fun is tied to the "no one else can do this but me", then you've been doing it wrong before AI existed.

movpasd · 3 months ago
Sometimes the fun is in creating something useful, as a human, for humans. We want to feel useful to our tribe.

u/movpasd

Karma: 368 · Cake day: January 28, 2023
About
Currently work on mathematical modelling of UK electricity wholesale markets and some data science–adjacent stuff. Theoretical physics background. Interested in systems and programming languages.