Readit News
somethingsome commented on Stanford to continue legacy admissions and withdraw from Cal Grants   forbes.com/sites/michaelt... · Posted by u/hhs
ungreased0675 · 15 days ago
I taught a critical thinking course to junior analysts in my organization. I did not observe any correlation between people with college degrees and critical thinking skills. If anything, people with life experience (multiple previous jobs) seemed to come in with higher critical thinking skills.
somethingsome · 15 days ago
I would be interested in looking at such a course; all the ones I saw were pretty dull.

I found that the most systematic way of teaching critical thinking is to do a lot of maths.

somethingsome commented on Leonardo Chiariglione – Co-founder of MPEG   leonardo.chiariglione.org... · Posted by u/eggspurt
thinkingQueen · 18 days ago
Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).

Not to mention the computer clusters to run all the coding sims, thousands and thousands of CPUs are needed per research team.

People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.

MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.

MPEG was reorganized because this Leonardo guy became an obstacle, and he's been angry about it ever since. Other than that, I'd say it's business as usual in the video coding realm.

somethingsome · 18 days ago
Hey, I attend MPEG regularly (mostly lvc lately), there's a chance we’ve crossed paths!
somethingsome commented on Belgium bans Internet Archive's ‘Open Library’   torrentfreak.com/belgium-... · Posted by u/gslin
RamblingCTO · 24 days ago
Hot take: it's the right decision. Why? I'm scoping my opinion on copyrighted material only. A state that is a lawful state follows its own laws. That includes copyright laws most of us don't like. So yes, I think that was the right decision for this court (as it's not the constitutional court of Belgium afaict) as it holds up the laws of Belgium and defends the rights of copyright holders (which can be individuals as well, not only evil corporation extortionists).

I know that there is a slippery slope here, but we need to change laws and make systems that are resilient against censorship. That's the only long term solution imho.

Let's be honest: it's piracy. They are not banning books. They're fighting illegal distribution. Just use a VPN and pirate the books. We gotta be honest to ourselves here.

somethingsome · 24 days ago
In Belgium, it is already extremely difficult to buy the books you want if they are not popular ones.

I need to import many of my books from America through resellers and pay heavy duties.

Sometimes a book that costs $20 is sold for more than $200.

somethingsome commented on Belgium bans Internet Archive's ‘Open Library’   torrentfreak.com/belgium-... · Posted by u/gslin
Insanity · 24 days ago
As a Belgian, sad to see this. I rarely follow any news from Belgium (not living there anymore) so I'm somewhat unaware of what's happening in the tech landscape, but this does surprise me.

Curiously - I tried to find any news on this from Belgian sources, but couldn't find it (in my quick search).

somethingsome · 24 days ago
Belgium is getting worse and worse: new laws all the time, very oppressive ones, with much more surveillance and far fewer liberties.
somethingsome commented on Show HN: I built an AI that turns any book into a text adventure game   kathaaverse.com/... · Posted by u/rcrKnight
somethingsome · a month ago
Aaah, I wanted to try the Infinite Napkin and see what happens with a maths book; it should be interesting. But I didn't find a way to upload the PDF, and I don't have an API key.
somethingsome commented on Simplify, then add delightness: On designing for children   shaneosullivan.wordpress.... · Posted by u/shaneos
darkwater · a month ago
Learning can and should be also through practice and raising the bar, I agree, but don't mix learning as a general concept applied to a population with your own survivorship bias.
somethingsome · a month ago
I think both kinds of applications should exist in parallel. I generally dislike the current trend of professional software trying to be easy to use at the cost of less power, or of more clicks to perform a simple action.

Professional apps should stay professional, and more time should be spent training power users.

I'm trying not to mix in my own survivorship bias, but I tend to believe that current design trends prevent advanced users from emerging at a young age. The applications are so polished and limiting that you don't spend time trying to do complex things with them.

I find that bugs in old apps were a feature for learning: if something doesn't work, you try to understand why. Curiosity is intrinsic to young children, until we remove it by giving them something that never breaks and never limits their possibilities.

somethingsome commented on Simplify, then add delightness: On designing for children   shaneosullivan.wordpress.... · Posted by u/shaneos
shaneos · a month ago
Author here. I find this to be a pretty cynical take. I tried to express that if I build something and it makes my kids smile, then it stays in the app. You appear to have a different take on it. Should we not try to make children enjoy using the tools they use? What's the alternative? Make your app actively hostile and difficult so they'll go touch grass? I'm honestly not clear on the point you're making here.
somethingsome · a month ago
Struggle is very good for learning. I remember as a child enjoying very difficult interfaces because I was proud of being able to navigate the software when I finally got there, becoming very efficient with it, and I learned much more about the domain. (I'm thinking, for example, of Cubase, Reason, Photoshop at the time, most Linux software, vim, ...)

While easy-to-use software is more 'enjoyable' and the dopamine reward is high for small actions, it also prevents developing the ability and resilience to navigate harder things. When software complexity increases, users get annoyed and don't use it correctly, because they were never exposed to much complexity before. (I'm thinking of medical software that requires many, many actions to encode a patient, finance software, etc.)

Now, not all software is made to improve one's efficiency. In your case, the app allows a child to express their creativity in other ways, which is also very valuable. So I think it is good that the interface and the interaction are easy; the focus should be on the creativity and not on the manipulation.

somethingsome commented on Simplify, then add delightness: On designing for children   shaneosullivan.wordpress.... · Posted by u/shaneos
jstummbillig · a month ago
I would put another spin on it: we place more value on not being hostile to readers and users in general. For example, I noticed that the good papers are less horribly written now than they were in the past. In academia, being difficult to read and understand used to be a sign of sophistication (but more realistically it served as a way to cover up bad thinking and overall slow down progress). Today, people are actually willing to point it out and treat it with the little patience this nonsense deserves.

That is not to say that complexity does not have merits, but I say let the pendulum swing. I think we could do with a lot less in most areas still.

somethingsome · a month ago
I find that papers nowadays contain far less content than before. Yes, the writing is easier to read, but the page count didn't increase, which means there is less information per page now.

A scientific paper is written for a specific audience: experts in that field. When you read many papers, 'easy writing' is very annoying, because you need to grasp the meat of the paper rapidly, not be introduced to your own subject again and again. Now it's more difficult to find the details you need, if they're even written down.

It maybe makes the job of a PhD student easier when they start studying the field, but I think we lost something there.

Not all fields are equal. Deep learning papers are very easy to read, but also very annoying: too many repetitions of material explained in other papers. I don't need to read for the 100th time what NeRF is, only what is different in this paper compared to previous ones. Many mathematical papers, by contrast, are much denser and target the intended audience.

Increasing the page count is not really a solution either: it is a burden for the writer to keep writing easy material, and for the reader, who never finds the interesting parts.

On the other hand, when I read a paper that is not in my field, I appreciate the easy-to-read style.

I think papers should return to dense reading for experts, but authors should also maintain blogs where the paper is simplified, and those blogs should be included in the evaluation of a PhD. In this way, if you are an expert, you get the interesting parts, and at the same time, if it is not your field, you can be introduced to it through many good blogs.

somethingsome commented on Use Your Type System   dzombak.com/blog/2025/07/... · Posted by u/ingve
Mawr · a month ago
If I understood the problem correctly, you should try calculating each format of the data once and reusing it. Something like:

    type ID struct {
        AsString   string
        AsInt      int
        AsWhatever Whatever
    }

    func NewID() ID {
        return ID{
            AsString:   calculateAsString(),
            AsInt:      calculateAsInt(),
            AsWhatever: calculateAsWhatever(),
        }
    }
This does assume every representation will always be used, but if that's not the case it's a matter of using some manner of a generic only-once executor, like Go's sync.Once.
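The `sync.Once` idea can be sketched with Go 1.21's `sync.OnceValue`, which computes each representation lazily and at most once. This is a minimal sketch under my own assumptions; the field names and the `strconv` stand-in conversion are illustrative, not from the thread:

```go
package main

import (
	"fmt"
	"strconv"
	"sync"
)

// ID exposes each representation lazily; each conversion runs at most once,
// so representations that are never requested are never computed.
type ID struct {
	asString func() string
	asInt    func() int
}

func NewID(raw string) *ID {
	return &ID{
		asString: sync.OnceValue(func() string { return raw }),
		asInt: sync.OnceValue(func() int {
			n, _ := strconv.Atoi(raw) // stand-in for an expensive conversion
			return n
		}),
	}
}

func (id *ID) AsString() string { return id.asString() }
func (id *ID) AsInt() int       { return id.asInt() }

func main() {
	id := NewID("42")
	fmt.Println(id.AsString(), id.AsInt())
	fmt.Println(id.AsInt()) // second call reuses the cached value
}
```

`OnceValue` also handles concurrent callers, so two goroutines asking for the same representation at once will trigger only one conversion.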

somethingsome · a month ago
But the data changes very often, in place, as functions are called on it.

I agree that would be a good solution, even though my data is huge, but it assumes the data doesn't change, or doesn't change much.

somethingsome commented on Use Your Type System   dzombak.com/blog/2025/07/... · Posted by u/ingve
tomtom1337 · a month ago
Why would it be difficult to monitor the slowness? Wouldn’t a million function calls to the from_F_to_K function be very noticeable when profiling?

In your case about swapping between image representations: let's say you're doing an FFT to transform between real and reciprocal representations of an image. You probably have to do that transformation in order to do the work you need in reciprocal space. There's no getting around it. Or am I misunderstanding?

Please don’t take my response as criticism, I’m genuinely interested here, and enjoying the discussion.

somethingsome · a month ago
I have many functions, written by many scientists in a single piece of software over many years; some expect one data format, others another. It's not always the same function that is called, but all the functions could have been written using a single data format. However, the scientists chose the data format when writing their functions based on the application at hand at that moment and on the possible acceleration of their algorithms with the selected data structure.

When I tried to refactor using types, this kind of problem became obvious, and the refactor forced more conversions than intended.

So I'm really curious, because apart from rewriting everything, I don't see how to avoid this problem. It's more natural for some applications to use data format 1 and for others data format 2, and forcing one over the other would make the application slow.

The problem arises only in 'hybrid' pipelines, when a new scientist needs to use some existing functions, some of them in the first data format and the others in the second.

As a simple example, you can represent rotations in software in many ways: some will use matrix multiplication, some Euler angles, some quaternions, some geometric algebra. Which one works best depends on the application at hand, as it maps better to the mental model of the current application. For example, geometric algebra is much better for thinking about a problem, but sometimes Euler angles are what a physical sensor outputs. So some scientists will use the first, and others the second. (Of course, these particular conversions are fairly trivial and we don't care much, but suppose each conversion is very expensive for one reason or another.)

I didn't find it a criticism :)

u/somethingsome

Karma: 205 · Cake day: June 13, 2020
About
Contact me at somethingsomehn@protonmail.com