Readit News
Lwepz commented on White House announces $13B to modernize the US power grid   electrek.co/2022/11/18/wh... · Posted by u/mfiguiere
Lwepz · 3 years ago
This is promising. I suppose these funds will also help address the vulnerability of the US power grid to cyberattacks, due to its heavy reliance on a small set of critical nodes.

The following article from early 2022 gives a thorough yet concise overview of the subject: https://semiengineering.com/power-grids-under-attack/

Lwepz commented on How James Clear is writing his next book (2021)   every.to/superorganizers/... · Posted by u/nyc111
Lwepz · 3 years ago
>When you have a big concept in the back of your mind, it becomes a filter that everything you experience runs through

This is so true, to the point where sometimes you feel possessed by the concept and not the other way around. The concept harvests your human experience to make its way into the real world. It defines your notion of signal and noise; it restlessly samples patterns in the real world that might help its incarnation. Sometimes, due to predisposition, part of the concept is hardware-implemented, which means you never really get to experience what it's like to live without this concept driving your life.

The concept carefully arranges your dreams, strikes you with overwhelming visions that feel more real than your clearest memories, and skilfully crafts your personality.

For those who feel as though they are concept integrators, do not allow concepts to mistreat you. They don't belong to our world, they don't care about the totality of human experience, they operate on timescales far greater than that of our precious lives and our civilisation as a whole is still far too primitive to bear their throughput.

Lwepz commented on The Problem with Intelligence   oreilly.com/radar/the-pro... · Posted by u/BerislavLopac
jononomo · 3 years ago
Apparently dogs have been objectively measured to be twice as intelligent as cats: https://www.nationalgeographic.com/animals/article/dog-cat-b...
Lwepz · 3 years ago
Dogs might be better learners and problem-solvers than cats, but your statement doesn't make any sense.
Lwepz commented on Gödel, Escher, Bach: an in-depth explainer   alignmentforum.org/posts/... · Posted by u/behnamoh
Lwepz · 3 years ago
For those interested, here is a talk by Gemma De las Cuevas that tackles these fascinating tangled hierarchies: https://www.youtube.com/watch?v=0Q2gF1PImZw
Lwepz commented on Be good-argument-driven, not data-driven   twitchard.github.io/posts... · Posted by u/historynops
safety1st · 3 years ago
I would start by simply putting everyone through a course in deductive reasoning at the earliest age possible: https://en.wikipedia.org/wiki/Deductive_reasoning

From there you can go into the whole spectrum of critical thinking approaches, and then on to what's basically the liberal arts e.g. philosophy, social sciences etc. as you desire. But the value you get from all of those things depends heavily on the framework you have for thinking about them going in.

Claiming random things are "fake news" would be a lot harder if people could work out what is and isn't fake by themselves!

Lwepz · 3 years ago
>I would start by simply putting everyone through a course in deductive reasoning at the earliest age possible

Indeed. This would help ensure that people's brains' transition functions are stable enough to perform faultless computation. We forget that our brains aren't wired for exact computation; they're wired to perform approximations of computation that are good enough for survival.

As a result, you end up with myriad students who get through the school system via memorization and emergent fuzzy computation.

They reach adulthood without possessing the cognitive tool-set to grasp the subtleties and nuances of the world they live in. The fact that such people are also preyed on by charlatans, ad companies and politicians (the intersection of charlatans and ad companies) obviously doesn't help.

Lwepz commented on Cognitive loads in programming   rpeszek.github.io/posts/2... · Posted by u/ajdude
ABS · 3 years ago
It's going to take quite some time to read it all, since it's long and deserves the time, but since it's soliciting early feedback, here it is: research and quote all the works done over the last 10 or so years by others in this space!!

The topic of cognitive load in software development is far from rarely considered; in fact, it's been somewhat "popular" for several years, depending on which communities and circles you participate in, on- and off-line.

I'm surprised not to find any mention of things like:

- the Team Topologies book by Skelton and Pais, published in 2019, in which they cover the topic. Particularly of note here is that Skelton has a Computer Science BSc and a Neuroscience MSc

- the many, many, many articles, posts, discussions and conference sessions on cognitive load from the same authors and connected people in subsequent years (I'd say 2021 was a particularly rich year for the topic)

- Dan North's sessions, articles and posts from around 2013/2014, in which he talks about code that fits in your head but no more, referencing James Lewis's original... insight. E.g. his GOTO 2014 session "Kicking the Complexity Habit" https://www.youtube.com/watch?v=XqgwHXsQA1g&t=510s. A quick search returns references to it even in articles from 2020: https://martinfowler.com/articles/class-too-large.html

- Rich Hickey's famous 2011 Simple Made Easy talk https://www.infoq.com/presentations/Simple-Made-Easy/

Lwepz · 3 years ago
>research and quote all the works done over the last 10 years or so by researchers in this space!!

I totally understand your point and appreciate you linking those resources; however, I think it's important to remember that the author's post is from a personal blog, not a scientific journal or arXiv.

Perhaps OP would never have posted this if he felt that his "contribution" wasn't novel enough. Additionally, there's a chance that the wording and tone the author used might speak to people who found the articles you mentioned opaque (and vice versa, obviously).

If the author, feeling the urge to write something up, had looked very hard for "prior work" instead of following the flow of their insights gained through experience, perhaps they would've felt compelled to use the same vocabulary as the source, which has its pros (forwarding instead of reinventing knowledge) and cons (propagating opaque terms, self-censoring out of a feeling of incompetence in the face of the almighty researchers).

That's one of the great things about blog posts: being able to write freely without being blamed for incompleteness or omission of prior art.

On a different note, I think this may also highlight the fact that the prior work you mentioned isn't easy enough to find. Perhaps knowledge isn't circulating well enough outside of particular circles.

Lwepz commented on Cognitive loads in programming   rpeszek.github.io/posts/2... · Posted by u/ajdude
Lwepz · 3 years ago
Splendid article.

I was thinking that perhaps walking the readers of our code through our architectural decisions (and not just through what our code does) is a good way to lessen their cognitive load. It helps identify decisions that were taken to look smart, or because the foie gras the author ate that day went down really well with the Chardonnay and made them feel extra stylish.
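To make the idea concrete, here is a minimal sketch (all names invented, not from the article) of recording the "why" of a decision next to the code it shaped, so a reader doesn't have to reverse-engineer the rationale:

```python
from __future__ import annotations

# DECISION: store active sessions in a dict keyed by user id.
# WHY: lookups dominate inserts in this (hypothetical) workload;
#      O(1) average-case lookup beats scanning a list.
# ALTERNATIVE CONSIDERED: sorted list + bisect — rejected as extra
#      complexity for no measurable gain at this scale.
sessions: dict[str, dict] = {}

def start_session(user_id: str, data: dict) -> None:
    """Register a session; overwrites any existing one for the user."""
    sessions[user_id] = data

def find_session(user_id: str) -> dict | None:
    """O(1) average-case lookup — the reason the dict was chosen."""
    return sessions.get(user_id)
```

A reader who disagrees with the trade-off can now argue with the stated rationale instead of guessing whether the author even considered alternatives.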

This also helps us understand how well we know the tools we're using versus how much we do simply through pattern repetition.

Lwepz commented on Software engineering research questions   neverworkintheory.org/202... · Posted by u/pabs3
qsort · 3 years ago
These all stem from the complete lack of empiricism and scientific method in this discipline. I'm pretty sure we all have opinions on most of that stuff. None of which are backed by any evidence whatsoever, we are basically always going with our gut.
Lwepz · 3 years ago
"23. Has anyone ever compared how long it takes to reach a workable level of understanding of a software system with and without UML diagrams or other graphical notations? More generally, is there any correlation between the amount or quality of different kinds of developer-oriented documentation and time-to-understanding, and if so, which kinds of documentation fare best?"

This is such an important question, and it's just the tip of the iceberg of a very deep problem that is rotting our software systems. We are absolutely pathetic at dealing with complexity, and we actually enjoy it. We don't tackle questions such as no. 23 ANYWHERE near as seriously as we should.

Developers overestimate their mental bandwidth, which leads them to pompously build over-complicated tech stacks despite only having archaic tools to mitigate and navigate their complexity.

Companies don't need to hire more devs to deal with their complex software systems; they need better tools to navigate those systems. But because companies don't truly value their money and devs don't truly value their time, we end up where we are now. We should have hundreds of companies investing in initiatives akin to Moldable Development[1]; instead they play the following bingo: 1) let's just hire more devs and hope to land on a 10xer, 2) let's build our own framework.

Additionally, we overvalue specialization. By overloading developers' brains with complex tech stacks, we encourage a culture of specialized profiles who find solace in trivia. In doing so, we limit cross-pollination and stifle true innovation. This attitude is actively killing off thousands of valuable ideas. Every second, there's a coder out there who thinks of something wild that requires very specific tools from different fields, and who finds out that the people who built those tools couldn't be bothered to make them accessible, under a sensible time budget, to people outside their niche/ivory tower. So the dev either drops the idea or gets sucked into a niche.

This is tragic, but hey look! We have a new (totally not low-hanging-fruit that could have been predicted 10 years ago) Generative Model, WOW! "What a time to be alive"!

[1] https://moldabledevelopment.com/

Lwepz commented on Stable Diffusion animation   replicate.com/andreasjans... · Posted by u/gcollard-
Lwepz · 3 years ago
It's clear that the next frontier is transitions in 3D space instead of image space. Language itself is very static, and action verbs are not enough to specify scene dynamics. I suppose we would need: A. an enriched version of natural language that refines the dynamic processes occurring in a scene, and B. a dataset of isolated processes labeled in the language described in A.
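As a rough illustration of what A and B could look like (every field name here is invented, purely a sketch of the idea, not an existing format): A would pin down a motion more precisely than an action verb alone, and B would pair a prompt with those machine-readable process labels.

```python
from dataclasses import dataclass, field

# Point A (hypothetical): an "enriched language" record that refines
# the dynamics an action verb only gestures at.
@dataclass
class MotionSpec:
    subject: str            # entity performing the motion
    verb: str               # coarse natural-language action
    trajectory: str         # e.g. "arc", "linear", "orbit"
    duration_s: float       # how long the process takes
    easing: str = "linear"  # velocity profile over the duration

# Point B (hypothetical): a dataset entry pairing a prompt with
# isolated, labeled processes.
@dataclass
class SceneSample:
    prompt: str
    motions: list[MotionSpec] = field(default_factory=list)

sample = SceneSample(
    prompt="a ball bounces across the room",
    motions=[MotionSpec("ball", "bounce", trajectory="arc",
                        duration_s=2.0, easing="ease-out")],
)
```

The point is that "bounce" alone under-specifies the scene; the extra structured fields are what a 3D-space transition model would actually need to consume.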

I've had a hard time finding ongoing work on A and B; perhaps it isn't much of a priority for research groups.
