the ikea effect in action...
I don't think it applies in this case, however. There is no explicit cognitive bias at play. There would be if he considered his job as good as or better than a plumber's (although actually, unless he has a very particular toilet setup, doing a good job after meticulous research and preparation is not out of reach at all).
In any case, even doing an OK job that is definitely worse than a plumber's but holds (and will presumably keep holding) is still a win if you're short on money but can spare the time/motivation.
Also, granted you can, the instinctive reaction is IMO to try to get out, not to push against the wall.
Seems to me it's not such a simple problem to solve unless you are OK with (i) some level of discomfort, and (ii) getting pets stuck inside and traumatised for life (granted, the device does stop its movement before crushing them).
Code can be studied and improved gradually. Since it guarantees repeatable results, you can try tweaking parameters and measuring the outcome to achieve even better results. It's a bit like taking the highest-skilled workers, merging their knowledge, and extending their lifespan indefinitely so that they can continuously improve.
The only disadvantage I can see is that you're skewing the playing field in favor of those who have access to the most data and, by reducing the workforce size, making it difficult for someone else to obtain the same data.
In the end, a skilled worker can leave a company and start their own, or join a rival one. An algorithm can't do that, which further entrenches a select few companies' positions, thus locking knowledge away from humankind. Companies probably contribute to the loss of information over time (see lost knowledge: Roman cement, massive bronze castings, etc.), as free-flowing information is against their interests in most cases.
I'd like to add a little context around the quote [0], because I do think it makes another issue very apparent:
> Building a dam requires knowledge and skill developed through years of experience. Obayashi's automated system is expected to be a game-changer in dam construction, as well as in other applications.
> "By transferring expert techniques to machines, we're able to analyze what was once implicit knowledge," said Akira Naito, head of Obayashi's dam technology unit.
> Every process for constructing the 334-meter-wide dam will involve some form of automation. That includes the initial work of establishing the foundation, and pouring concrete to form the body.
Implicit knowledge here refers to dam builders' workmanship and experience. It is empirically constructed knowledge which is "stored" in the workman's mind as instincts, concepts, know-how... While these may "be analyzed" and re-used for automation purposes, I have very strong doubts whether such implicit knowledge can be truly understood and translated (be it into code or any other medium), even when that is the precise goal. Precisely because the knowledge is implicit and applies situationally, one may never be able to grasp it all without some kind of very complex knowledge-transfer setup that covers a broad range of situations, allowing all of the patterns to emerge and be identified.
Enough knowledge may be gathered that a company could successfully apply it to dam building by machines. But as automation replaces dam builders, chances are that the portion of the extracted knowledge which is not necessary to the company will join the expertise that never transferred from the workmen's minds to the company, to lie forever in the land of lost knowledge (...until it is found again through experience).
The rest will most likely be preserved "forever" through more or less obscure patents and "hard", explicit knowledge in the company's hands.
[0] https://asia.nikkei.com/Business/Engineering-Construction/Da...
The fact that he only lists the formal article as a reference instead of the announcement, video, and accessible blog post by Po-Shen Loh really baffles me.
The original "disclosure" by Po-Shen Loh [0] is much less sensational and gives some context for his work (teaching middle school students). In the formal article, he is also stating that the method is very likely not __new__, but that he wants to popularize it in teaching.
I think, as many other commenters pointed out, that there is no great breakthrough here. However, "his" method may have the advantage of training the intuition of young students by helping them understand, and maybe visualize, the concepts of average and "deviation" (I'm not really sure what to call it in this case).
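For concreteness, here is the idea as I understand it (my own sketch, not Loh's exact write-up): for a monic quadratic, the roots average to -B/2, so write them as that average plus/minus a deviation u and solve for u:

    x^2 + Bx + C = 0,   roots = -B/2 ± u
    (-B/2 + u)(-B/2 - u) = B^2/4 - u^2 = C
    => u = sqrt(B^2/4 - C)

For example, x^2 - 6x + 8 = 0: the roots average to 3, u = sqrt(9 - 8) = 1, so the roots are 2 and 4.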
So your example would be:
    function f() {
        lastTime = Time.now();
    }
And now if we had threads T1 and T2 calling f concurrently, you say this would be a data race? So I'm confused here. There's no bug, so a data race is not a category of defect the way race conditions are? Both your example and the article's example were data races that are not bugs. Is there an example of a data race that is not a race condition yet leads to a bug?
Truly, I'm still not sure why the author felt the need to write a blog post about the conceptual difference between data races and race conditions.
For example, with this line:
a = a + 1;
That's really multiple operations:
1. Read a
2. Add 1 to the value that was read
3. Write the new value back to a's memory location.
And the data race is due to this. If this happened concurrently, for example:
1. T1 reads a, which is 10
2. T2 reads a, which is 10
3. T1 adds 1 to its read value, getting 11
4. T2 adds 1 to its read value, getting 11
5. T1 writes 11 to location a
6. T2 writes 11 to location a
And now if these had been synchronized, a would actually be 12, since you had two things increment it. But due to this data race, a is still at 11 only.
Now if you deconstruct the actual operations performed by a single line of code, as I did here, you see that it is exactly like a race condition.
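To make the lost update observable, here is a minimal Java sketch (my own illustration; the loop only exists so the unlucky interleaving actually occurs in practice):

    public class LostUpdate {
        static int a = 0; // shared mutable state, intentionally unsynchronized

        public static void main(String[] args) throws InterruptedException {
            Runnable inc = () -> {
                for (int i = 0; i < 100_000; i++) {
                    a = a + 1; // three steps: read a, add 1, write back
                }
            };
            Thread t1 = new Thread(inc);
            Thread t2 = new Thread(inc);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Increments get lost to the interleaving above: this
            // usually prints well below the expected 200000.
            System.out.println(a);
        }
    }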
In practice, I think you are almost right. There are some cases which exhibit data races while being free of race conditions, but I think the linked post overstates the importance of differentiating between data races that lead to race conditions and those that do not.
> That's really multiple operations
> [...]
> if you deconstruct the actual operations performed by a single line of code as I did here
It is not a given that the increment operation is actually performed as several individual operations.
All of this is very dependent on the language (interpreted vs compiled, and how it's compiled), on how the concurrency is implemented (managed by the kernel vs software threads), and on the ordering model of the hardware (whether the processor takes the liberty of reordering instructions).
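For instance, the runtime may expose the increment as a single atomic read-modify-write, in which case there are no intermediate steps for another thread to interleave with. A Java sketch of that (my example, not from the post):

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicIncrement {
        // One atomic read-modify-write instead of read / add / write back:
        // other threads never observe a partial update.
        static final AtomicInteger a = new AtomicInteger(10);

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(a::incrementAndGet);
            Thread t2 = new Thread(a::incrementAndGet);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(a.get()); // always 12
        }
    }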
The data races you describe lead to a race condition under the assumption that `a` or a derivative is used in some conditional branching later on. Arguably, this is always the case, otherwise, there would be no point in having this variable in the first place. But this is just one example.
Now for an example of a data race without a race condition which, IMO, is a bit more explicit than the one from the linked post. Imagine the following: we want each of our threads to write a timestamp of its execution into a shared variable, in order to know when the last thread executed. This situation contains a data race by definition, as we do not know which thread will write its value into the shared variable last. However, both values are similar enough for our monitoring needs (i.e. either is correct), and no race condition follows.
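A sketch of that scenario in Java (names are mine; an int is used so each individual write is at least not torn):

    public class LastSeen {
        // Shared and unsynchronized: the concurrent writes below
        // are a data race by definition.
        static int lastRunSecond;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                // ... the thread's actual work would go here ...
                lastRunSecond = (int) (System.currentTimeMillis() / 1000); // racy write
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Whichever write "wins", both values are close enough for
            // monitoring, so the data race causes no race condition.
            System.out.println("last thread ran at t=" + lastRunSecond);
        }
    }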
Is it really the typing speed that limits the amount and quality of code one can write in a day? To me, it's mostly finding things out about the subject area, finding the right APIs, searching through code.
Of course, typing fast enough that you never break your flow is important.