Readit News
Lxr commented on Ask HN: Show me your half baked project · Posted by u/notoriousarun
Lxr · 5 years ago
I’m working on https://www.dragondictionary.com, a Chinese-English dictionary aimed at Chinese learners. I am building it because there’s no good desktop alternative to Pleco at the moment.
Lxr commented on The Dunning-Kruger Effect Is Probably Not Real   mcgill.ca/oss/article/cri... · Posted by u/ingve
Lxr · 5 years ago
The article doesn't explain what is actually going on to produce the second chart, but here is a guess. Assume people have a true ability x, an estimated ability y which is x plus some unbiased noise, and a test score z which is also x plus some unbiased noise. If you take a sample and collect the lowest quartile of test scores z, the average of the estimated abilities y in that group will be higher than the average z, because each y is centered on x rather than on z. By collecting the lowest quartile of test scores, you didn't just get the dumbest people; you also got the people who happened to perform badly on the test that day. Those people may be estimating their ability accurately, in which case the estimate would be higher than their randomly bad performance that day.

This seems like too obvious a mistake to not have been noticed for this long though.
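A quick simulation sketch of that argument (Python; the sample size, noise scales, and variable names are illustrative assumptions, not taken from the article):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)        # true ability
    y = x + rng.normal(size=n)    # self-estimated ability: unbiased noise around x
    z = x + rng.normal(size=n)    # test score: independent unbiased noise around x

    # Convert to percentile ranks so both axes are comparable,
    # as in the usual Dunning-Kruger chart.
    def pct_rank(a):
        return a.argsort().argsort() / (len(a) - 1) * 100

    y_pct, z_pct = pct_rank(y), pct_rank(z)

    # Bucket people by test-score quartile and compare group means.
    quartile = np.digitize(z_pct, [25, 50, 75])
    for q in range(4):
        m = quartile == q
        print(f"Q{q + 1}: mean test percentile {z_pct[m].mean():5.1f}, "
              f"mean self-estimate percentile {y_pct[m].mean():5.1f}")

    # The bottom quartile's self-estimates come out well above its test
    # scores (and the top quartile's below), even though every estimate
    # is unbiased around true ability.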

Lxr commented on Deep learning job postings have collapsed in the past six months   twitter.com/fchollet/stat... · Posted by u/bpesquet
eric_b · 5 years ago
I've worked in lots of big corps as a consultant. Every one raced to harness the power of "big data" ~7 years ago. They couldn't hire or spend money fast enough. And for their investment they (mostly) got nothing. The few that managed to bludgeon their map/reduce clusters into submission and get actionable insights discovered... they paid more to get those insights than they were worth!

I think this same thing is happening with ML. It was a hiring bonanza. Every big corp wanted to get an ML/AI strategy in place. They were forcing ML into places it didn't (and may never) belong. This "recession" is mostly COVID-related, I think, but companies will discover that ML is (for the vast majority) a shiny object with no discernible ROI. Like Big Data, I think we'll see a few companies execute well and actually get some value, while most will just jump to the next shiny thing in a year or two.

Lxr · 5 years ago
ML is a shiny object with often no discernible ROI but occasionally very large ROI, and companies are understandably nervous about missing out. Spending a small amount to hedge their bets isn't necessarily irrational.
Lxr commented on Why are some things darker when wet?   aryankashyap.com/why-are-... · Posted by u/aryankashyap
rahuldottech · 6 years ago
Lxr · 6 years ago
This seems like a better explanation, because the effect is stronger for things that absorb water than for things where the water sits on top.
Lxr commented on Auto-Antonyms   fun-with-words.com/nym_au... · Posted by u/rsj_hn
Grue3 · 6 years ago
"Terrific" used to be a synonym for "terrible", and "awesome" for "awful". Even the word "bad" has a slang meaning of "good".

In Japanese 適当 is supposed to mean "appropriate, proper", but in practice it almost always means "unserious, sloppy, careless". Not sure how that came about.

Lxr · 6 years ago
Similar to 厉害 in Chinese: it often translates as "terrible," but it's mostly used to mean "great."
Lxr commented on Can I Email: ‘Can I Use’ for email   caniemail.com/... · Posted by u/heidijavi
Lxr · 6 years ago
Why use HTML in emails though?

u/Lxr

Karma: 976 · Cake day: April 30, 2016
About
andrew at wrigley . io

- Full time trader (appliedpredictionmarkets.com)

- Part time Chinese learner and maintainer of dragondictionary.com.
