Readit News
dlkf commented on Annual 'winners' for most egregious US healthcare profiteering announced   theguardian.com/us-news/2... · Posted by u/hkmaxpro
rednafi · a year ago
America is a rich country, but the majority of middle-class Americans are poorer than people in the Southeast Asian backwater I’m from. I emigrated to the U.S. in 2022 but left after a year. Life is plenty hard for citizens there, and despite working in tech, I constantly had this fear of needing healthcare in the back of my mind.

I didn’t want to risk bankruptcy because of an insurance denial, so I left quickly. Aside from the dismal state of transportation, unwalkable cities, and a self-sabotaging healthcare system, I actually kind of enjoyed my time there. Now living in Europe, I’m poorer but happier too.

dlkf · a year ago
What country are you referring to? The American middle class is objectively very wealthy: https://www.noahpinion.blog/p/no-the-us-is-not-a-poor-societ...

dlkf commented on Does current AI represent a dead end?   bcs.org/articles-opinion-... · Posted by u/jnord
imtringued · a year ago
The neurons are always learning whereas the matrices don't change.
dlkf · a year ago
I mean, the matrices obviously change during training. I take it your point is that LLMs are trained once and then frozen, whereas humans continuously learn and adapt to their environment. I agree that this is a critical distinction. But it has nothing to do with “meaningful internal structure.”
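
To make the train-then-freeze distinction concrete, here is a minimal sketch (a hypothetical toy linear model in NumPy, nothing from the thread): the weight matrix W is rewritten on every gradient step during training, then only ever read at inference.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear model y = X @ W; the "matrix" here is W.
    X = rng.normal(size=(64, 4))
    true_W = np.array([[1.0], [-2.0], [0.5], [3.0]])
    y = X @ true_W

    W = rng.normal(size=(4, 1))  # initial weights

    # Training: W changes on every gradient step.
    lr = 0.01
    for _ in range(500):
        grad = 2 * X.T @ (X @ W - y) / len(X)  # gradient of mean squared error
        W -= lr * grad

    # Inference: W is frozen; new inputs read it but never modify it.
    x_new = rng.normal(size=(1, 4))
    prediction = x_new @ W

A continuously learning system would keep updating W from feedback after deployment; the frozen model never does.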
dlkf commented on Does current AI represent a dead end?   bcs.org/articles-opinion-... · Posted by u/jnord
gizmo · a year ago
Previous generations of neural nets were kind of useless. Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered. Machine learning had a couple of niche applications but for most things it didn't work.

This time it's different. The naysayers are wrong.

LLMs today can already automate many desk jobs. They already massively boost productivity for people like us on HN. LLMs will certainly get better, faster and cheaper in the coming years. It will take time for society to adapt and for people to realize how to take advantage of AI, but this will happen. It doesn't matter whether you can "test AI in part" or whether you can do "exhaustive whole system testing". It doesn't matter whether AIs are capable of real reasoning or are just good enough at faking it. AI is already incredibly powerful and with improved tooling the limitations will matter much less.

dlkf · a year ago
> Previous generations of neural nets were kind of useless. Spotify ended up replacing their machine learning recommender with a simple system that would just recommend tracks that power listeners had already discovered.

“Previous generations of cars were useless because one guy rode a bike to work.” Pre-transformer neural nets were obviously useful. CNNs and RNNs were SOTA in most vision and audio processing tasks.

dlkf commented on Does current AI represent a dead end?   bcs.org/articles-opinion-... · Posted by u/jnord
singingfish · a year ago
I've been following the whole thing low-key since the second wave of neural networks in the mid '90s, and back then I made a very, very minor contribution to the field, one that still has applications these days.

My observation is that every wave of neural networks has resulted in a dead end. In my view, this is in large part caused by the (inevitable) brute-force mathematical approach used and the fact that this cannot map to any kind of mechanistic explanation of what the ANN is doing in a way that can facilitate intuition. Or, as the article puts it, "Current AI systems have no internal structure that relates meaningfully to their functionality". This is the most important thing. Maybe layers of indirection can fix that, but I kind of doubt it.

I am however quite excited about what LLMs can do to make semantic search much easier, and impressed at how much better they've made the tooling around natural language processing. Nonetheless, I feel I can already see the dead end pretty close ahead.

dlkf · a year ago
> Current AI systems have no internal structure that relates meaningfully to their functionality

In what sense is the relationship between neurons and human function more “meaningful” than the relationship between matrices and LLM function?

You’re correct that LLMs are probably a dead end with respect to AGI, but this is completely the wrong reason.

dlkf commented on More men are addicted to the 'crack cocaine' of the stock market   wsj.com/finance/stocks/st... · Posted by u/thm
epolanski · a year ago
I remember a controversial scamming figure in Italy who made millions by selling lucky numbers for lotteries, or otherwise exploiting people's stupidity to part them from their life savings.

Her motto was "idiots _have_ to be scammed".

While this figure was morally and ethically deplorable, her motto always made me think: it essentially implied that an element of pure natural selection was at play, and that she saw herself as the executor of the oldest balancing act there is, survival of the fittest.

At the same time, as someone who follows places like WSB, seeing people take on life-ruining leverage just for the adrenaline of it, or even worse, just for internet karma, always makes me think of that scammer's words: this is just natural selection doing its thing, as it has been for billions of years. And there's no third party scamming them. They know what they risk, and they do it anyway.

The world is full of physical and non-physical dangers, and we are naturally programmed to push for self-preservation. Yet there are people who willingly and consciously decide to put it all at risk. How can that be anything but natural selection doing its thing?

dlkf · a year ago
Now do drug development.
dlkf commented on OpenAI O3 breakthrough high score on ARC-AGI-PUB   arcprize.org/blog/oai-o3-... · Posted by u/maurycy
zamadatix · a year ago
I don't follow how 10 random humans can beat the average STEM college grad and average humans in that tweet. I suspect it's really "a panel of 10 randomly chosen experts in the space" or something?

I agree the most interesting thing to watch will be cost for a given score more than maximum possible score achieved (not that the latter won't be interesting by any means).

dlkf · a year ago
If you take a vote of 10 random people, then as long as each is right more often than not and their errors are not perfectly correlated, you'll do better than asking one person.

https://en.m.wikipedia.org/wiki/Ensemble_learning
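
For illustration, a quick hypothetical simulation (assuming independent voters who are each right 65% of the time on a yes/no question; the numbers are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_voters, p_correct = 100_000, 10, 0.65

    # Each voter is independently correct with probability p_correct.
    votes = rng.random((n_trials, n_voters)) < p_correct

    single = votes[:, 0].mean()                           # ask one person: ~0.65
    majority = (votes.sum(axis=1) > n_voters / 2).mean()  # strict majority: ~0.75

    print(f"single voter:   {single:.3f}")
    print(f"majority of 10: {majority:.3f}")

Ties (5-5 splits) count against the ensemble here, and the gain shrinks as the voters' errors become more correlated, vanishing entirely when they are perfectly correlated.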

dlkf commented on Why America's economy is soaring ahead of its rivals   ft.com/content/1201f834-6... · Posted by u/kvee
sriram_malhar · a year ago
I like Vinod Khosla's take (on Twitter):

   Stop measuring GDP and instead measure the total income of the bottom 50% of the population and optimize policy for that. Additions and exits from this group can also be tracked. We get what we measure. Will this raise income inequality?

dlkf · a year ago
If we had done this fifty years ago, we'd be having this discussion by snail mail.
dlkf commented on In Praise of Print: Reading Is Essential in an Era of Epistemological Collapse   lithub.com/in-praise-of-p... · Posted by u/bertman
dlkf · a year ago
There is no epistemological collapse. Access to accurate information has never been faster or easier. To be sure, lies are spread on the internet, but people believed all sorts of bullshit before the internet. Those who want to claim there is a crisis don't have a principled argument as to how things are worse.
dlkf commented on In Praise of Print: Reading Is Essential in an Era of Epistemological Collapse   lithub.com/in-praise-of-p... · Posted by u/bertman
sourcepluck · a year ago
Someone mentions Postman below, so I'm tempted to add: can the tech crowd try a bit of Neil Postman, Jean Baudrillard, Guy Debord and the Situationists, Mark Fisher, Marshall McLuhan, and presumably loads of others I don't know about who have done work in these areas, and then maybe Michel Desmurget on the more science-based side of it if they want to avoid any airy-fairy theory.

It's arguably especially wild that Desmurget doesn't get a mention in these discussions. Or, I mean, it would be wild in a world where there was a smooth and effortless flow of good ideas and arguments between people, maybe over some sort of transcontinental network...

A lot of the topics people have opinions about when it comes to screens, devices, health, etc. have loads of studies on them. That doesn't mean everything is solved; there are unexplored and uncertain areas. But reading these discussions you'd think there was no data out there whatsoever. There's tons!

Nor does it mean people can't enjoy sharing opinions; some of the anecdotes are interesting and insightful. But there seem to be a few obvious arguments, basically non-arguments, that get trotted out and that seem to be hindering a more fruitful discussion.

How many times have we seen someone make a point about the bad type of screen use, only for someone else to reply: "yeah, but I use ________ like _________," or "yeah, but when you read books you're being antisocial as well," and so on. The research on the topic distinguishes carefully between the different types of use! Etc., etc.; I could go on.

This comment is intended constructively.

dlkf · a year ago
I implore everyone reading this to google the Sokal hoax before deciding whether these guys are worthwhile.

u/dlkf · Karma: 1223 · Cake day: November 20, 2016