Talk to most EU business owners, even in tech, and you'll find the limiting factor isn't regulation. The claim that it's the #1 reason is such a tired trope.
Ironically, China in some ways has a bigger regulatory burden when it comes to software: there, if the government doesn't approve, the business is dead in the water. I doubt that Klarna would've gotten off the ground there, for one; I could see them being shut down much earlier. In the EU, only now are some governments even starting, very slowly, to talk about weak measures around their business model. But I've never, not once in my life, heard "Chinese software companies can't get off the ground due to the regulatory burden".
The same people who clamor about EU regulations are the ones who hate on the EU for its protectionist measures against US tech. Yet another bout of irony here: China's software industry has flourished exactly thanks to protectionist measures against US tech that are ten times stronger. So has Korea's, and Korea's protectionism has never been anywhere near China's level; it sits somewhere in between the EU and China. No, if there's anything that would help, it's much more tech protectionism in the EU.
Pieter Levels is at the end of the day an influencer, not a serious founder.
What it's terribly good at is adding burdens that the US giants don't face early on, slowing down early growth across 27 fragmented markets. I don't know specifically how China works, but the challenge is proving product-market fit, and for that you need a lot of users fast.
In the EU, it's a different battle in each country, as the media environment, the markets, the regulation, etc. are all fractured.
So when is it gonna kill Google?
Now, everyone who runs a website for a living is doing platform-native content for traffic and pairing it with a newsletter-backed website, or investing outright in brand advertising campaigns, so they still have access to their own audiences without relying on Google to deliver them.
My guess is that we're in for a second wave of Big Aggregators, but it's tough to say what the technological twist behind it will be that makes it more than just a Reddit 2.0.
So while he makes sense, no one wants to discuss his work, because then they would also have to come to a lot of the same conclusions he did: that the global society we have today is a lost cause, and a lot of it needs to be torn down. Which, of course, goes against the status quo.
It's a lot different from the fluffy, weak criticism of many today who recommend making changes that don't change anything. But then at least people reading that stuff can convince themselves that they are doing something, when they are not.
The entire article is saying "it looks kind of like a human in some ways, but people are being fooled!"
You can't really say that without at least attempting to answer the admittedly very deep question of what an authentic human is.
To me, it's intelligent because, much of the time, I can't distinguish its output from a person's output.
It's not a human, because I've compartmentalized ChatGPT into its own box and I'm actively disbelieving. The weak form of this is to say that I don't think my ChatGPT messages are being sent to the third world and answered by a human, though I don't think anyone was claiming that.
But it is also abundantly clear to me that if you stripped away the labels, it acts like a person a lot of the time. Say you were to go back just a few years, maybe to covid times. Let's say OpenAI travels back with me in a time machine and sets up an obscure web chat service where I can write to it.
Back in covid times, I didn't think AI could really do anything outside of a lab, so I would not suspect I was talking to a computer. I would think I was talking to a person. That person would be very knowledgeable and able to answer a lot of questions. What could I possibly ask it that would give away that it wasn't a real person? Lots of people can't answer simple questions, so there isn't really a specific question that would work. In thousands of messages, I've had perhaps one interaction with AI that would make it obvious. (On that occasion, Claude started speaking Chinese with me, which was super weird.)
Another thing I hear from time to time is an argument along the lines of "it just predicts the next word, it doesn't actually understand anything". Rather than an argument against AI being intelligent, isn't this also telling us what "understanding" is? Before we all had computers, how did people judge whether another person understood something? Well, they would ask the person something and the person would respond. One word at a time. If the words were satisfactory, the interviewer would conclude that you understood the topic and call you Doctor.
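For what it's worth, "predicting the next word" mechanically just means a loop: condition on what's been said so far, emit one word, repeat. Here's a minimal sketch using a toy bigram model (the corpus is made up for illustration, and real LLMs are vastly more sophisticated; this only shows the shape of the loop):

    import random
    from collections import defaultdict

    # Toy training data (hypothetical); real models train on far more text.
    corpus = "the patient has a fever the patient needs rest the fever will pass".split()

    # Bigram table: for each word, the words observed to follow it.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start, length=8):
        """Emit one word at a time, each conditioned on the word before it."""
        words = [start]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break  # no continuation was ever observed after this word
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("the"))  # output varies, e.g. "the patient has a fever the fever will pass"

Whether a vastly bigger version of that loop constitutes understanding is exactly the question above.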
You call a Doctor "Doctor" because they're wearing a white coat and sitting in a doctor's office. The words they say might make vague sense to you, but since you are not a medical professional, you actually have no empirical grounds to judge whether or not they're bullshitting you, which is why you have the option to get a second or third opinion. Otherwise, you're just trusting the process that produces doctors, which involves earlier generations of doctors, able to discern right from wrong, asking this fellow a series of questions and grading them accordingly.
When someone can't tell if something just sounds about right or is in fact bullshit, they're called a layman in the field at best or gullible at worst. And it's telling that the most hype around AI is to be found in middle management, where bullshit is the coin of the realm.
It is anti-intellectual to blather on for pages and pages, trying to tell people how to live their lives without giving any information to show whether one's statements are correct.
"Intellectual" is, for some reason, accepted in some circles as meaning longwinded unscientific musings, to which whole university departments are devoted. That does not mean it is not an utter waste of time and effort, that would be better spent on measuring things and giving advice that is actually shown to improve people's lives.
Like, I get the acclaim: if you're raised in this environment, the business-tech vocabulary will feel more familiar. Is it a good, or better, way to describe the world than the established scientific field? No.
But reading "The Peasant and His Body" by Bourdieu instead would not have the same... social currency in tech as reading the influencer of the realm.
Vaporware, the whole lot of them, with spoofed ARR numbers to trigger investor FOMO.
Fast forward 28 years, and now everyone has an amazing TV in their pocket at all times: when they commute, sit at their desk, go out for coffee or lunch, or sit down in the bathroom, with a near-infinite collection of video via YouTube, Netflix, and even massive amounts of porn. How little did I know. And that's to say nothing of texting and Twitter and Reddit and instant messaging and Discord and ...
Several years ago, I was working on a college campus, and there were giant corporate-flavored murals along some of the city blocks students walked, full of happy multicultural clip-art people and exciting technological innovation, adorned with the message, "Imagine a borderless world!" Clearly that message was meant to be rhetorical, not a call to reflection, critique, or reevaluation. There was no suggestion that one might imagine the borderless world and then, having done so, decide it was a problem to be corrected.
I wonder a lot these days whether we're deep into a Chesterton's Fence situation, where we have to rediscover the hard way the older wisdom of separate spheres, with hard constraints and boundaries on behaviors, communities, and communication pathways, which facilitate all sorts of important activities that simply don't happen otherwise. Borders and boundaries, in other words, as a crucial social technology for directing attention productively. Phones and tablets are, in their own Turing-complete way, portals to a borderless world that pierces the older, intentional classroom boundaries.