Natural language isn't best described as data transfer. It's primarily a mechanism for collaboration and negotiation. A speech act isn't transferring data; it's an action with intent. Viewed as such, the key metrics are not speed and loss, but successful coordination.
This is a case where a computer science stance isn't fruitful, and it's best to look through a linguistics lens.
Failing businesses.
> I would have a purpose for it, stemming from my own ethics.
I never said that business is inherently in conflict with ethics, and I, as an entrepreneur myself, believe that ethics are necessary for business: https://news.ycombinator.com/item?id=41951447
> I think it quite naive to consider Bezos has not done the same and that this decision is simply in line with his personal political interests.
I claimed that his decision is simply in line with his personal interests. Whether those are financial interests or political interests is difficult to determine. Nonetheless, the decision was bad for The Washington Post. Compare to Twitter/X: Elon Musk is indisputably using the social network he acquired for his personal political interests, and that has indisputably been bad for the business, driven away users and advertisers, and his creditors have vastly downgraded the value of the investment.
> Neoliberalism is a really poor substitute for personal morality and accountability.
This seems like a non sequitur. How is "Neoliberalism" relevant? Is that what you believe I proposed? If so, you're wrong.
Leaving money on the table does not necessarily mean a failing business, except for some extreme definition of 'failing'.
I meant to argue against your first assertion, not your second; I'm not concerned with whether it's a bad financial decision or not.
The problematic aspect here is that the current business owner, Jeff Bezos, has a conflict of interest. Bezos is making a bad business decision for The Washington Post, sacrificing it and losing readers for the sake of his other business interests, i.e., government contracts. It's unlikely that an independent owner with no conflict of interest would make the same decision.
I really want to challenge this idea. Businesses can have missions quite distinct from what the majority of their prospective customers would want.
If I had practically unlimited money, I wouldn't ever think of funding a news organisation only to have it produce whatever content customers wanted. I would have a purpose for it, stemming from my own ethics.
I think it quite naive to consider Bezos has not done the same and that this decision is simply in line with his personal political interests.
Neoliberalism is a really poor substitute for personal morality and accountability.
Tacit knowledge is not the same as explicit knowledge that just happens to not be documented. Tacit knowledge is not documented because it is undocumentable. You cannot avoid tacit knowledge. Human language fundamentally cannot express all the knowledge that experts develop through years of practice and experience. Even if an expert can express their experience, there are simply ideas and skills that people cannot learn exclusively from words.
This is an important distinction. Trying to eliminate undocumented explicit knowledge is useful for teams. Trying to eliminate tacit knowledge is disastrous. I've seen attempts to make tacit knowledge legible to management—it is one of the most direct ways to sabotage your own experts because experts fundamentally cannot make all their knowledge explicit. Expertise is tacit knowledge and tacit knowledge is expertise.
People generally understand that design by committee or design by regulation has awful results; one of the main reasons for this is that committees require decisions to be fully explicit and explainable—which hamstrings creative problem-solving and pushes for poor design decisions. Exclusively explicit processes get you lowest-common-denominator results: not the lowest common denominator of the expertise present, but the lowest common denominator of what people can explicitly communicate and understand, which is even lower! If you don't let experts be experts, you won't get expert-level work.
Donald Schön’s work on this topic is really enlightening. It’s not just that there’s knowledge in the heads of people that can’t be linguistically expressed well, but also that expression of it requires interaction with a specific situation.
This also represents a massive gap for AI systems to become actual in-the-world problem solvers.
Acting successfully in the world when faced with complex issues requires learning useful ad-hoc concepts from the specific situation you find yourself in. It's plausible an AI can learn template tactics from large datasets, but I don't think that's enough.
There are various fields, like creativity research and design thinking, where it's understood that non-trivial problems need interaction with the environment to frame a problem in a way that allows an approach to a solution. This is because of the uniqueness and novelty in the situation itself.
It might be my lack of imagination, but I don't see how deep learning on a large dataset will get there.