I can imagine no ways in which this ends poorly. Like, News Corp in particular, but news in general, has become an entertainment product with heavy biases. Are we really thinking it's a good idea to just wire those into systems we don't fully understand, and haven't yet reckoned with?
I'm not opposed to AI, but it all feels so ham-fisted and reckless as everyone tries to be the first to some imaginary finish line.
What’s the real strategy here by OpenAI? A temporary cushion and a reprieve from complaints by big publishers?
All these deals they’re making will force other brands/orgs to follow. I’m just wondering: once they get a taste of that free money, will they continue to follow those “highest journalistic standards” and talk about OpenAI in the context of copyright for everyone else?
Wild-ass guess: everyone OpenAI is negotiating with is likely considering how much they can gain from potential future revenue. This typically leads them to demand high rates.
However, this move flips the question: instead of asking how much they can make, they must consider what happens if their competitors partner with OpenAI. The concern then shifts to what competitors can achieve in the market that could threaten not just their economic interests but potentially their entire business model and worldview.
Suddenly, the desire to be the one partnering with OpenAI isn't just about accessing a lucrative revenue stream; it becomes a strategic imperative to mitigate risks and maintain competitive parity.
If those brands do not follow, is there a risk that the nature and sheer volume of News Corp’s content will skew the behavior of OpenAI’s models? Either they don’t consider the former a risk, don’t consider the latter a risk, or just don’t care about skew (which would sadden me).
That’s terrible news. It’s unclear how deep and subtle the ramifications will be, but I guess it does highlight that propaganda has never been more empowered.
I feel like I've seen a dozen of these partnership announcements from OpenAI, but I've never actually seen ChatGPT make use of this sort of thing:
>Through this partnership, OpenAI has permission to display content from News Corp mastheads in response to user questions
Is there some mode or tool where ChatGPT includes links to or content from these licensed news stories? Or is this just OpenAI making a payoff to protect itself in cases of unforeseen memorization of training data, like what set off the NYT lawsuit?
Hard to take this statement seriously when they own tabloids like The Sun and the New York Post.
Murdoch is a propagandistic blight and what Walter Lippmann warned us about over a century ago
That's a hoot. It's a total breach of trust to integrate AI with xenophobic news outlets.
Or even say a bad word about them at all.
PaaS: Propaganda as a service. You give us the story, we give you propaganda.
ChatGPT says climate change effects overstated.