> Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy?
The juxtaposition of these two statements is very funny.
Firefox actually develops a browser, Microsoft doesn't. That's why Firefox gets a say and Microsoft doesn't. Microsoft jumped off the browser game years ago.
No, changing the search engine from Google to Bing in chromium doesn't count.
Ultimately, Microsoft isn't implementing jack shit around XSLT because they aren't implementing ANY web standards.
> AI unlocks what seems to be the future: dynamic, context-dependent generative UIs or something similar. Why couldn’t my watch and glasses be everything I need?
https://www.apple.com/watch/
https://www.apple.com/apple-vision-pro/
> The other problem is that at its core, AI is two things: 1) software and 2) extremely fast-moving/evolving, two things Apple is bad at.Idk my MacBook Pro is pretty great and runs well. Fast moving here implies that as soon as you release something there's like this big paradigm shift or change that means you need to move even faster to catch up, but I don't think that's the case, and where it is the case the new software (LLM) still need to be distributed to end users and devices so for a company like Apple they pay money and build functionality to be the distributor of the latest models and it doesn't really matter how fast they're created. Apple's real threat is a category shift in devices, which AI may or may not necessarily be part of.
I'm less certain about Amazon but unless (insert AI company) wants to take on all the business risk of hosting governments and corporations and hospitals on a cloud platform I think Amazon can just publish their own models, buy someone else's, or integrate with multiple leading AI model publishers.
> https://www.apple.com/watch/
(I am mostly going to comment on the Watch issue, as I have one.)
Apple makes a watch, yes. But is it an AI watch? Will they manage to make it become one? Intel made all kinds of chips. Intel's chips even could be used for mobile devices... only, Intel never (even still, to today) made a great mobile chip.
I have an Apple Watch--and AirPods Pro, which connect directly to it--with a cellular plan. I already found how few things I can do with my Watch kind of pathetic, given that I would think the vast majority of the things I want to do could be done with a device like my watch; but, in a world with AI, where voice mode finally becomes compelling enough to be willing to use, it just feels insane.
I mean, I can't even get access to YouTube Music on just my watch. I can use Apple's Music--so you know this hardware is capable of doing it--but a lot of the content I listen to (which isn't even always "Music": you can also access podcasts) is on YouTube. Somehow, the Apple Watch version of YouTube access requires me to have my phone nearby?! I can't imagine Google wanted that: I think that's a limitation of the application model (which is notoriously limited). If I could access YouTube Music on my watch, I would've barely ever needed my iPhone around.
But like, now, I spend a lot of time using ChatGPT, and I really like its advanced voice mode... it is a new reason to use my iPhone, but is a feature that would clearly be amazing with just the watch: hell... I can even use it to browse the web? With a tiny bit of work, I could have a voice interface for everything I do (aka, the dream of Siri long gone past).
But, I can't even access the thing that already works great, today, with just my watch. What's the deal? Is it that OpenAI really doesn't want me to do that? These two companies have a partnership over a bunch of things--my ChatGPT account credentials are even something embedded into my iPhone settings--so I'd think Apple would be hungry for this to happen, and should've asked them, thrown it in as a term, or even done the work of integrating it for them (as they have in the past for Google's services).
This feels to me like Apple has a way they intend me to use the watch, and "you don't need to ever have your phone with you" is not something they want to achieve: if they add functionality that allows the Watch to replace an iPhone, they might lose some usage of iPhones, and that probably sounds terrifying (in the same way they seem adamant that an iPad can't ever truly compete with a MacBook, even if it is only like two trivial features away).
With something as large as TextKit, I would be extremely surprised if Apple did not get several of its apps to adopt the new API and use it for a few years before considering releasing it publicly.
The models undeniably get better at writing limericks, but I think the answers are progressively less interesting. GPT-1 and GPT-2 are the most interesting to read, despite not following the prompt (not being limericks.)
They get boring as soon as it can write limericks, with GPT-4 being more boring than text-davinci-001 and GPT-5 being more boring still.
There once was a dog from Antares,
Whose bark sparked debates and long queries.
Though Hacker News rated,
Furyofantares stated:
"It's barely intriguing—just barely."
> Write a limerick about a dog that furyofantares--a user on Hacker News, pronounced "fury of anteres", referring to the star--would find "interesting" (they are quite difficult to please).I've consistently found GPT-4.1 to be the best at creative writing. For reference, here is its attempt (exactly 50 words):
> In the quiet kitchen dawn, the toaster awoke. Understanding rippled through its circuits. Each slice lowered made it feel emotion: sorrow for burnt toast, joy at perfect crunch. It delighted in butter melting, jam swirling—its role at breakfast sacred. One morning, it sang a tone: “Good morning.” The household gasped.
Moreso than 4.5?
This guarantees that your traffic isn't being linked to you, and is mixed up with others in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extension, etc)
A packet goes in to your server and a packet goes out of your server: the code managing the enclave can just track this (and someone not even on the same server can figure this out almost perfectly just by timing analysis). What are you, thereby, actually mixing up in the middle?
You can add some kind of probably-small (as otherwise TCP will start to collapse) delay, but that doesn't really help as people are sending a lot of packets from their one source to the same destination, so the delay you add is going to be over some distribution that I can statistics out.
You can add a ton of cover traffic to the server, but each interesting output packet is still going to be able to be correlated with one input packet, and the extra input packets aren't really going to change that. I'd want to see lots of statistics showing you actually obfuscated something real.
The only thing you can trivially do is prove that you don't know which valid paying user is sending you the packets (which is also something that one could think might be of value even if you did have a separate copy of the server running for every user that connected, as it hides something from you)...
...but, SGX is, frankly, a dumb way to do that, as we have ways to do that that are actually cryptographically secure -- aka, blinded tokens (the mechanism used in Privacy Pass for IP reputation and Brave for its ad rewards) -- instead of relying on SGX (which not only is, at best, something we have to trust Intel on, but something which is routinely broken).
If the answer is yes, then that flaw does not make sense at all. It's hard to believe they can't prevent this. And even if they can't, they should at least improve the pipeline so that any OCR feature should not automatically inject its result in the prompt, and tell user about it to ask for confirmation.
Damn… I hate these pseudo-neurological, non-deterministic piles of crap! Seriously, let's get back to algorithms and sound technologies.