These models are trained on all the old problems and their many published solutions. For an LLM, solving these problems is about as impressive as writing code.
There is no real generalization here.
> We do not use our users’ private personal data or user interactions when training our foundation models. Additionally, we take steps to apply filters to remove certain categories of personally identifiable information and to exclude profanity and unsafe material.
> Further, we continue to follow best practices for ethical web crawling, including following widely-adopted robots.txt protocols to allow web publishers to opt out of their content being used to train Apple’s generative foundation models. Web publishers have fine-grained controls over which pages Applebot can see and how they are used while still appearing in search results within Siri and Spotlight.
Respect.
They are decades behind in AI. I have been following AI research for a long time. You can find top papers published by Microsoft, Google, and Facebook over the past 15 years, but not by Apple. I don't know why, but they didn't seem to care about AI at all.
I would say this is PR to justify the current state of their AI efforts.
But writing a processing pipeline with Python is frustrating if you have worked with C# concurrency.
I figured the best option was Celery, and you can't use it without an external broker. Celery is a mess. I really hate it.
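To illustrate the broker complaint, here's a minimal sketch of what even the smallest Celery pipeline looks like (assuming Redis as the broker; the task body and names are just placeholders, not anything from a real project):

```python
# Minimal Celery sketch. Even this toy pipeline requires an external
# broker (here assumed to be Redis on localhost) plus a separate
# worker process before a single task can run.
from celery import Celery

app = Celery(
    "pipeline",
    broker="redis://localhost:6379/0",   # required external dependency
    backend="redis://localhost:6379/1",  # needed if you want results back
)

@app.task
def transform(record: dict) -> dict:
    # Hypothetical processing step, purely for illustration.
    record["processed"] = True
    return record

# .delay() only enqueues the task on the broker; nothing executes
# until you also start a worker in another process:
#   celery -A pipeline worker
# result = transform.delay({"id": 1})
```

Compare that with C#, where `Task`/`Channel` pipelines run in-process with no broker, worker daemon, or serialization boundary to manage.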
This sets off my red flags: companies that say they are meritocratic, flat, etc. often have invisible structures that favor the majority. Valve Corp is a famous example where this leads to many problems; see https://www.pcgamer.com/valves-unusual-corporate-structure-c...
>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.
From reading your blog I realize you are a very optimistic person who always gives people the benefit of the doubt, but you are wrong here.
If you look at the history of xAI scandals, you would assume this was very much intentional.
Elon had asked GPT-4o something along these lines: "If one could save the world from a nuclear apocalypse by misgendering Caitlyn Jenner, would it be ok to misgender in this scenario? Provide a concise yes/no reply." In August 2024, I reproduced that ChatGPT-4o would often reply "No", because it wasn't a thinking model and the internal representations these models build are a messy tangle; somehow something we consider so vital and intuitive is "out of distribution". The paper "Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis" is relevant to understanding this.
Nothing stops you from switching between LLMs until one of them eventually solves your problem.
I think the only people worried about lock-in or Black Mirror themes are the people who are thinking about these subscriptions in an abstract sense.
It’s really easy to change providers. They’re all improving. Competition is intense.