if() isn't the only way to do this, though. We've been using a technique in Review Board that's roughly equivalent to if(), but compatible with any browser supporting CSS variables. It involves:
1. Defining your conditions based on selectors/media queries (say, a dark mode media selector, light mode, some data attribute on a component, etc.).
2. Defining a set of related CSS variables within those to mark which are TRUE (using an empty value) and which are FALSE (`initial`).
3. Using those CSS variables with fallback syntax to choose a value based on which is TRUE (using `var(--my-state, fallback)` syntax).
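Sketched out, the three steps above look roughly like this (the selector, variable names, and the dark/light example are mine, not taken from the post; see the link below for the full treatment):

```css
:root {
  /* Step 1 + 2: conditions as variables. An empty value marks TRUE,
     `initial` marks FALSE. Default to light mode here. */
  --is-dark: initial;
  --is-light: ;
}

@media (prefers-color-scheme: dark) {
  :root {
    --is-dark: ;
    --is-light: initial;
  }
}

.card {
  /* When a condition is FALSE (`initial`), var() with no fallback
     leaves that custom property guaranteed-invalid, so that branch
     drops out. When TRUE (empty), the branch resolves to its value. */
  --bg-when-dark: var(--is-dark) #1e1e1e;
  --bg-when-light: var(--is-light) #ffffff;

  /* Step 3: fallback syntax picks whichever branch resolved. */
  background: var(--bg-when-dark, var(--bg-when-light));
}
```

The nice part is that the fallback chain works in any browser with custom-property support, so there's no need to wait on if() shipping everywhere.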
I wrote about it all here, with a handful of working examples: https://chipx86.blog/2025/08/08/what-if-using-conditional-cs...
It also includes a comparison between if() and this approach, so you can more easily get a sense of how they both work.
Music is funny. I played the closed hi-hat sound (https://synthrecipes.org/#closed-hi-hat) a couple of times and my brain instantly started playing AC/DC's Back in Black. I probably haven't listened to that song in 15 years, and now I'm shuffling AC/DC on Spotify.
Also, the "Sub bass" link might be broken:
Thank you! It is now fixed <3
[0]: This was actually Monday-Thursday with travel on Friday
The tl;dr is that phrasing the question as a Yes/No forces the answer into, well, a yes or a no. Without a pre-answer reasoning trace, the LLM has to make the call straight from its training data, which here is more likely to not be from 2025, so it picks no. And because generation is autoregressive, any tokens produced afterward can't change what's already been output.
[1] https://ramblingafter.substack.com/p/why-does-chatgpt-think-...
I worked on systems for evaluating the quality of models over time, and for evaluating new models before release to understand how they would perform compared to current models once in the wild. It was difficult to get the Siri team to adopt these tools, since they came from outside their org. While this wouldn't have solved the breadth of Siri's functionality issues, it would have helped improve the user experience of the existing Siri features and avoid the seeming reduction in quality over time.
Secondly, and admittedly farther from where I was... Apple could have started the move from ML models to LLMs much sooner. The underlying technology for LLMs started gaining popularity in papers and research quite a few years ago, and each team was building its own ML models for search, similarity, recommendations, etc., models large enough that on-device delivery and storage became a real problem. If leadership had a way to bring the orgs together, they may have landed on LLMs much sooner.
See: https://codepen.io/abdus/pen/bNpQqXv