Math.floor(Math.random() * (max - min + 1)) + min;
The CSS function would be `random(min, max)` (according to the Google AI summary, anyway).
Also, the CSS function seems to take a number of steps; it's not immediately obvious to me how to do that with Math.random().
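For what it's worth, stepping isn't too hard to bolt on top of Math.random(): pick a random index among the discrete values and scale it back up. A sketch (the function name and signature here are my own, not from any spec):

```javascript
// Random value in [min, max], snapped to multiples of `step` above min —
// a rough JS emulation of a stepped CSS random().
function randomStep(min, max, step) {
  // How many discrete values fit in [min, max] at this step size.
  const count = Math.floor((max - min) / step) + 1;
  // Pick one of those values uniformly at random.
  return min + Math.floor(Math.random() * count) * step;
}

console.log(randomStep(0, 10, 2)); // one of 0, 2, 4, 6, 8, 10
```

With `step = 1` this reduces to the usual integer formula above.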
I imagine there's some deep ideological war over whether to add more programming functionality to CSS...
So I don't buy the engineering angle, I also don't think LLMs will scale up to AGI as imagined by Asimov or any of the usual sci-fi tropes. There is something more fundamental missing, as in missing science, not missing engineering.
So I do think that's more interpretable in two ways:
1. You can look at specific representations in the model and "see" what they "mean"
2. This means you can give a high-level interpretation to a particular inference run: "X_i is a 7 because it's like this prototype that looks like a 7, and it has some features that only turn up in 7s"
I do think complex models doing complex tasks will sometimes have extremely complex "explanations" which may not really communicate anything to a human, and so do not function as explanations at all.
Neural networks need to be overparameterized to find good solutions, meaning there is a whole surface of solutions. The optimization procedure tries to walk toward that surface as quickly as possible, and tends to find a low-energy point on it. In particular, a low-energy solution isn't sparse, and therefore isn't interpretable.
The 90s were a special time when it came to video games. I'm a bit saddened that we are unlikely to repeat that era. There are some great games today too, but none of them capture that same zeitgeist.
Never is a strong word. I have definitely visited robots.txt of various websites for a variety of random reasons.