> remember when AI couldn’t count the number of Rs in “strawberry”?
GPT-5 still gets this wrong occasionally. Source: I just asked it how many r's are in "strawberry", and it said 2.
(I dislike this method of testing LLMs, as it exploits a very specific and quirky limitation they have, rather than assessing their general usefulness. But still, I couldn't resist.)
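For what it's worth, that quirky limitation is easy to see at the tokenizer level. A minimal sketch using OpenAI's tiktoken library, assuming it's installed and that the GPT-4-era cl100k_base encoding is a reasonable stand-in for how newer models chunk text:

```python
# pip install tiktoken
import tiktoken

text = "strawberry"

# Ground truth: naive string counting finds three r's.
print(text.count("r"))  # -> 3

# What a GPT-style model actually "sees": sub-word tokens, not letters.
# cl100k_base is a GPT-4-era encoding, used here purely as a stand-in.
enc = tiktoken.get_encoding("cl100k_base")
chunks = [enc.decode_single_token_bytes(t) for t in enc.encode(text)]
print(chunks)
# The word arrives as a couple of opaque chunks, so the model never gets
# a direct character-level view in which to count the r's.
```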
My favorite test is to ask it to invent a magic trick given a set of constraints and props. Because magic methods are closely guarded and rarely published openly, surprisingly little of the craft ends up in training sets: pretty much just the most common method & gimmick exposures people tend to parrot online, but not the theory or exact routines behind those methods.
The worse an LLM is, the more likely it is to suggest literally impossible actions in the method, like “turn the card over twice to show that it now has three sides. Your spectators can examine the three-sided card.” It can’t tell logic from fantasy, or method from effect.
These days the value in a font isn't in the letterforms; it's in the kerning, ligatures, variability, etc., all of which flow from the font software. That software is also where a significant amount of the labor in creating a typeface goes, and it's what sets professional-quality fonts apart from many (but not all!) free ones.
If AI can write new font software by cloning bitmaps of letterforms _and_ getting the kerning, ligatures, variability, etc. right… it'll change the type foundry industry in a big way.
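To make that concrete: that machinery lives in OpenType tables, not in the glyph images, which is why cloning bitmaps alone gets you so little. A rough sketch with the fontTools library (the file name is hypothetical; any professional OpenType font will do):

```python
# pip install fonttools
from fontTools.ttLib import TTFont

# Hypothetical file name, for illustration only.
font = TTFont("SomeFoundrySans-Regular.otf")

# The drawn letterforms live in the outline table (glyf or CFF), but the
# behavior described above lives elsewhere:
#   GPOS -> kerning and other positioning rules
#   GSUB -> ligatures and contextual substitutions
#   fvar -> variable-font axes (weight, width, ...)
for tag, meaning in [("GPOS", "kerning/positioning"),
                     ("GSUB", "ligatures/substitution"),
                     ("fvar", "variable axes")]:
    print(f"{tag} ({meaning}): {'present' if tag in font else 'absent'}")

# List the OpenType feature tags the substitution logic implements,
# e.g. 'liga', 'smcp', 'onum'.
if "GSUB" in font and font["GSUB"].table.FeatureList is not None:
    feats = {fr.FeatureTag
             for fr in font["GSUB"].table.FeatureList.FeatureRecord}
    print(sorted(feats))
```

Run against a free hobbyist font versus a professional release, the difference usually shows up right here: sparse or missing GPOS/GSUB data in the former, dozens of features and thousands of kerning rules in the latter.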