I hope either that that's a miscommunication, or that I'm wrong about how much of a red flag it seems to be.
The Chinchilla scaling laws allow you to relate, at a somewhat-better-than-rule-of-thumb level, the model size, training data size, and achieved performance of an LLM, without actually training one. So, if for instance you have a certain loss target and a certain sized corpus of training data, you can use the scaling law to calculate what size of model to train to hit the target. I can see that being useful to any team.
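For concreteness, here's a rough sketch of that calculation in Python, using the parametric loss fit from the Chinchilla paper. The constants are approximately Hoffmann et al.'s fitted values; treat the whole thing as illustrative rather than something to plan a training run around:

```python
# Chinchilla parametric fit: L(N, D) = E + A / N^alpha + B / D^beta,
# where N = parameter count and D = training tokens. The constants are
# roughly the values fitted in Hoffmann et al. (2022); illustrative only.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def model_size_for_target(loss_target, n_tokens):
    # Invert the fit: solve loss_target = E + A/N^alpha + B/D^beta for N.
    residual = loss_target - E - B / n_tokens**BETA
    if residual <= 0:
        raise ValueError("target loss unreachable at this data size")
    return (A / residual) ** (1 / ALPHA)

# E.g. a loss target of 2.0 with a 1.4T-token corpus comes out to a
# model on the order of 10B parameters under this fit.
n = model_size_for_target(2.0, 1.4e12)
```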
Chinchilla-optimality, on the other hand, means finding, for a set loss target, the combination of model size and training data size that minimizes training compute (which, roughly speaking, scales with just the product of those two numbers). But only training compute: inference compute scales only with model size, regardless of how much data the model was trained on. So Chinchilla-optimality is useful only if you expect training to take up most of your compute, i.e. if you are not expecting to actually use the model that much. I'm not in the field myself, so I don't know how to quantify "that much", but the difference is definitely enough to keep those concepts distinct.
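You can put a rough number on "that much" with the standard FLOP rules of thumb (training ≈ 6ND, inference ≈ 2N per generated token; both are approximations I'm taking on faith here):

```python
def training_flops(n_params, n_train_tokens):
    # Standard estimate: ~6 FLOPs per parameter per training token.
    return 6 * n_params * n_train_tokens

def inference_flops(n_params, n_served_tokens):
    # ~2 FLOPs per parameter per generated token (forward pass only).
    return 2 * n_params * n_served_tokens

# Under these estimates, inference compute catches up with training
# compute once you've served ~3x as many tokens as you trained on,
# independent of model size: 6*N*D == 2*N*(3*D).
```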
Speculative decoding: Sample a linear output (the next n tokens) from the draft model and submit it to a verifier model. At some index the verifier might reject a token and say that no, actually the next token should be this other token instead ("bonus token" in this paper), and that's your output. Or, if it accepts the whole draft, you still get a bonus token as the next token past the draft. Then you draft again from that prefix onward.
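That loop can be sketched like this. `draft_next` and `verify_next` are hypothetical stand-ins for the two models, each returning one next token given a context; in a real system the verifier would score all k draft positions in a single batched forward pass rather than one call per position:

```python
def speculative_step(prefix, draft_next, verify_next, k=4):
    # Draft k tokens autoregressively with the cheap model.
    draft = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)

    # Verify the draft position by position (stands in for one
    # parallel verifier pass over all k positions).
    accepted = []
    ctx = list(prefix)
    for t in draft:
        v = verify_next(ctx)
        if v != t:
            # First disagreement: the verifier's own token replaces
            # the draft's (the "bonus token"), and we stop here.
            accepted.append(v)
            return accepted
        accepted.append(t)
        ctx.append(t)

    # Whole draft accepted: the same verifier pass also yields one
    # extra token past the end of the draft for free.
    accepted.append(verify_next(ctx))
    return accepted
```

With toy "models" that just count upward, a fully accepted draft of 4 tokens emits 5 tokens (draft plus bonus), and a disagreement at position 3 emits 3 (two accepted plus the verifier's replacement).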
Tree-based speculation: Sample a tree of outputs from the draft model, submit the whole tree to the verifier, and pick the longest accepted prefix (and its bonus token).
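Same toy setup extended to a tree, with the tree flattened into its root-to-leaf paths (again a sketch: real implementations verify the whole tree in one forward pass using a tree-structured attention mask, not a loop over paths):

```python
def verify_tree(prefix, paths, verify_next):
    # paths: candidate continuations, one per root-to-leaf path of the
    # draft tree. Return the tokens of the longest accepted path plus
    # the verifier's bonus token after it.
    best = []
    best_ctx = list(prefix)
    for path in paths:
        ctx = list(prefix)
        accepted = []
        for t in path:
            if verify_next(ctx) != t:
                break
            accepted.append(t)
            ctx.append(t)
        if len(accepted) > len(best):
            best, best_ctx = accepted, ctx
    return best + [verify_next(best_ctx)]
```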
Speculative speculative decoding: Sample a linear output from the draft model, then in parallel both verify it with the verifier model and produce a tree of drafts branching out from different rejection points and different choices of bonus tokens at those points. When the verifier finishes, you might already have a new draft ready to submit right away.
Combined: Sample a tree from the draft model, submit the whole tree to the verifier and in parallel also plan out drafts for different rejection points with different bonus tokens anywhere in the tree.