I think so. The main idea is Hopf coherence: the transformer/Hopf algebra updates its internal state in order to enforce the Hopf coherence formula (you can find it in the paper).
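For readers who haven't met the term: a Hopf algebra is a bialgebra (H, m, \eta, \Delta, \varepsilon) equipped with an antipode S : H \to H satisfying the coherence condition

    m \circ (S \otimes \mathrm{id}) \circ \Delta = \eta \circ \varepsilon = m \circ (\mathrm{id} \otimes S) \circ \Delta

(this is the standard axiom; the exact "Hopf coherence formula" the state update enforces is spelled out in the paper).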
The idea of streams (as in infinite lists) is related to this via coalgebras.
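For the curious, here is a minimal Haskell sketch of that standard fact (streams are the final coalgebra of the functor X \mapsto (a, X)); this is textbook coalgebra material, not code from the paper:

    -- A stream is an infinite list: the final coalgebra of F X = (a, X).
    data Stream a = Cons a (Stream a)

    -- The coalgebra structure map of Stream itself: observe head and tail.
    observe :: Stream a -> (a, Stream a)
    observe (Cons x xs) = (x, xs)

    -- Finality: any coalgebra step :: s -> (a, s) unfolds into a unique
    -- stream. This is coinduction, i.e. unfoldr for infinite lists.
    unfold :: (s -> (a, s)) -> s -> Stream a
    unfold step s = let (x, s') = step s in Cons x (unfold step s')

    -- Example: the naturals, generated from the coalgebra n |-> (n, n + 1).
    nats :: Stream Integer
    nats = unfold (\n -> (n, n + 1)) 0

    takeS :: Int -> Stream a -> [a]
    takeS 0 _           = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs

Running takeS 10 nats in GHCi yields [0,1,2,3,4,5,6,7,8,9]; the point is that a system's observable behaviour over time is a stream, which is where coalgebras meet state-updating models.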
> Furthermore, if we view transformers as Hopf algebras, one can bring convolutional models, diffusion models and transformers under a single umbrella.
I was so certain this was discussing Transformers, like, the action figures, and I have never been so confused looking at both a link and the comments section on HN before. Especially considering: https://github.com/xamat/TransformerCatalog/blob/main/02-01.... I'm just going to keep scrolling now :'D
When I was younger I would often encounter mentions of electrical transformers and be quite disappointed when they weren't related to the toys or the series. Even in my 40s I still feel a bit of disappointment about it...
https://arxiv.org/abs/2302.01834v1
The learning mechanism of transformer models was poorly understood; however, it turns out that a transformer is like a circuit with feedback.
I argue that autodiff can be replaced with what I call "Hopf coherence" in the paper.
Furthermore, if we view transformers as Hopf algebras, one can bring convolutional models, diffusion models and transformers under a single umbrella.
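To make the convolution connection concrete (this is standard Hopf-algebra theory, not something new in the paper): for any coalgebra C and algebra A, the linear maps Hom(C, A) form an algebra under the convolution product, written in Sweedler notation as

    (f \ast g)(x) = \sum_{(x)} f(x_{(1)}) \, g(x_{(2)})

and the antipode S is precisely the convolution inverse of the identity map. Convolution being native to Hopf algebras is, presumably, the sense in which convolutional models fall under the same umbrella.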
I'm working on a next-gen Hopf-algebra-based machine learning framework.
Join my discord if you want to discuss this further https://discord.gg/mr9TAhpyBW
Have you written any more about this?
I don't think this claim is factual. There are people who have played with this idea, but it is not part of ChatGPT.
> Extension:It can be seen as a generalization of BERT and GPT in that it combines ideas from both in the encoder and decoder
I believe this is an error? The text is from the BART entry. Also, a space is missing after "Extension:".
Are pages even needed anymore?