I have a hypothesis that an LLM can act as a pseudocode-to-code translator, where the pseudocode can tolerate a mixture of code-like and natural-language specification. The benefit is that it formalizes the human as the specifier (which must be done anyway) and the LLM as the code writer. This might also let lower-resource “non-frontier” models be more useful. Additionally, it tolerates syntax mistakes or, in the worst case, falls back to plain natural language if needed.
In other words, I think LLMs don’t need new languages; we do.
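To make the idea concrete, here is a rough sketch of what I mean (the function name, the fields, and the natural-language phrasing are all made up for illustration): the top comment is the kind of mixed spec a human might write, and below it is the kind of code the LLM would be expected to emit.

    # A hypothetical mixed "pseudocode + natural language" spec a human might write:
    #
    #     dedupe_orders(orders):
    #         group by customer_id
    #         within each group, keep only the newest order (use the created_at field)
    #         return the survivors sorted by created_at, oldest first
    #
    # ...and the kind of Python an LLM could be asked to produce from it:

    from typing import Any

    def dedupe_orders(orders: list[dict[str, Any]]) -> list[dict[str, Any]]:
        """Keep only the newest order per customer, sorted oldest-first."""
        newest: dict[Any, dict[str, Any]] = {}
        for order in orders:
            cid = order["customer_id"]
            if cid not in newest or order["created_at"] > newest[cid]["created_at"]:
                newest[cid] = order
        return sorted(newest.values(), key=lambda o: o["created_at"])

    if __name__ == "__main__":
        orders = [
            {"customer_id": 1, "created_at": "2024-01-02", "item": "a"},
            {"customer_id": 1, "created_at": "2024-01-05", "item": "b"},
            {"customer_id": 2, "created_at": "2024-01-03", "item": "c"},
        ]
        # Expected: customer 2's order (Jan 3), then customer 1's newest (Jan 5)
        print(dedupe_orders(orders))

The point is that the human's half can stay sloppy about syntax while still pinning down the behavior, and the LLM's half is ordinary, checkable code.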
Yesterday I was reminded of “Rapid Serial Visual Presentation” (RSVP) for speed reading, where words are shown one at a time at a fixed position so you do not have to move your eyes. I am currently trying it out with a Chrome extension called SwiftRead. I set the text size so it fits into my foveal area. I used a fovea detector website I saw on HN a while ago: https://www.shadertoy.com/view/4dsXzM (make the pattern full screen, then you can see the size of your fovea).
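For anyone curious what RSVP amounts to mechanically, here is a tiny terminal sketch (nothing to do with SwiftRead's internals, just an illustration): it flashes one word at a time at a fixed position, paced by a words-per-minute rate.

    import sys
    import time

    def rsvp(text, wpm=300):
        # Flash one word at a time at a fixed screen position (the core of RSVP).
        delay = 60.0 / wpm  # seconds per word
        for word in text.split():
            # "\r" returns the cursor to the start of the line,
            # so every word appears in the same spot.
            sys.stdout.write("\r" + word.ljust(20))
            sys.stdout.flush()
            time.sleep(delay)
        sys.stdout.write("\n")

    rsvp("Rapid Serial Visual Presentation flashes words at a fixed point so your eyes stay still.", wpm=300)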
I also learned that I can reduce some of the strain by moving my head more toward the things I am looking at on the screen.