Is that a fair characterization of your comment?
Why?
One of my principles is that we gain control of an uncontrollable environment by relinquishing control to that environment. It may not be obvious, but icy roads are an uncontrollable environment. Hence, the rally driver gains control by relinquishing it, allowing the car an imaginary and symbolic role in her success (all hail Michèle Mouton). Think of the best Scandinavian WRC champs. In the real world, not driving at all is advisable for many; if you must drive in obviously unsafe conditions, control what can be controlled by lowering speed, and so on.
This may seem improvisational, and some of it indeed is. However, these control schemes can be orchestrated as well. How? By ranking tires by performance in the worst winter conditions on Tire Rack before making a choice. Do that and everyone wins.
For me, it's as if the hauntological presence of David Foster Wallace showed up to match the known and yet unknowable genius that is Satie.
https://en.wikipedia.org/wiki/Gymnopédies#Legacy
I arranged variations on a theme by Erik Satie when I was in music school, so my experience is indeed a wormhole through pop to Satie: very old pop, but pop nonetheless. The involvement of John Cage just makes it more unique and special to me, since we played him at the time too.
Thanks again. Love the writing here. The author met his subject's match!
No one who has "built an AI" and posted it here on HN has actually trained an LLM. They're all wrappers around some service.
This one is at least funny.
Waiting for someone to have it evaluate Apple's new "AR" interface.
Thanks for the highlight. I wasn't aware that Apple had released a new AR interface, though I have used the previous Vision Pro generation.
FWIW, I find the whole experience disconcerting from a design perspective.
Why?
1) We have been stuck at a lossy Cartesian limit of blocks-world keyhole problems for 50 years on the desktop, and at the poor imitations of it found in tap interfaces.
2) That edge remains liminal and uncanny precisely because it is lossy in dimensions and in the dimensionality they imply: no scalability, ignorance of zoomable-interface research, and so on, from pixels in 2D to voxels in 3D to 4-tuples (XYZT) in 4D and beyond.
3) Just like LLMs, the exploding dot cloud, the vector spaces of n-dimensional data found in our reality from inner space to outer space, hasn't yet received the quantum-leaning, mirror-world treatment it deserves: one that scales with the exploding complexity of its underlying dot cloud of vector spaces in all dimensions.
We have needed a topos-theoretic leap in UX/UI for decades: one that can incorporate disconnected spaces, each providing a localized logic to bridge the lack of unification from one uncanny valley to another. Sadly, what I've seen from the dominant dichotomy of tech titans has not provided a compelling answer, at least not in public. I remain hopeful that R&D labs still breathe in these sadly corporate places and are simply far more secretive since Steve.
Problem is, outside tiny pizza-team-sized labs, the number of R&D groups that even recognize the problem of synthesizing hybrids at 2.5D or the like is infinitesimally small.
I get the frame, but I don't think arguing about the MBA crowd's co-opting of Cockburn gets us anywhere.
Think about it. GUI, Graphical User Interface: a concept taken from HCI, Human-Computer Interaction. I think that described PEEK and POKE in BASIC pretty well 50 years ago, though nobody attributes those to Dartmouth. It also describes AI at present around the world.
But HCI is lossy. Why?
Exploding n-dimensional dot-cloud vectors of language, leveled by math, are exactly why I fear that GUI should have died with CASE tools: a hauntological debt on our present that is indeed spectral.
The world doesn't need more clicks and taps. Quite the converse: fewer. Read Fitts. You don't run a faster race by increasing cadence; you run a faster race by slowing down and focusing on technique. Kipchoge knows this. Contemplative computing could learn it too, but I'm not sure waiting on the world to change works.
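The Fitts point can be made concrete. A minimal sketch in Python (the constants a and b are illustrative, not measured values): acquisition time grows with the log of distance over target width, so one large, close target beats a flurry of small, far ones.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's law: T = a + b * log2(D/W + 1), seconds.

    a and b are device/user-dependent constants; the values here are
    placeholders for illustration only.
    """
    return a + b * math.log2(distance / width + 1)

# A tiny, distant control vs. a big, close one:
small_far = fitts_time(distance=800, width=20)
large_near = fitts_time(distance=200, width=100)
print(round(small_far, 3), round(large_near, 3))
```

The asymmetry is the point: shrinking and scattering targets (more clicks, denser chrome) raises the index of difficulty logarithmically, which no amount of "faster cadence" recovers.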
Imagine a world where we simply arrived at the same kind of text interfaces we enjoy now, whether they benefit from the browser or are hindered by it. We just needed better, more turnkey tunnels, not more GUI! We sort of have those, from Meet to Teams to Zoom, but they suck, while few realize why or can explain the lossy nature of scaling tunnels, when many of us built them impulsively over SSH decades ago for fun.
The present suffers from the long-tail baggage of the keyhole problem Scott Meyers mentioned twenty years ago. Data science has revealed the n-dimensional data underlying many, if not most, modern systems given their complexity.
What we missed is a user interface, not a GUI, that can actually scale to match the dimensionality of the data without imposing a 2D, 2.5D, or 3D keyhole problem on top of n-dimensional data. The gap from system to story is indeed nonlinear, because so is the data!
I'd argue the missing link is the Imaginary or Symbolic Interface we dream of but to my knowledge, have yet to conceive. Why?
It's as if Žižek has not met his match in software, though I suspect there's a Bret Victor of interface language yet to be found (Steven Johnson?), because grammatology shouldn't stop at speech:writing.
Grammatology needed to scale into the Interface Culture found in software's infinite extensibility in language, since computers were what McLuhan meant when he said "media." And I'm pretty sure "augmentation is amputation" is absolute truth if we continue down our limited Cartesian frame: we'll lose limbs of agency, meaning, and respond-in-kind social reciprocity in the process, if any of those remain.
The very late binding (no binding?) we see in software now is exactly what research labs were missing in the late sixties to bridge from 1945 to 1965 and beyond. I can't imagine trying to do that with the rigid stacks close-to-metal we had then.
I hope I'm not alone in seeing or saying that the answers should be a lot closer-to-mind now given virtualization from containers to models and everything in-between.
One can only hope.
https://www.eventbrite.ca/e/coach-house-spring-group-launch-...
This is false. PDFs are an object graph containing imperative-style drawing instructions (among many other things). There’s a way to add structural information on top (akin to an HTML document structure), but that’s completely optional and only serves as auxiliary metadata, it’s not at the core of the PDF format.
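To make the parent's point concrete, here's a self-contained sketch (pure stdlib; the content is hypothetical, not from any real file): a PDF page's content stream is a sequence of imperative paint operators, so "extracting text" means scraping paint calls in paint order, not walking a document tree.

```python
import re

# A minimal PDF-style content stream: imperative drawing instructions.
# % starts a comment in PDF syntax outside string literals.
content_stream = b"""
BT                  % begin text object
/F1 12 Tf           % select font F1 at 12 points
72 720 Td           % move the text cursor to (72, 720)
(Hello, world) Tj   % paint a string -- no paragraph, heading, or order info
ET                  % end text object
"""

# Recovering "the text" is guesswork over paint operators (Tj here),
# which is why reading order and structure are not guaranteed.
strings = re.findall(rb"\((.*?)\)\s*Tj", content_stream)
print([s.decode() for s in strings])
```

Tagged PDF can layer a logical structure tree over this, but as noted above it's optional metadata; the drawing instructions are the ground truth.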
Indeed. Therein lies the rub.
Why?
Because despite the fact that I've spent several years of my latent career crawling, parsing, and outputting PDF data, I see now that pointing my LLM stack at a directory of *.pdf just makes the invisible encoding of the object graph visible. It's a skeptical science.
The key transclusion may be to move from imperative to declarative tools or conditional to probabilistic tools, as many areas have in the last couple decades.
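A toy sketch of that imperative-to-declarative shift (all names here are hypothetical illustrations, not any real PDF API): when a document is data to be interpreted rather than a replayable list of draw calls, recovering content becomes a lookup instead of a reconstruction.

```python
# Imperative: meaning lives in the execution order of draw calls,
# so readers must replay and guess.
imperative = [
    ("moveto", 72, 720),
    ("show_text", "Hello"),
    ("moveto", 72, 700),
    ("show_text", "world"),
]

# Declarative: structure first; layout is derived from the data, not
# baked into a replay.
declarative = {"type": "paragraph", "runs": ["Hello", "world"]}

def text_of(doc):
    # With declarative data, extracting content is a direct lookup.
    return " ".join(doc["runs"])

print(text_of(declarative))
```

The same shape shows up in the conditional-to-probabilistic move: replace hand-written branching over replayed state with models that score interpretations of declared structure.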
I've been following Jon Sterling's OCaml work on related topics for a while, and the ideas floating around forests and his Forester tool have been a good influence on me, which I found resonant given my own experience:
https://www.jonmsterling.com/index/index.xml
https://github.com/jonsterling/forest
I was going to email Jon and ask whether it's still being worked on (I hope so), but I brought it up this morning as a way out of the noise that imperative PDF programming has been for a decade or more. It's turtles all the way down: the low-level root-cause libraries mean that the high-level imperative languages often display the exact same bugs, despite significant differences in what's intended in the small at the top of the stack versus in the large at the bottom. It would help if "fitness for a particular purpose" decisions were thoughtful about publishing and distribution, but as the CFO likes to say, "Dave, that ship has already sailed." Sigh.
¯\_(ツ)_/¯
Skynet isn't gonna attack you with Terminators wielding a "phased plasma rifle in the 40-watt range"; it will be auto-rejecting your job application, your health insurance claims, and your credit score, and brainwashing your relatives on social media.