This is super interesting. I work in a group where everything happens on Slack, and some parts of that are (or were) genuinely hard to manage. So much so that I want an AI assistant that can manage my Slack feed etc... I feel like an AI bot/Slack integration is a thing that needs to be done well.
I read Robert Anton Wilson and Philip K. Dick many years ago, and I've been observing a recurring feature in human thought and organization ever since. People in this thread have done a pretty good job with the functional-psychosis part, but I encourage considering percept/concept as well: the notion that what we see influences our mental model, but it also works the other way, and our mental model influences what we're capable of seeing. Yes, sort of like confirmation bias, but much more disturbing. For example, the CIA's online library has a coursebook titled _Psychology of Intelligence Analysis_ (1999), and one of the topics discussed is: "Initial exposure to blurred or ambiguous stimuli interferes with accurate perception even after more and better information becomes available." Particularly fascinating to me is that people who are first shown a picture which is too blurry to make out take longer to correctly identify it as it is made clearer. https://www.cia.gov/resources/csi/books-monographs/psycholog...
My father was a psychiatrist. I'm interested in various facets of how people come to regard each other and their surroundings. I'm fascinated with the role language plays in this. I personally believe that computer programming languages and tech stacks provide a uniquely objective framework for evaluating the emergence of "personality" in cultures.
"Diagnosticity is the informational value of an interaction, event, or feedback for someone seeking self-knowledge." https://dictionary.apa.org/diagnosticity
Environments which lack information (diagnosticity) encourage the development of neuroses: sadism, masochism, ritual, fetishism, romanticism, hysteria, superstition, etc., etc. I have observed that, left to stew in their own juices, the spontaneous cultures which emerge around different languages/stacks tend to gravitate towards language-specific constellations of such neuroses; I'm not the only person who has observed this. I tend towards the "radar chart" methodology described in Leary's _Interpersonal Diagnosis of Personality_ (1957), but here's a great talk someone gave at SXSW one year which explores a Lacanian model: https://www.youtube.com/watch?v=mZyvIHYn2zk
Languages like Haskell are really applied type theory etc... In some sense, the academics invent languages for different levels of abstraction to ultimately write papers about how useful they are.
In terms of programming languages, personality-wise, in the end it's all JavaScript. Then there is Java and the JVM, which is on a mission to co-opt multiple personalities.
I graduated into a world without internet (we had it at university, hosted on Unix and VAX machines, but it wasn't available commercially). People who had computers were running DOS. Most businesses had no computers at all.
So the job market was both good and bad. We graduated with skills that were hard to find. But we graduated into a world where big companies had computers, small companies had paper.
So huge market opportunity, but also huge challenges. We'd either graduate into big business (banking, insurance, etc) or start something new.
I joined a person doing custom software development. We'd sell the need, the software, and usually the hardware. When we didn't have work we'd work on our own stuff, eventually switching from custom development to products.
We had to bootstrap, there was no investment money in our neck of the woods.
I won't pretend the job market is the same (or even vaguely similar) now, but it seems to me that opportunities for self-employment still exist. Software is still something you can build with basically zero capital.
Ultimately a job is just someone else finding a way to add value to society. Software is one of the few ways you can do that yourself, skipping the employer.
95% of people see "a job" as the goal. I get that. My own kids are like that (zero interest in starting something new.) But there are opportunities for the other 5%. Yes, it's a lot more than just coding, and yes it's a lot more risky, but the opportunities are there.
As for me, I'm closing in on retirement, but at the same time building a new (not tech) business from scratch, because there's still value I can add, and a niche I can service.
I say this all to encourage current students. You can see the world as "done", or you can see it as an infant just waiting for you to come and add your unique value. And in 35 years' time, feel free to encourage the next generation with your story.
As a CS student I have many thoughts about the reasons for this (AI reducing the need for junior engineers, a market oversaturated by the COVID bubble, opaque job requirements / too low a bar). As much as I'd like to believe it's just a skill difference on my side, it's hard to deny the struggle of my peers and friends around me. I don't want my livelihood to come down to a numbers/chance game. But sadly, that's what it is looking like right now.
Tao undervalues the importance of identity. As he says, it's not a single-player game, and as we all know, players will form teams. If you can convince your opponents to instead play solo (by, say, deconstructing their sense of identity), while you keep yours, you've basically won.
> (There is a nascent field of epistemic game theory, as well as some models of social media manipulation, but these fields are still in their infancy.) A more systematic study of such games would help provide a basic conceptual framework to understand these very real dynamics, and develop strategies to counter or mitigate them.
Time for a renaissance! Honestly, game theory feels more practically relevant now than it did in the MAD era, and it also seems obvious that the "rational actor" posited by classical behavioural economics is a pretty limited abstraction if you're interested in modeling the world. Besides politics/misinformation and the wild stuff that happens in aggregate at the highest levels of "rational" economic policy, it also feels like "management science" never really succeeded in saying much about the difference between healthy and unhealthy bureaucracies, or about the varieties and lifecycles of these kinds of systems. Plus, epistemic/nonmonotonic logics capable of explicit belief modeling seem very well positioned for analyzing and architecting AI systems: checking theoretical properties of agentic interaction protocols, or answering what good mixtures of credulousness (for creativity) vs skepticism (for grounding beliefs) look like, etc.
Here's a really interesting thing, basically a TLA+-style model-checking engine that supports agents, environments, protocols, etc. and explicitly takes epistemics into account: https://sail.doc.ic.ac.uk/software/mcmas/ Anyone else know of similar things? Software suites that are useful for game-theoretic analysis and modeling are kind of hard to find unless it's yet another toy for the prisoner's dilemma.
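To make "explicitly takes epistemics into account" concrete, here's a minimal sketch in Python (not MCMAS's actual input language; the worlds, agents, and propositions are made up for illustration) of the Kripke-model semantics this kind of checker builds on: an agent "knows" a proposition when it holds in every world the agent can't distinguish from the actual one.

```python
# Illustrative sketch only: a tiny Kripke model for epistemic reasoning.
# Each agent has an accessibility relation over possible worlds, and
# "agent knows p" means p holds in every world that agent considers possible.

from typing import Dict, Set

World = str
Agent = str

class KripkeModel:
    def __init__(self,
                 worlds: Set[World],
                 accessible: Dict[Agent, Dict[World, Set[World]]],
                 valuation: Dict[World, Set[str]]):
        self.worlds = worlds
        self.accessible = accessible   # worlds each agent considers possible, per world
        self.valuation = valuation     # atomic propositions true in each world

    def holds(self, world: World, prop: str) -> bool:
        return prop in self.valuation[world]

    def knows(self, agent: Agent, world: World, prop: str) -> bool:
        """K_agent(prop) at `world`: prop is true in every accessible world."""
        return all(self.holds(w, prop) for w in self.accessible[agent][world])

# Two worlds differing only in whether a coin shows heads;
# Alice has peeked (distinguishes them), Bob has not.
m = KripkeModel(
    worlds={"w_heads", "w_tails"},
    accessible={
        "alice": {"w_heads": {"w_heads"}, "w_tails": {"w_tails"}},
        "bob":   {"w_heads": {"w_heads", "w_tails"},
                  "w_tails": {"w_heads", "w_tails"}},
    },
    valuation={"w_heads": {"heads"}, "w_tails": set()},
)

print(m.knows("alice", "w_heads", "heads"))  # True  -- Alice knows it's heads
print(m.knows("bob", "w_heads", "heads"))    # False -- Bob can't rule out tails
```

Tools like MCMAS do essentially this over full temporal-epistemic logics and symbolic state spaces, rather than a hand-built dictionary of worlds.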
Belief-and-knowledge stuff does seem to get consulted and adopted in robotics/autonomous-vehicle research sometimes, a place where wrong answers actually matter. But I sort of expect modeling/specs/invariants/determinism to continue to be neglected almost everywhere else, because resolving ambiguity in advance is kind of threatening for groups that benefit from a zero-theory "just try it!" and "you're doing it wrong, buy more tokens and use this framework" approach to AI and ML. Hope this changes.
Like I'm joking but that's the idea.
There seems to be significant opportunity to zig as others zag. Imagine the Intel letter saying "we are going to take advantage of the current hiring environment to scoop up talent, and push forward on initiatives."
So the "market" demands sacrifice basically and there is cover when everyone else is doing it. You can be contrarian but your stock may get punished. Intel may not have a good plan anyway. The reason the market demands sacrifice is likely because of predicted unfavorable economic headwinds (etc... so signs of recession or what not). These predictions could be wrong though. Companies do constantly realign though, product initiatives fail etc...