All of these arguments seem to think that the brain isn't also generating a statistical ordering of semantic words and actions. Obviously the brain has a far more complex network of specialized subsystems than the artificial models we have today; it is able to generalize language, math, and positional reasoning and mix it with the older parts of the brain for reward and training mechanisms and it can do it in real time. But modern models are showing so much flexibility at different tasks, and so much generality, that it feels like they have something in common with the building blocks of the brain that might get us there.
And even if something capable of all the aspects of human cognition were created with no internal resemblance to cognition, so what, it still works. There's a reason it's called "artificial" intelligence, not "natural intelligence identical to human brains except in silicon instead of neurons".
It's like saying, when the wheel was invented, that it's not "true" transportation because it doesn't involve legs, or, when the phone was invented, that it isn't "true" communication because you don't see the other person's mouth or physical expressions.
I agree that you shouldn't dismiss an airplane by the fact that it doesn't flap its wings, but we also don't call them artificial birds. The debate is around the word 'intelligence', which is misleading.
The human brain has always been compared to our most advanced technologies. It used to be compared to the telephone, then to digital computers, and now to deep learning models. At least the deep learning comparison has some merit, because artificial neural networks are in part inspired by a very simple model of how our brain's neurons work in a network, or rather by how neuropsychologists would explain reinforcement learning to a computer scientist.
However this is a great simplification, and borders on an absurd reduction. You can model our brains using linear algebra, but that doesn't mean our brains are linear algebra computers. There is a whole lot more going on than neurons receiving feedback from other neurons and adjusting the weights for subsequent firing. A lot of our behavior is actually inherited (I know, I spent a whole week here on HN arguing with IQ advocates about the nuances of that statement); neurochemicals and hormones add a whole other level of statefulness not seen in artificial neural networks; and the brain's ability to do raw computation is actually pretty limited (especially next to a GPU). I mean, Cordyceps exists, meaning a fungus can infect an organic system and control its behavior. There is a 100% chance that some yet-to-be-discovered viral and bacterial agents, not sharing any of our DNA—and certainly not “connected” to the “weight matrix”—are also influencing our behavior (just not as dramatically), and a 100% chance that they interact with the DNA controlling our “innate” behavior.
A statistical ordering of semantic words and actions can only ever be a model of what is going on in our brains. The real-world brain is always going to be infinitely more complicated than this model.
We just can’t accept that we might solve ourselves. People are understandably desperate to understand their experiences as more than an encoding of a thing that might be explained.
And all of our surprising wins and awful mistakes had explainable reasons, dammit; it wasn’t just a misfiring of trained statistical networks!
I find this quite similar to the issue of free will: if we live in a generally deterministic universe, where is the space for independent decisions by individuals? Was every single decision we take already predetermined before we were born? A lot of ink has been spent on this topic, and as far as I know, only a small minority of people actually deny free will. One assumption I have as to why is that denying it absolutely sucks from an emotional point of view, and it is a terrible idea to base, e.g., a justice system on.
I think the people hyping these LLM toys look more desperate, trying to prove they are not the next blockchain or self-driving car. When the models fail, it's just excuses: you are not using the latest version, or you're not "prompting" them right.
The LLM value add for coding is less than the value add of syntax highlighting in my experience.
Alternatively, we just can't accept that we might not solve ourselves. People are understandably desperate to find an explanation for everything, but can't admit that it's just never going to happen.
> We just can’t accept that we might solve ourselves.
To solve ourselves is to know ourselves completely, and to know ourselves completely is to be honest in who and why we are what we are simultaneously across all persons. It assumes perfect knowledge.
There is no statistical approximation nor computational power which can do this.
> People are understandably desperate to understand their experiences as more than an encoding of a thing that might be explained.
Another way to frame this is, "some people are nihilists and do not see life as more than an encoding of a thing that might be explained."
> All of these arguments seem to think that the brain isn’t also generating a statistical ordering of semantic words and actions.
There may be a part of the brain that is modelled well by an LLM, but if so, there seem to also be parts that aren't, even by existing “multimodal” models like GPT-4, which are more than just an LLM.
Humans, including their brains (both in the narrow sense and in the broader sense of “everything that contributes to cognition”, which may extend beyond the brain proper), are machines, and their function probably will, someday, be mimicked by machines. But they are still more complex machines than modern “AI” systems.
Agreed. One of the more interesting things about AI is how it forces humanity on a trip to consider what it means to be human. There is no soul, creativity or inspiration. There's only (very complex) agency.
>There is no soul, creativity or inspiration. There's only (very complex) agency.
That's highly debatable. For starters, who said there's no creativity or inspiration in humans, or for that matter, that there can't be in a complex A.I.?
How we achieve that creativity or inspiration is irrelevant, as long as the entity (human or AI) showcases creativity and inspiration.
Nor is it clear why everything being "only (a very complex) agency" would preclude creativity and inspiration.
That's like a worse version of "a human can't be creative or have feelings because it's all a bunch of molecules".
There is such a thing as emergent properties.
>And that's completely fine.
That's also highly debatable. I mean, that it would be "completely fine" if you were right and there wasn't "creativity or inspiration".
There are deep philosophical implications, just not these.
There has been a long history of reifying the behaviors of living things in general (e.g. vitalism) and humans in particular (e.g. dualism).
The success of LLMs challenges a lot of philosophy dealing with what behaviors are and are not possible in the absence of these categorical distinctions.
I have had casual debates years ago, in which strong dualists asserted that the kind of creativity exhibited by today's LLMs is simply impossible. No doubt those folks are busy inventing "special philosophical creativity" that LLMs "aren't really doing," but they've lost credibility.
LLMs have demonstrated that there was never any need to invoke categorical distinctions between human behavior and math-as-implemented-by-physics. The gap is closed; there is no more room for gods.
Or you can say the opposite and claim that there is soul, creativity, or inspiration in everything, even in inanimate objects (basically panpsychism, in contrast to materialism). But regardless of what kind of monism you take, the advent of current statistical AI models forces us to reconsider the Cartesian dichotomy between body and mind, which forms the basis of liberal thought.
Yes, they seem to take that view into account, but this sentiment that the question is settled, and that we aren't anthropically projecting (or whatever the term is), is premature. Just because two processes produce a superficially similar result does not make them the same process. And what you just described amounts to conceding that, though the brain may be a statistical inference machine, it still seems like magic.
It is telling that you talk about all those things happening in "real time." Ask any highly regarded philosopher, from Plato to Wittgenstein (yes, I'm excluding Dennett et al.), and that would be quite the hoot to point out.
Yep. Elevators used to be proof that machines can think. Then compilers, and chess, and go, and search, and …
The problem with AI is that as soon as it works, we stop thinking about it as “artificial intelligence” and it becomes “just automation”. Then AI moves to the next goalpost.
> able to generalize language, math, and positional reasoning and mix it with the older parts of the brain for reward and training mechanisms and it can do it in real time
most brains I run into don't do that much at all, mostly just existing and adaptation-execution
100% true. It is only ego that makes people think we are "unique" and different to AI. If one plugs current AI into a body with drives such as wishing to mate, eat, etc., then an external observer will not be able to spot a difference.
Our minds are in fact the same kind of statistical models, with a gradually declining ability to learn, driven by exogenous, irrational goals to eat and mate.
I generally agree with you, but I think this argument overlooks the lack of an obvious reward function in large language models. GPT-N has no survival instinct, because there is no existential threat of death, no fitness function to optimize to extend its survival at any cost. Without this need to survive, there can be no motivation, and without motivation there can be no intent. And without intent, can there truly be any action?
Intent can be created by a combination of prompting and incorporating the model into a feedback loop of some kind: it has something it was tasked with (via prompting), and the feedback provides info on to what extent the task has been completed; as long as the task is incomplete it may generate more responses.
To crank it up a notch, the assigned task could involve generating subtasks for itself, which are handled in the same manner. This subtask generation could start to look a bit like will/intentionality.
Now consider if the top-level task is something like maximizing money in a bank account that pays for the compute to keep it running :)
(IMO this is still missing some key pieces around an emotion-like system for modulating intensities of actions and responses based on circumstantial conditions—but the basic structure is kinda there...)
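A minimal sketch of such a loop (illustrative only: the llm() function is a stand-in for whatever completion API is available, and the DONE/SUBTASK protocol is entirely made up):

    # Hypothetical agent loop. llm() stands in for a real completion API.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in a real model here")

    def run_task(task: str, max_steps: int = 10) -> list[str]:
        transcript: list[str] = []
        for _ in range(max_steps):
            response = llm(
                f"Task: {task}\n"
                f"Progress so far: {transcript}\n"
                "Reply DONE if the task is complete, SUBTASK: <desc> to spawn "
                "a subtask, or describe your next action."
            )
            if response.startswith("DONE"):
                break  # the feedback loop decides the task is finished
            if response.startswith("SUBTASK:"):
                # Recursive decomposition: the model sets goals for itself,
                # which is where this starts to resemble will/intentionality.
                transcript += run_task(response[len("SUBTASK:"):].strip())
            else:
                transcript.append(response)
        return transcript

The "intent" lives entirely in the prompt plus the loop condition; the model itself remains a next-token predictor throughout.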
I think this kind of argument is making a similar mistake. In the same way that there's nothing fundamentally special about the computation human brains do, there's nothing fundamentally special about our "fitness function" (to reproduce).
It's just hard coded, whereas GPT's is dictated. More or less anyway.
Also our "fitness function" or motivations & goals aren't even that hard coded. You can easily modify them through drugs.
If you give it a prompt telling it that it's controlling a character in a game that needs to survive or meet some other goal and give it choices of actions to do in the game, it will try to meet the goals given to it. Characters inside of GPT are perfectly capable of having goals.
Assuming you're talking about AGI/consciousness/intelligence, then whether you're right depends on what you mean above by saying that "you cannot solve the problem".
- If you mean "you cannot EXPLAIN AGI/consciousness/intelligence if you don't understand it", then that's true, but it's a trivial tautology.
- If you mean "you cannot DEVELOP AGI/consciousness/intelligence if you don't understand it" then that's very debatable.
Historically we have been able to develop all kinds of things, despite not knowing how they work. Tinkering and trial and error is often enough.
After all that's how evolution solved the problem of creating consciousness/intelligence. There wasn't some entity that "understood" intelligence that created it.
Excellent observation. In fact the language part of the brain is only a portion of the brain (albeit a rather large one, though not nearly as large as visual processing). And people who suffer brain damage which renders them unable to speak (or to understand speech, which interestingly involves a different portion, albeit one close by) are still able to demonstrate intelligent behavior with ease.
In fact it is damage to the prefrontal cortex (which has nothing to do with speech) that is most correlated with a detriment in intelligent behavior (suspiciously, also social behavior; food for thought regarding what we consider “intelligence”). Victims of lobotomy had their prefrontal cortex destroyed, and their injuries resulted in them losing their personalities and losing basic function as human beings, even though they were still able (but perhaps not always willing) to speak and comprehend speech.
I don’t think you have an ‘arithmetic’ part of your brain.
What you have that LLMs lack is a visual part of your brain - one which can instantly count quantities of objects up to about 7. That gives you tools that can be trained to do basic arithmetic operations. Although you have to be taught how to use that natural capability in your brain to solve arithmetic problems.
And of course for more complex things than simple arithmetic, you fall back on verbalized reasoning and association of facts (like multiplication tables) - which an LLM is capable of doing too.
Poor GPT, though, has only a one-dimensional perceptual space - tokens and their embeddings from start to end of its attention window - although who’s to say it doesn’t have some sense for ‘quantity’ of repeated patterns in that space too?
That computer scientists think that they are even close to replicating biology and the mammal brain with AI is complete hubris. This is biology inspired engineering, it's not like we are building brains out of silicon here.
Modern AI doesn't replicate biology except that humans find it much easier when you explain something as "Artificial Neurons" versus "Gradient Descent Back-propagation Trained Nested Mathematical Functions." Human neurons don't function anything like deep neural networks nor are the latter based on the former in their current state.
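To make that mouthful concrete, here is a toy, purely illustrative sketch of "gradient descent back-propagation trained nested mathematical functions": a two-layer network with hand-written backprop, fitting made-up data (nothing here resembles a real model):

    import numpy as np

    # The "network" is just a nested function: f(x) = w2 @ tanh(w1 @ x + b1) + b2
    rng = np.random.default_rng(0)
    w1, b1 = rng.normal(size=(4, 1)), np.zeros((4, 1))
    w2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))

    x = np.linspace(-1, 1, 32).reshape(1, -1)  # toy inputs
    y = x ** 2                                 # toy target to fit

    lr = 0.2
    for _ in range(10000):
        h = np.tanh(w1 @ x + b1)       # inner function
        pred = w2 @ h + b2             # outer function
        err = (pred - y) / x.shape[1]  # d(mean sq. error)/d(pred), up to a factor of 2
        # Back-propagation is the chain rule applied through the nesting:
        grad_w2, grad_b2 = err @ h.T, err.sum(axis=1, keepdims=True)
        dh = (w2.T @ err) * (1 - h ** 2)
        grad_w1, grad_b1 = dh @ x.T, dh.sum(axis=1, keepdims=True)
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
        w2 -= lr * grad_w2; b2 -= lr * grad_b2

    # Final mean-squared error; should be small after training.
    print(np.mean((w2 @ np.tanh(w1 @ x + b1) + b2 - y) ** 2))

None of which says anything about neurons; the point is just how mundane the mechanism is.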
Training the brain seems a lot easier than training an AI wrt the number of iterations. You don't need to process billions or trillions of tokens to understand English.
GPT4 appears very intelligent when you discuss program code with it. It's simply crazy; it can write SNOBOL4 code to extract a field from /etc/passwd.
When you discuss other things, it goes off the rails a lot more. For instance, I have had it quote some passages of classic English poetry to me, flatly stating that those passages contain certain words. The passages did not contain any trace of those words, or even remotely similar words. In that situation, GPT4 was being dumber than /bin/grep, which can at least confirm that some piece of text doesn't contain a string.
GPT4 is deliberately trained as a coding technician, so that it will convince programmers who will become its advocates.
1. Programmers are vain and believe that everything they do requires great intelligence. (Not just some of it.)
2. GPT4 can "talk shop" with programmers: it says apparently intelligent things, performs complex refactorings, and intuits an understanding of a piece of code with minimal context based on meaningful identifier names, and so on.
3. Thus, some programmers will falsely conclude that it's highly intelligent, like them.
To be sure, what the technology does is unmistakably a form of intelligence. It solves complex problems of symbol manipulation and logic, in flimsy contexts. To deny that it is artificial intelligence is silly.
AI /= AGI
That's not where the goalposts are for the definition of AI.
Computers playing chess is AI. Google Assistant is AI. An expert system from 1990 is AI.
The problem seems obviously complex.
People need special education to do it, some never learn how to do it.
Making a system to do it (LLMs) is beyond most people's understanding, and even people who understand usually can't make one, for a variety of reasons, e.g. money.
Are we all vain? I guess we are, but I find this take to be a bit too simplistic.
I do agree programmers are arrogant and think everything they do requires vast amounts of intelligence.
I'm not sure what your point is. I am honestly curious. You think having coders as major advocates for your product is somehow a brilliant strategy? If anything, this will work against it.
GPT4 can talk shop, and demonstrate useful code generation and refactoring, as well as understanding.
For instance, as an exercise, I had it write a maze-generating Lisp program that produces ASCII. I wanted it in a relatively little known Lisp dialect, so I described to it some of the features and how they differ from the code it generated.
For instance, GPT4 hypothesized that it has a let which can destructure multiple values: (let (((x1 y1) (function ...)) ((x2 y2) (function ...))) ...).
In plain English, I explained to GPT4 that the target Lisp dialect doesn't have multiple values, but that when we have a pair of values we can return a cons cell (cons a b) instead of (values a b). I also explained to it that we can destructure conses using tree-bind, e.g. (tree-bind (a . b) (function ...) ...), and that this takes only one pattern, so it has to be used several times.
GPT4 correctly updated the function to return a cons cell and replaced the flat let with nested tree-bind.
At the end of the chat, when the code was almost working, I made a bugfix to the function which allocates the maze grid, to get it working.
I told GPT4: this version of the grid allocation function makes it work. Without it, the output is almost blank except for the vertical walls flanking the maze left and right. Can you explain why?
GPT4 correctly explained why my function works: the function it wrote had shared a single row vector across all the grid rows, giving rise to aliasing. It explained it like a computer scientist or seasoned developer would.
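For readers who haven't hit this class of bug, the analogous mistake in Python (my example, not the Lisp from the chat) looks like this:

    w, h = 4, 3

    # Buggy: one row object referenced h times; every "row" is the same list.
    grid = [[0] * w] * h
    grid[0][0] = 1
    print(grid)  # [[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]] -- all rows change

    # Fixed: allocate a fresh row per grid row.
    grid = [[0] * w for _ in range(h)]
    grid[0][0] = 1
    print(grid)  # [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]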
It's like a somewhat dim-witted, but otherwise capable coding clerk/technician, which talks smart.
With GPT4, you're Sherlock, and it is Watson so to speak. (Sorry, IBM Watson.) It can speak the language of crime investigation and make some clever inferences. In the end, you do all the real work of designing and debugging, and judging complex requirements against each other in the broader context. It saves you the keystrokes of doing the tedious coding you're used to doing yourself.
On the other hand, you expend some keystrokes explaining yourself. Some of the chat could be saved and used for documentation, because it captures requirements and why some things are done in a certain way and not otherwise (rejected decisions).
> GPT4 is deliberately trained as a coding technician, so that it will convince programmers who will become its advocates.
I'm a programmer and I've used GPT4 for a variety of tasks (well, tried to). The results have been mediocre on average: usually syntactically correct, but more often than not semantically incorrect. It usually ends in frustration, as GPT-4 keeps responding with confidently incorrect answers and, upon the slightest expression of doubt, tends to spin in circles.
ChatGPT: <Implausible response #1>
Me: Are you sure? Reasons A, B, C [...]
ChatGPT: I apologize... <implausible response #2>
Me: Are you sure? Reasons D, E, F [...]
ChatGPT: I apologize ... <repeats implausible response #1>
I'd like to know what people who are so impressed with GPT-4's programming capabilities are doing? It must be TODO apps, solving leetcode problems, writing basic Express.js routers, some basic React components and code for <one of the top 10 popular libraries>. The kind of things ChatGPT has seen a million of examples of.
To use the tool effectively, you can't use Socratic questioning on it, or not always.
You have to already know how to code the problem, so that you can spot what is wrong, and, if necessary, tell the thing how to fix it.
The fix is not always going to come from the training data; it needs to glean it from what you give away in your follow-up questions, together with its training data.
I went through an exercise in which I encoded a paragraph from Edgar Allan Poe with a Vigenère cipher. I presented this to GPT4 and asked it to crack it. First I had to persuade it that this is ethical: it's a cipher I made myself and it's not my secret. We worked out a protocol by which it could ask me questions showing that I know the plaintext.
It was quite a long chat, during which I had to give away many hints.
In the end I basically gave the answer away, and the thing acted like it cracked the key. Then I reproached it and admitted that yes, it didn't crack the key but used my hint to locate the E. A. Poe passage.
Basically, if you know the right answer to something, and GPT4 isn't getting it, it will not get it until you drop enough hints. As you drop the hints, it will produce rhetoric suggesting that it's taking credit for the solution.
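For reference, the Vigenère encoding side of such an exercise is only a few lines. This is my illustration with a made-up key, not the actual text or key from the chat:

    def vigenere(text: str, key: str, decrypt: bool = False) -> str:
        # Shift each letter by the corresponding letter of the repeating key.
        out, k = [], 0
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                shift = ord(key[k % len(key)].lower()) - ord('a')
                out.append(chr((ord(ch) - base + (-shift if decrypt else shift)) % 26 + base))
                k += 1  # the key only advances on letters
            else:
                out.append(ch)  # punctuation and spacing pass through
        return ''.join(out)

    c = vigenere("Once upon a midnight dreary, while I pondered, weak and weary", "raven")
    print(c)                                   # the ciphertext
    print(vigenere(c, "raven", decrypt=True))  # round-trips to the plaintext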
But out of all the programmers I know, only certain ones are embracing it; the rest are still stuck in "I tried it and the code wasn't exactly what I wanted - it's all hype" land. And I think for one of the reasons you say: programmers are vain and believe everything they do requires great intelligence. A lot are still missing the forest for the trees.
I tried so much to use it for my day job, but it's next to useless; it takes more time to get it to go where I want than to actually do it myself. My day job is far from requiring great intelligence, but the tasks are too specific for ChatGPT.
I've used it for side projects though, especially front end stuff that I absolutely hate (js) and it works fine for that, but that's because I'm absolutely garbage at it in the first place and probably ask it to solve the most answered things ever on stackoverflow &co
I think a lot of it is due to the unresolved legal questions. Basically nobody can use it professionally yet. As soon as that happens I expect it to become a standard tool.
Although... on the other hand there are plenty of programmers that don't even use IDEs still.
I got a coding challenge as part of the interview process. I had 2h to complete it, but I finished in 30 minutes thanks to ChatGPT. I wrote some test cases and told ChatGPT to generate more. I reviewed them and copied a few relevant ones. It also hinted that I could add support for fractions.
> That's not where the goalposts are for the definition of AI.
We're well past that goalpost, like decades ago, if you haven't realized.
The simplest form of AI is just a series of if-else statements (an expert system). A highly sophisticated network of conditional statements can make fully informed decisions way faster and more accurately than any human mind.
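A toy sketch of that idea, with rules invented purely for illustration:

    def triage(temp_c: float, heart_rate: int) -> str:
        # A tiny rule base: decisions are nothing but nested conditionals
        # encoding rules elicited from a (here, imaginary) domain expert.
        if temp_c > 39.5:
            return "urgent" if heart_rate > 120 else "see a doctor today"
        if temp_c > 38.0:
            return "monitor; recheck in 4 hours"
        return "no fever; routine"

    print(triage(40.1, 130))  # -> urgent

Scale the rule count up by a few orders of magnitude and you have a 1990s expert system.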
What the hell, we are looking at essentially millions of matrix multiplications done on powerful GPUs. And we're told this is what the brain does. Which is supremely stupid, since as Bernardo Kastrup has emphasized many times, our brains are completely UNLIKE those systems.
The hubris in Big Tech right now is truly pathetic and imho a sign of a real disconnect with a sense of meaning and purpose to human life.
In fact, recently I was listening to a podcast with Donald Hoffman, where he mentioned brain studies done on people with split brains, etc., and how these people did not even have a single consciousness. So even the assumption that we have one "AGI" program running in our brain is not at all what goes on in our brain.
Likewise, Iain McGilchrist pointed out how the left and right brain hemispheres are two kinds of attention to the world, and almost two different types of consciousness in us.
I quit listening to Rupert Spira many years ago, but I found this interaction between him and Donald Hoffman really insightful. Donald mentions the brain studies in this podcast:
The Convergence of Science and Spirituality | Donald Hoffman & Rupert Spira
> The hubris in Big Tech right now is truly pathetic and imho a sign of a real disconnect with a sense of meaning and purpose to human life.
Many people in the developed world, especially tech people, are now living online, spending the majority of their lives looking at a screen, chatting to each other on Slack and via forums like this, and now comparing their intellectual abilities to those of supercomputers running a language model of extreme complexity, which is basically the sum total of all human technical and intellectual output. All popular LLMs exist because of this online universe we've created.
What the online life does is actually reduce one to an "autonomous agent". You really become ChatGPT when you live this way. Go out for a night of dancing, practice martial arts, paint a real painting with oil on canvas, cook a nice meal, smell and taste it, or hike a mountain, then come back and try to tell yourself you're an LLM; it's completely preposterous. But it does make sense if you live an almost purely "intellectual and online existence", which is the world we've created in 2023. Science and religion are now merging, it is said science will bring us "immortality" soon through AI advancement, and many would immediately upload their brain to a computer if given the chance.
It completely makes sense to forget what you actually are, and to reduce yourself to "software running on hardware", or a statistical model, or a person who clicks like and dislike on social media most of the day, becoming a model trainer, etc. Even if there is some truth to the former statements, it's stupid to discount the rest of existence because you can't instantly translate text into many languages or exhibit whatever "emergent property" is displayed in some complex system.
The intellect is something which should be consulted or used for survival or problem solving, to help one get back to enjoying life and that what’s most important.
Intellect is not something one should strive to actually become.
> What the hell, we are looking at essentially millions of matrix multiplications done on powerful GPUs. And we're told this is what the brain does. Which is supremely stupid, since as Bernardo Kastrup has emphasized many times, our brains are completely UNLIKE those systems.
That's irrelevant. Transformers are Turing complete. Whatever mathematical system underlies human cognition will thus have some expression as a transformer.
These transformers are learning how to reproduce functions by inferring associations between inputs and outputs. The function being reproduced here is the one in the human brain that produces human text. Therefore these transformers are literally learning how the human brain generates text, i.e. this is what the brain does.
Furthermore, there have been neuroscience papers correlating spiking behaviour in neural networks and human neurons when processing the same information, lending even more credence to this hypothesis. It's not as far-fetched as you think.
> That's irrelevant. Transformers are Turing complete. Whatever mathematical system underlies human cognition will thus have some expression as a transformer.
That's irrelevant, too, because nothing indicates that the actual transformer of, say, GPT, or its training, is in any way connected to such an expression of the human brain.
Linux is written in C and that‘s Turing complete. And? Therefore Linux is like the brain?
I'd wager that intelligence is nothing but a reified concept. Something which doesn't innately exist other than as an abstract concept which we have defined, to help us explain complex nature, without the need of fully understanding the underlying complexity giving rise to that nature. Like how cave men used the concept of "gods" for a satisfying explanation for why it rains - we use intelligence to ascertain why something is able to do or achieve certain things.
But fundamentally speaking, evidence suggests that the nature of reality follows simple rules. That everything obeys simple algorithmic rules. And therefore, suggests humans are just as robotic as anything else, including the earliest of computers and current "AI". We don't look at it that way, as we don't fully understand the underlying complexity which gives rise to our behavior (whereas for computers we do), so we conveniently call ourselves intelligent as a way to appease the thirst for a satisfactory explanation for our behavior, and are dissatisfied with calling current AI, as true AI.
Thus, I argue, depending on how one wants to define intelligence: either AI does not innately exist and can never exist if everything truly obeys simple rules, or AGI has existed long before humans came to exist, as it's merely a concept which we are free to define.
There's a paper about that, "The Myth of Intelligence" by Henry Schlinger. Abstract:
> Since the beginning of the 20th century, intelligence has been conceptualized as a qualitatively unique faculty (or faculties) with a relatively fixed quantity that individuals possess and that can be tested by conventional intelligence tests. Despite the logical errors of reification and circular reasoning involved in this essentialistic conceptualization, this view of intelligence has persisted until the present, with psychologists still debating how many and what types of intelligence there are. This paper argues that a concept of intelligence as anything more than a label for various behaviors in their contexts is a myth and that a truly scientific understanding of the behaviors said to reflect intelligence can come only from a functional analysis of those behaviors in the contexts in which they are observed. A functional approach can lead to more productive methods for measuring and teaching intelligent behavior.
I have been thinking along the same lines - the fact that something as comparatively simple and barebones as an LLM can manipulate language symbols well enough to carry on a conversation suggests that it's a lot easier than was previously imagined. I used to think that language was one of the defining characteristics of intelligence, like the instruction set of a cpu, but chatgpt seems like persuasive evidence against that.
In order to predict the next token it’s doing something more like simulating the writer of the words and the context they were likely to be in while writing the words. You cannot make accurate predictions without understanding the world that gave rise to these words.
Consider a detective story with all the clues laid out and then at the end the detective says: “I know who it is. It is: …” Correctly predicting the next “tokens” entails that you incorporate all the previous details. Same goes for math questions, emotive statements, etc.
I’d be careful calling it simple. They might be simulating humans including for example a theory of mind just as a side effect.
Our current ‘simple rules’ to explain nature can only account for a small % of the visible universe.
Assuming there are simple rules, we don’t know how, for example, an electron has the wherewithal to follow them (when does it read the rules or check on them, where are they stored, etc.). It’s mystery all the way down (unless you define it as simple using hand-wavey abstractions ;))
In the end we will find out that AI is not that intelligent and since it is so close to mimicking us it will say the same thing about our own intelligence.
Normally the materialistic attitude is based on a conceit that we basically already know everything worth knowing. Your position seems to be that we don't, but that when we do it will be equally empty and meaningless.
Yet you have undeniable proof that your own consciousness is as real as it gets, and that you experience life in a way that isn’t just an abstract concept. It’s absolutely there (at least, from your own point of view).
I’m not a religious person and I don’t believe in a soul or anything magical like that, but it’s just impossible for me to accept that I’m just a bunch of atoms following rules. I know that there’s something there, I see the evidence right before me, even if I can’t explain it or prove it to you.
The elephant in the room in these discussions of AI is the concept of the corporation. Most of the things people are worried about AI doing are things corporations already do. Deceptive marketing, exploiting customer information, conspiring to keep prices high, creating distractions from things companies are doing, setting up monopolies, lying to customers - the usual. With AI, this can be automated.
Most of this is ordinary consumer protection. Regulating AI means regulating corporations. Placing hard limits on what AIs are allowed to do means consumer protection with teeth. Nobody in power wants to talk about that too much.
As for AIs taking over, that will come via the corporate route. Once AI systems can outperform CEOs, investors and boards will insist they be in charge.
This is a great framing. I also appreciated this similar idea in Matthew Butterick's blog:
> Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything. [1]
While in a corporation there's still a person somewhere who could be held accountable, AI diffuses this even more.
> Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t.
Here's Frontier Airlines announcing proudly to investors that they will do exactly that.[1] See page 44.
Today: Call center. Avenue for customer negotiation.
Tomorrow: Chatbot efficiently answers questions, reduces contacts and removes negotiation.
> we will delegate decisions to AI systems because they can get away with everything that we can’t.
This is one of the main reasons I quit the Facial Recognition industry; it is being used to delegate decisions, to remove responsibility of those decisions from those that need to be held accountable.
I worked as principal engineer of one of the top 5 enterprise FR systems globally, and the number of end-users fraudulently abusing the software blew my mind. Case in point: police called for a street crime, police ask the victims what celebrity their culprit looks like, police then put images of that celebrity into their FR software to collect suspects, followed by ordinary innocents who happen to look like celebrities being called into lineups and harassed by the police. And this practice is widespread!!!
That is just one example of the incredibly stupid ways people will use our software, potentially harming large numbers of innocents.
Unfortunately, even having humans in charge doesn't mean those humans will be punished for malfeasance. When was the last time you saw an exec personally pay for their bad conduct?
The elephant in the room is the concept of the State. Bad corporations flourish in the shadow of bad government.
Governments are supposed to safeguard the interests of the many. Over decades they used collective resources to advance the research that made IT/AI possible. They granted corporates (private interests) the license to operate, a financial system to oil them, a security system to protect them.
If current IT has the shape it does (oligopolistic, unaccountable, abusive, potentially a runaway disaster) it is entirely due to complicit, captured governments and malfunctioning political systems.
I believe this is an issue of scale: humans need to operate at a smaller scale for fairness to exist at all. We do not have any evolutionary preparation for operating collectively at global scale. Our small minds try to take advantage of our situation when in positions of power, while being incapable of realizing the mass unfairness that behavior actualizes.
This is an issue with human nature itself. We have to change our innate nature, and we all know... that is not going to happen.
I can't seem to find the source, but I remember William Gibson arguing in an interview some years back that corporations were already artificial intelligences, organisms even, serving their own interests and survival rather than the benefit of any person who is part of them.
If an AI can outperform, then somebody will set up a company and let an AI lead it (even if just behind the scenes). Incumbent companies will need to adapt.
I just started reading “Life 3.0” and it starts out with an extreme version of this scenario.
A random number generator can outperform more than half of CEOs.
That's not why we have CEOs. Gamblers are a superstitious bunch and they love their lucky monkeys. So they put into law that they are entitled to one lucky monkey per corporation.
Almost all such legal framings have been motivated by the legal difficulties of applying laws to organized groups of individuals. For example, entering into contracts is legally between two parties. Presumably, the sheer amount of legal reworking necessary to actually create and frame laws addressing legal situations involving groups of individuals as a single organized entity, back when these precedents were being set, outweighed objections. But today, especially with AI, the "it's too much work" excuse does not fly.
We need laws that are specifically framed in consideration of the requirements and practicalities of such entities, not laws that slowly parcel out the legal rights of people to entities. Now that machines may soon be able to be employed as willful agents, we are willy-nilly granting legal personhood to machines.
What about limited liability? It seems to me to be an enormous concession to corporations and their shareholders, and which never seems to be questioned in popular consciousness.
Or, continuing the idea even further, we might say that capitalism is itself already a form of highly sophisticated artificial intelligence that we've created: an abstract autonomous entity whose only purpose is the accumulation of Capital. As a transnational entity it has no bounds; it controls billions of human agents and shapes the Earth to its will, changing ecosystems and climates. It is to be questioned whether humans have lost control of this AI, and whether it now controls the course of humanity itself.
I agree, but I would call it a game or perhaps a dance. One full of contradictions, for sure. But one we can direct to Harmony, as well. Gratuity, for instance, is expanding.
Who better than an AI to 'ride' an AI? AI will be the game master, in role-playing games terms.
It's a 15 minute talking head video from 4 years ago, and when you get to the end, it says that "Corporations are the real misaligned AGI" will be covered in the next video. No sign of that next video.
His big point in the first video is that corporations can achieve only a modest level of superintelligence. A corporation can have more breadth of knowledge than any individual, and for some tasks, that's good enough. But it's mostly using more people for broad coverage.
All this predates GPT. What we're seeing with GPT is good breadth combined with mediocre intelligence. That's very corporate. It handles much of what people do in offices.
We may not get super-intelligence that way. Just super bureaucracy. Which is a problem.
This bit here is what makes a huge difference. It's very important to note that AI is all of this at scale. Corporations can now do all of this en masse. It will be cheaper too, so it lowers the barriers to entry: actors who couldn't afford this before, now can.
>Once AI systems can outperform CEOs, investors and boards will insist they be in charge.
If it comes to AIs replacing CEOs and boards, those in power will change the rules so they can't be replaced. Who do you think influenced the laws for corporate governance? The public?
You raise an important point in discussing the role of corporations in the development and deployment of AI. Indeed, many of the concerns surrounding AI are not necessarily new but rather stem from existing corporate practices. Automating these practices through AI might exacerbate existing issues and create new challenges.
Consumer protection is a crucial aspect of addressing these concerns. Regulating AI should involve regulating corporations' use of AI to prevent harmful practices, promote transparency, and ensure ethical applications. This would require a balance between fostering innovation and imposing necessary restrictions to safeguard consumers and society as a whole.
The potential for AI systems to outperform CEOs and other decision-makers raises questions about the future of corporate management. While there is a possibility that AI could be employed to optimize decision-making and increase efficiency, it is important to recognize the limits of AI in understanding human values, ethics, and emotions. Striking the right balance between utilizing AI capabilities and retaining human oversight will be essential to navigate the future of corporate governance.
Moreover, the involvement of policymakers and regulators is crucial in addressing these challenges. As AI continues to advance and integrate into various aspects of society, it becomes more important to have comprehensive regulations that ensure the responsible development and use of AI technologies.
Second. All of this applies to productivity gains too. One of the dumbest ideas I've seen floated is a "robot tax". Because what's a robot? If a new kind of light bulb has an MTBF 10 to 100x better than incandescents, you've reduced the labor of replacing street lights by 90 to 99%. That's a far bigger hit than anything GPT is doing to me, but it won't get covered under a "robot" tax. What about telecom replacing horseback couriers? All technologies affect the nature and scope of labor. Compartmentalization only does one thing: it lets you fight over definitions to pick winners and losers.
And how would we make a tax that covers all the productivity gains indiscriminately, without pointless compartmentalization?
Well, it's a tax on corporate profits of course. Basic general taxes that we already have.
Most corporations do not plot to kill or destroy their competition or adversaries using weapons, even if that might be in their economic interest. Only countries do this sometimes. I think this is because corporations don't have enough power over society.
If, however, you accelerate competition, eventually you might get just one corporation (perhaps appearing to be several, directed by the same "best-there-is" AI-CEO) that achieves world domination and can never be removed from power. An immortal dictator that never rests and can monitor each action of and interact personally with each person under their control at the same time.
> Most corporations do not plot to kill or destroy their competition or adversaries using weapons, even if that might be in their economic interest. Only countries do this sometimes.
When discussing what corporations might do, it's always worth remembering the United Fruit Company and the origins of the phrase "Banana Republic"[1]. United Fruit Company was still doing this within living memory.
Alternatively you could look at the current crop of sportswear companies. When given the choice between "made by forced labour" and $5 more the market goes with cheap every time.
In the absence of legal restraints, corporations will most definitely optimise away democracy and human rights in favour of profits.
The main question, for these decisions, is 'who is in charge?'. Who gets punished with no bonus and a bad reputation, or even goes bankrupt, if a company dies?
Same responsibility barrier exists for the driverless cars adoption.
In short, most people feel better (if they even think about it) if a person is in charge and taking responsibility for bad decisions. Someone who will either die with them or go to prison if things go bad.
Probably the same goes for pilotless airplanes: the first bad accident can ruin it.
Corporations are made of people: subsystems with latency measured in minutes to days and bandwidth measured in words per minute. AIs are made of subsystems communicating with latency dictated by the speed of light and bandwidths in GB/s.
When you want to clone a corporation it's a whole enterprise. Cloning an AI takes one click.
Yes, corporations were an early form of AI, and horses were an early form of transport. That doesn't mean horses and rockets are the same, or that no new safety precautions apply.
> Once AI systems can outperform CEOs, investors and boards will insist they be in charge
And nothing in the rules says that a dog can't play basketball.
What that would actually result in is a complete and utter lack of any accountable entities, which the government would block and stockholders wouldn't want to begin with.
Thank you for this! I've been thinking about this since the digital artists got up in arms about AIs coming for their corporate jobs of shoving more ads down our throats.
People with lots of money have always been able to buy other people's time to get ahead.
We will always need a charismatic human being to be the figurehead of an organization, so I don’t think AI will ever replace CEOs. They will wind up making the important decisions though.
... perhaps, but the "embodied Turing test" has already become the typical term in most ML papers for the new goalpost. Since the usual Turing test was already passed, and surpassed by large margins, several years ago, the new goal is a system such that, given the choice between a real human in front of them and a humanoid robot that looks like a human (think Westworld), people are incapable of determining which is which (using proper statistics).
This terminology is becoming widely used by prominent AI researchers at e.g. DeepMind (Botvinick et al.), Stanford (Finn et al.), MIT, Northeastern, Meta, etc., as we have to switch to a new goal in light of the advancements of the past few years. Importantly, this shift has been happening behind the scenes, independent of the 'OpenAI' craze, although that craze has obviously made a select portion of the advancements accessible to the public. There is much more going on than just the GPT series that few people are engaging with, but much is hidden in the literature.
To your point - it's of course extremely strange to conceive of - but while the quirks of human form may be a useful tool at the moment, there isn't anything necessarily fundamental that requires it in the long term.
Neither Tim Cook nor Jeff Bezos comes across as particularly charismatic. I would say they are in their positions because of their vision and execution, despite not being particularly charismatic.
So, if an AI CEO can execute better than human CEOs, it will dominate.
What we probably won't need or have is masses of consumers. Without those you don't need charismatic CEOs either. Stock markets and charismatic CEOs are a pre-A.I. concern.
The most efficient market has no human actors in it. Imagine a market composed only of highly rational AIs. Finally, all those unrealistic economic models would work not only on paper but in reality. Perfection!
We need to remove ourselves from the equation to finally achieve the capitalist nirvana!
We have no idea what they are. Their emergent properties remain unexplained, so soundbites like "linguistic mirror" are like saying "brains are organs, after all".
We should not try to achieve consumer protection via laws, like GDPR, that bureaucratically regiment private interaction.
Consumer protection should be promoted through technological means, e.g the EU could have funded development of fingerprintless browser technologies, zero-knowledge proof based identity verification, etc, instead of through GDPR.
(Largely) fingerprintless browsers exist: Just use Firefox and enable the "resist fingerprinting" setting.
If you do, you will find that large parts of the web become unusable, with CAPTCHAs and challenges everywhere, some of them too difficult for even a human to solve.
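For anyone who wants to try it: as far as I know, the setting corresponds to this about:config pref:

    privacy.resistFingerprinting = true

Among other things it spoofs the reported timezone and restricts canvas reads, which is part of why sites break in the way described above.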
I don't really agree with this... there is a parallel issue.
If AIs are doing most of the work in megacorps (especially if robotics take off), then basically all profit is going to the owners and tiny groups of executive AI "managers."
Even the most staunch (but sane) capitalist libertarian will admit mega income inequality is baaad juju.
For those who don’t know about the author: he is most well known as a founder of the field of virtual reality. As a full-time philosopher in the CTO’s office at MSFT, and the originator of concepts like data dignity, he’s uniquely qualified to write this kind of piece.
Jaron has an amazing publicist. I challenge anyone to read Wikipedia’s history of VR (https://en.m.wikipedia.org/wiki/Virtual_reality#History) and assess him to fairly be called a founder. There was serious work being done for over a decade before he even started his firm; VPL never came close to being a standard and the VR tech today has hardly any relation to his “post symbolic” language.
He’s certainly a very smart and interesting guy but my alarm bells go off when his supposed achievements are used as justification for the piece being valid rather than… well, any of the arguments he made.
It's absolutely ludicrous nonsense. It's like saying someone invented the idea of 'wondering what would happen if the sun blew up'.
VR is a trivial idea that pretty much everyone has thought about at some point. Its implementation is hard to do, and if I had to point to anyone there, it'd be Carmack, who's probably done the most.
He did have a team that produced an early VR system. It took two expensive Silicon Graphics machines to run the headgear. I got to try it, with him present.
There was a second person wired up, driving a model of a lobster or octopus or something, and I could sort of interact with them. All the right parts were there. But it didn't work very well. Too slow, and too low-rez. Turn head, wait for tracker and rendering to catch up. 80s VR was pretty marginal.
He hasn’t given up on VPL, though; only a few years back Vi Hart’s eleVR team was working on “sculptural” programming languages within VR, at Microsoft Research under Jaron. (…but I think that team disbanded a couple of years back)
His thinking seems much clearer than most of the discourse around this technology.
But I think that a better way of thinking about these systems is an imperfect world model analogy. Not a made-of-people analogy.
These models are not made of people. People helped gather data about the world around us, and that data allowed a world model to be built, which includes models of people, along with everything else.
And it’s a world model, not a human-civilization model. The data goes through the funnel of The Pile and the like, but ultimately the predictive model is of the universe, not of humans.
I’ve seen many of the very best AI scientists (Hinton, Hassabis, LeCun, etc.) say that we urgently need philosophers and ethicists deeply involved in this topic. I think a guy like Lanier, who knows CS fundamentals but has been talking about the ethics and impact of different digital creations for decades, is a pretty good guy to have on team humanity, no?
It should have been obvious to all of us after AlphaZero that intelligence, in a general sense, is emergent from large networks of feedback loops. But our monkey brains tell us that we are special, so we look for data to confirm that bias. No one is more guilty of that than "Experts".
He is uniquely qualified to be wrong about this in a specific way.
And a roommate of Richard Stallman at one point, AFAIR.
the easiest way to mismanage a technology is to misunderstand it
Indeed.
His idea of attaching provenance to the sources of information used in models is a good one, and one that already has rumblings of legal weight behind it (see the various articles about copyright claims in response to GPT/Copilot).
I worry a bit that his argument is too centred on digital information creation. Though I suppose that's the novelty of the most recent pieces of technology called AI: they affect information workers, people who already use a computer interface for some large percentage of their work. Still, the topic of /physical work/ (fieldwork, factory work, service work) seems one or more steps removed from LLMs. The management of that work may be affected, but the work itself has already (it seems to me) gone through its first computer-revolution shock.
Edit: I'll add that the whole article has a 90s-Wired feel to it, which is refreshing to see. There's been something of a slowdown in tech-revolutions for the past decade, and it's not original to say that we may be at the start of a new one.
Wow, what a giant word salad. I struggle to parse anything meaningful from it.
There is intelligence and it is artificial. I think some people have a real hard time grasping the term “artificial”. There is also this “true intelligence” spectre that gets invoked. Nobody agrees what is meant by it. It is also unclear why one should care.
I tried reading it twice, but I am very sorry: I wanted to say some of the ideas were... short-sighted, but it actually borders on stupid. I don't know why this is on HN.
“Cars don’t walk, it is not True Mobility” “We have to understand legs first, to absurd degrees, before we can talk about True Mobility”. Ack, sorry.
Can we just backronym AI to Approximate Intelligence and carry on?
There are so many dangers and possibilities of AI that are fascinating, and none of them are even remotely as interesting as those of actual animal and human intelligence.
It's as if AI is held up as a mirror to our stupidity, and the Turing Test is how we measure ourselves against our models of computation. I'm really weirded out by the fear and exploitation being demonstrated.
There's more to reality, and intelligence, than mathematical models we currently use and pretend to understand.
“Why should I refuse a good dinner simply because I don’t understand the digestive processes involved?”
GDP is measured both as production and as consumption. The figures have to be the same, within a "statistical discrepancy". The consumption is final consumption: goods and services used by households.
You are asserting that households will each have 100 times the income that they do now (and be spending the same proportion of income as they do now).
Edit: I believe that if most companies, or even a significant minority of them, can make effective use of LLMs, the most likely short term effect is a prolonged recession (i.e. steadily declining GDP, not increasing) with high unemployment.
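For reference, the accounting identity being leaned on above (my gloss of standard national accounting, not something from the parent comments):

    C + I + G + (X - M) \;=\; \mathrm{GDP} \;=\; \sum_i \mathrm{ValueAdded}_i \;+\; \text{statistical discrepancy}

Both measures also equal total income. So a claim that production rises 100x is, by the same identity, a claim that total household income rises on roughly that order.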
Currently LLMs cannot construct factories, grow crops, or transport goods. Even if it leads to brilliant inventions that were otherwise impossible before, it takes a while for humans to physically make or do things.
How confident are you in your prediction? I’d be willing to make a wager with you that would pay off big if you’re right.
However this is a great simplification, and borders on an absurd reduction. You can model our brains using linear algebra, however that doesn’t mean our brains are linear algebra computer. There is a whole lot more going on than neurons receiving feedbacks from other neurons which adjusts the weight for subsequent firing. A lot of our behavior is actually inherited (I know I spent a whole week here on HN arguing with IQ advocates on the nuance of that statement), neurochemicals and hormones add a whole another level of statefulness not seen in artificial neural networks, the brains ability to make computations is actually pretty limited (especially next to a GPU). I mean, cordiseps exists, meaning a fungus can infect an organic system and control its behavior, there is 100% chance that some yet to be discovered viral and bacterial agents, not sharing any of our DNA—and certainly not “connected” to the “weight matrix”—are also influencing our behavior (just not as dramatically), and there is 100% chance they interact with our DNA also controlling our “innate” behavior.
What is going on in our brains can only be modeled using statistical ordering of semantic words and actions. The real world brain is always going to be infinitely more complicated than this model.
And all of our surprising wins and awful mistakes had explainable reasons, dammit; it wasn’t just a misfiring of trained statistical networks!
The LLM value add for coding is less than the value add of syntax highlighting in my experience.
To solve ourselves is to know ourselves completely, and to know ourselves completely is to be honest about who we are and why we are what we are, simultaneously across all persons. It assumes perfect knowledge.
There is no statistical approximation nor computational power which can do this.
> People are understandably desperate to understand their experiences as more than an encoding of a thing that might be explained.
Another way to frame this is, "some people are nihilists and do not see life as more than an encoding of a thing that might be explained."
There may be a part of the brain that is modelled well by an LLM, but if so, there also seem to be parts that aren’t, even by existing “multimodal” models like GPT-4, which are more than just LLMs.
Humans, including their brains (both in the narrow sense and in the broader sense of “everything that contributes to cognition”, which may extend beyond the brain proper), are machines, and their function probably will, someday, be mimicked by machines. But they are still more complex machines than modern “AI” systems.
And that's completely fine.
Not only are LLMs very capable by themselves; they also significantly improve speech and image models.
We’ll need to break out of the Chomsky hierarchy and develop some new theories of language.
Vocal communication isn’t unique to humans; a new theory should be more broadly applicable to non-human and non-vocal languages.
I’m excited to see what big questions will be answered, and what new questions arise.
That's highly debatable. For starters, who said there's no creativity or inspiration in humans, or for that matter, that there can't be in a complex A.I.?
How we achieve that creativity or inspiration is irrelevant, as long as the entity (human or AI) showcases creativity and inspiration.
Nor is it at all clear why being "only (a very complex) agency" would preclude creativity and inspiration.
That's like a worse version of "a human can't be creative or have feelings because it's all a bunch of molecules".
There is such a thing as emergent properties.
> And that's completely fine.

That's also highly debatable. I mean, that it would be "completely fine" if you were right and there wasn't "creativity or inspiration".
There has been a long history of reifying the behaviors of living things in general (e.g. vitalism) and humans in particular (e.g. dualism).
The success of LLMs challenges a lot of philosophy dealing with what behaviors are and are not possible in the absence of these categorical distinctions.
I have had casual debates years ago, in which strong dualists asserted that the kind of creativity exhibited by today's LLMs is simply impossible. No doubt those folks are busy inventing "special philosophical creativity" that LLMs "aren't really doing," but they've lost credibility.
LLMs have demonstrated that there was never any need to invoke categorical distinctions between human behavior and math-as-implemented-by-physics. The gap is closed; there is no more room for gods.
It is telling that you talk about all those things happening in "real time." Ask any highly regarded philosopher, from Plato to Wittgenstein (yes, I'm excluding Dennett et al), and that would be quite the hoot to point out.
Yep. Elevators used to be proof that machines can think. Then compilers, and chess, and go, and search, and …
The problem with AI is that as soon as it works, we stop thinking about it as “artificial intelligence” and it becomes “just automation”. Then AI moves to the next goalpost.
Most brains I run into don't do that much at all; mostly just existing and adaptation-execution.
Our minds are in fact the same statistical models, with a gradually declining ability to learn, driven by exogenous irrational goals to eat and mate.
To crank it up a notch, the assigned task could involve generating subtasks for itself, which are handled in the same manner. This subtask generation could start to look a bit like will/intentionality.
Now consider if the top-level task is something like maximizing money in a bank account that pays for the compute to keep it running :)
(IMO this is still missing some key pieces around an emotion-like system for modulating intensities of actions and responses based on circumstantial conditions—but the basic structure is kinda there...)
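A minimal sketch of that loop, with a hypothetical llm() standing in for whatever model call you like (the names and prompts here are mine, purely illustrative):

    # Toy recursion: every task is either executed directly or broken
    # into subtasks, and each subtask re-enters the very same loop.
    from collections import deque

    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in a real model call here

    def run(top_level_task: str, max_steps: int = 100) -> None:
        queue = deque([top_level_task])
        for _ in range(max_steps):
            if not queue:
                break
            task = queue.popleft()
            plan = llm(f"Break this into subtasks, or reply DONE: {task}")
            if plan.strip() == "DONE":
                llm(f"Do this task now: {task}")
            else:
                # the self-generated subtasks are handled the same way
                queue.extend(s for s in plan.splitlines() if s.strip())

Nothing in the loop itself is intelligent; the interesting behavior, if any, comes from what the model puts back into the queue.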
It's just hard coded, whereas GPT's is dictated. More or less anyway.
Also our "fitness function" or motivations & goals aren't even that hard coded. You can easily modify them through drugs.
I'm not sure that's as clear cut as you make it sound.
For example, curiosity could be a fine engine for motivation as well, even if you don't care whether you'll survive or not.
- If you mean "you cannot EXPLAIN AGI/consciousness/intelligence if you don't understand it", then that's true, but it's a trivial tautology.
- If you mean "you cannot DEVELOP AGI/consciousness/intelligence if you don't understand it" then that's very debatable.
Historically we have been able to develop all kinds of things despite not knowing how they work. Tinkering and trial and error are often enough.
After all that's how evolution solved the problem of creating consciousness/intelligence. There wasn't some entity that "understood" intelligence that created it.
Well that's not true. We tamed fire before understanding combustion, friction, heat, or anything else.
In fact it is damage to the prefrontal cortex (which has nothing to do with speech) which is mostly correlated with a detriment in intelligent behavior (suspiciously, also social behavior; food for thought on what we consider “intelligence”). Victims of lobotomy had their prefrontal cortex destroyed, and their injuries resulted in them losing their personalities and losing basic function as human beings, even though they were still able (but perhaps not always willing) to speak and comprehend speech.
What you have that LLMs lack is a visual part of your brain - one which can instantly count quantities of objects up to about 7. That gives you tools that can be trained to do basic arithmetic operations. Although you have to be taught how to use that natural capability in your brain to solve arithmetic problems.
And of course for more complex things than simple arithmetic, you fall back on verbalized reasoning and association of facts (like multiplication tables) - which an LLM is capable of doing too.
Poor GPT, though, has only a one-dimensional perceptual space: tokens and their embeddings from the start to the end of its attention window. Although who’s to say it doesn’t have some sense of ‘quantity’ for repeated patterns in that space too?
That has been my best analogy so far.
They'll never say "I don't know"; they'll bullshit you into oblivion while never backtracking.
When you discuss other things, it goes off the rails a lot more. For instance, I have had it quote some passages of classic English poetry to me, stating blankly that those passages contain certain words. The passages did not contain any traces of those words or even remotely similar words. In that situation, GPT4 was being dumber than /bin/grep, which can confirm that some piece of text doesn't contain a string.
GPT4 is deliberately trained as a coding technician, so that it will convince programmers who will become its advocates.
1. Programmers are vain and believe that everything they do requires great intelligence. (Not just some of it.)
2. GPT4 can "talk shop" with programmers, saying apparently intelligent things, performing complex refactorings, intuiting an understanding of a piece of code with minimal context based on meaningful identifier names, and so on.
3. Thus, some programmers will falsely conclude that it's highly intelligent, like them.
To be sure, what the technology does is unmistakably a form of intelligence. It solves complex problems of symbol manipulation and logic, in flimsy contexts. To deny that it is artificial intelligence is silly.
AI ≠ AGI
That's not where the goalposts are for the definition of AI.
Computers playing chess is AI. Google Assistant is AI. An expert system from 1990 is AI.
No, it solves the assumed-to-be-complex problem of constructing probable and believable text given a prompt.
Are we all vain? I guess we are, but I find this take to be a bit too simplistic.
I do agree programmers are arrogant and think everything they do requires vast amounts of intelligence.
I’m not sure what your point is. I am honestly curious. You think having coders as major advocates for your product is somehow a brilliant strategy? If anything, this will work against it.
For instance, as an exercise, I had it write a maze-generating Lisp program that produces ASCII. I wanted it in a relatively little known Lisp dialect, so I described to it some of the features and how they differ from the code it generated.
For instance, GPT4 hypothesized that the dialect has a let which can destructure multiple values: (let (((x1 y1) (function ...)) ((x2 y2) (function ...))) ...).
In plain English, I explained to GPT4 that the target Lisp dialect doesn't have multiple values, but that when we have a pair of values we can return a cons cell (cons a b) instead of (values a b). I also explained to it that we can destructure conses using tree-bind, e.g. (tree-bind (a . b) (function ...) ...), and that this takes only one pattern, so it has to be used several times.
GPT4 correctly updated the function to return a cons cell and replaced the flat let with nested tree-bind.
At the end of the chat, when the code was almost working, I made a bugfix to the function which allocates the maze grid, to get it working.
I told GPT4: this version of the grid allocation function makes it work. Without it, the output is almost blank except for the vertical walls flanking the maze left and right. Can you explain why?
GPT4 correctly explained why my function works: the function it wrote reused a single row vector across all the grid rows, giving rise to unwanted sharing. It explained it like a computer scientist or seasoned developer would.
It's like a somewhat dim-witted, but otherwise capable coding clerk/technician, which talks smart.
With GPT4, you're Sherlock, and it is Watson so to speak. (Sorry, IBM Watson.) It can speak the language of crime investigation and make some clever inferences. In the end, you do all the real work of designing and debugging, and judging complex requirements against each other in the broader context. It saves you the keystrokes of doing the tedious coding you're used to doing yourself.
On the other hand, you expend some keystrokes explaining yourself. Some of the chat could be saved and used for documentation, because it captures requirements and why some things are done in a certain way and not otherwise (rejected decisions).
I'm a programmer and I've used GPT4 for a variety of tasks (well, tried to). The results have been mediocre on average, usually syntactically correct, but more often than not, semantically incorrect. It usually ends in frustration as GPT-4 keeps responding confidently incorrect answers and upon the slightest expression of doubt, it tends to spin in circles.
I'd like to know what people who are so impressed with GPT-4's programming capabilities are doing. It must be TODO apps, solving leetcode problems, writing basic Express.js routers, some basic React components, and code for <one of the top 10 popular libraries>: the kind of things ChatGPT has seen a million examples of. You have to already know how to code the problem, so that you can spot what is wrong, and, if necessary, tell the thing how to fix it.
The fix is not always going to come from the training data; it needs to scrape it from what you give away in your follow-up question, together with its training data.
I went through an exercise in which I encoded a paragraph from Edgar Allan Poe with a Vigenère cipher. I presented this to GPT4 and asked it to crack it. First I had to persuade it that it's ethical; it's a cipher I made myself and it's not my secret. We worked out a protocol by which it can ask me questions which show that I know the plaintext.
It was quite a long chat, during which I had to give away many hints.
In the end I basically gave the answer away, and the thing acted like it cracked the key. Then I reproached it, and it admitted that yes, it didn't crack the key but used my hint to locate the E. A. Poe passage.
Basically, if you know the right answer to something, and GPT4 isn't getting it, it will not get it until you drop enough hints. As you drop the hints, it will produce rhetoric suggesting that it's taking credit for the solution.
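(For anyone unfamiliar with it: a Vigenère cipher just applies a per-letter Caesar shift chosen by a repeating key. A minimal sketch, with a function name of my own invention:)

    import string

    ALPHA = string.ascii_uppercase

    def vigenere(text, key, decrypt=False):
        out, k = [], 0
        for ch in text.upper():
            if ch in ALPHA:
                shift = ALPHA.index(key[k % len(key)].upper())
                shift = -shift if decrypt else shift
                out.append(ALPHA[(ALPHA.index(ch) + shift) % 26])
                k += 1  # the key advances only on letters
            else:
                out.append(ch)  # spaces and punctuation pass through
        return "".join(out)

    # vigenere("THE RAVEN", "POE") -> "IVI GOZTB"

Simple to implement, and crackable by well-known classical methods (Kasiski examination, frequency analysis), which is what makes it a fair test.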
I've used it for side projects though, especially front end stuff that I absolutely hate (js) and it works fine for that, but that's because I'm absolutely garbage at it in the first place and probably ask it to solve the most answered things ever on stackoverflow &co
Although... on the other hand there are plenty of programmers that don't even use IDEs still.
We're well past that goalpost, like decades ago, if you haven't realized.
The simplest form of AI is just a series of if-else statements (an expert system). A highly sophisticated network of conditional statements can make fully informed decisions way faster and more accurately than any human mind.
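For instance, a toy rule base (the rules and thresholds here are made up for illustration) is already an "expert system" in this sense:

    # A tiny expert system: nothing but conditionals, yet it encodes
    # (pretend) expert knowledge and decides instantly and consistently.
    def triage(temp_c: float, heart_rate: int) -> str:
        if temp_c > 39.5:
            return "urgent: high fever"
        elif temp_c > 38.0 and heart_rate > 100:
            return "see a doctor today"
        elif heart_rate > 120:
            return "see a doctor today"
        else:
            return "monitor at home"

Real expert systems of the 80s and 90s were essentially this, scaled up to thousands of rules plus an inference engine.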
Computers doing anything are following the programming of their programmers. Without feeling and free will, there is no AI.
That's quite a leap to that conclusion, friend.
The hubris in Big Tech right now is truly pathetic and imho a sign of a real disconnect with a sense of meaning and purpose to human life.
In fact, recently I was listening to a podcast with Donald Hoffman, where he mentioned brain studies done on people with split brains, etc., and how these people did not even have a single consciousness. So even the assumption that we have one "AGI" program running in our brain is not at all what goes on in our brain.
Likewise, Iain McGilchrist pointed out how the left and right brain hemispheres are two kinds of attention to the world, and almost two different types of consciousness in us.
I quit listening to Rupert Spira many years ago, but I found this interaction between him and Donald Hoffman really insightful. Donald mentions the brain studies in this podcast:
The Convergence of Science and Spirituality | Donald Hoffman & Rupert Spira
https://www.youtube.com/watch?v=rafVevceWgs
Many people in the developed world, especially tech people, are now living online. They spend the majority of their lives looking at a screen, chatting to each other on Slack and via forums like this, and now comparing their intellectual abilities to those of supercomputers running a language model of extreme complexity, which is basically the sum total of all human technical and intellectual output. All popular LLMs exist because of this online universe we’ve created.
What the online life does is actually reduce one to an “autonomous agent”. You really become ChatGPT when you live this way. Go out for a night of dancing, practice martial arts, paint a real painting with oil on canvas, cook a nice meal and smell and taste it, or hike a mountain; then come back and try to tell yourself you’re an LLM. It’s completely preposterous, but it does make sense if you live an almost purely intellectual and online existence, which is the world we’ve created in 2023. Science and religion are now merging, it is said science will bring us “immortality” soon through AI advancement, and many would immediately upload their brain to a computer if given the chance.
It completely makes sense to forget what you actually are and to reduce yourself to “software running on hardware”, or a statistical model: a person who clicks like and dislike on social media most of the day becomes a model trainer, etc. Even if there is some truth to those statements, it’s stupid to discount the rest of existence because you can’t instantly translate text into many languages or whatever “emergent property” is displayed in some complex system.
The intellect is something which should be consulted, or used for survival or problem solving, to help one get back to enjoying life, and that is what’s most important. Intellect is not something one should strive to actually become.
That's irrelevant. Transformers are Turing complete. Whatever mathematical system underlies human cognition will thus have some expression as a transformer.
These transformers are learning how to reproduce functions by inferring associations between inputs and outputs. The function being reproduced here is the one in the human brain that produces human text. Therefore these transformers are literally learning how the human brain generates text, i.e., this is what the brain does.
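(That claim can be made concrete with a toy example: fit a model to input/output samples of a "hidden" function whose definition it never sees. A polynomial is obviously just a stand-in for a transformer here:)

    # Reproduce a hidden function purely from (input, output) pairs.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = np.sin(3 * x)                   # the "hidden" function

    X = np.vander(x, 8)                 # degree-7 polynomial features
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    print(np.polyval(w, 0.5), np.sin(1.5))  # learned vs. true value

The fit only ever sees associations between x and y, yet it ends up behaving like the function that generated them; the argument above is that training on human text is the same move at vastly larger scale.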
Furthermore, there have been neuroscience papers correlating spiking behaviour in neural networks and human neurons when processing the same information, lending even more credence to this hypothesis. It's not as far-fetched as you think.
That’s irrelevant too, because nothing indicates that the actual transformer of, say, GPT, or its training, is in any way connected to such an expression of the human brain.
Linux is written in C, and C is Turing complete. And? Is Linux therefore like the brain?
The more I look into it, the more I see a cult/religion rather than a science.
But fundamentally speaking, evidence suggests that the nature of reality follows simple rules: that everything obeys simple algorithmic rules. That in turn suggests humans are just as robotic as anything else, including the earliest of computers and current "AI". We don't look at it that way, because we don't fully understand the underlying complexity which gives rise to our behavior (whereas for computers we do), so we conveniently call ourselves intelligent as a way to appease the thirst for a satisfactory explanation of our behavior, and are dissatisfied with calling current AI true AI.
Thus, I argue, depending on how one wants to define intelligence, either AI does not innately exist and can never exist (if everything truly obeys simple rules), or AGI existed long before humans came to exist, since it's merely a concept which we are free to define.
> Since the beginning of the 20th century, intelligence has been conceptualized as a qualitatively unique faculty (or faculties) with a relatively fixed quantity that individuals possess and that can be tested by conventional intelligence tests. Despite the logical errors of reification and circular reasoning involved in this essentialistic conceptualization, this view of intelligence has persisted until the present, with psychologists still debating how many and what types of intelligence there are. This paper argues that a concept of intelligence as anything more than a label for various behaviors in their contexts is a myth and that a truly scientific understanding of the behaviors said to reflect intelligence can come only from a functional analysis of those behaviors in the contexts in which they are observed. A functional approach can lead to more productive methods for measuring and teaching intelligent behavior.
https://www.researchgate.net/publication/266418013_The_myth_...
Consider a detective story with all the clues laid out and then at the end the detective says: “I know who it is. It is: …” Correctly predicting the next “tokens” entails that you incorporate all the previous details. Same goes for math questions, emotive statements, etc.
I’d be careful calling it simple. They might be simulating humans, including, for example, a theory of mind, just as a side effect.
Assuming there are simple rules, we don’t know how, for example, an electron has the wherewithal to follow them (when does it read the rules or check on them, where are they stored, etc.). It’s mystery all the way down (unless you define it as simple using hand-wavey abstractions ;))
But so is everything else that is not a fundamental particle/wave/string.
So, while true, it’s not that useful in and of itself.
I’m not a religious person and I don’t believe in a soul or anything magical like that, but it’s just impossible for me to accept that I’m just a bunch of atoms following rules. I know that there’s something there, I see the evidence right before me, even if I can’t explain it or prove it to you.
Most of this is ordinary consumer protection. Regulating AI means regulating corporations. Placing hard limits on what AIs are allowed to do means consumer protection with teeth. Nobody in power wants to talk about that too much.
As for AIs taking over, that will come via the corporate route. Once AI systems can outperform CEOs, investors and boards will insist they be in charge.
> Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything. [1]
While in a corporation there's still a person somewhere who could be held accountable, AI diffuses this even more.
[1]: https://matthewbutterick.com/chron/will-ai-obliterate-the-ru...
Here's Frontier Airlines announcing proudly to investors that they will do exactly that.[1] See page 44.
Today: Call center. Avenue for customer negotiation.
Tomorrow: Chatbot efficiently answers questions, reduces contacts and removes negotiation.
[1] https://ir.flyfrontier.com/static-files/c7e0a34d-3659-49cc-8...
This is one of the main reasons I quit the Facial Recognition industry; it is being used to delegate decisions, to remove responsibility for those decisions from those who need to be held accountable.
I worked as principal engineer of one of the top 5 enterprise FR systems globally, and the number of end-users fraudulently abusing the software blew my mind. Case in point: police called for a street crime, police ask the victims what celebrity their culprit looks like, police then put images of that celebrity into their FR software to collect suspects, followed by ordinary innocents who happen to look like celebrities being called into lineups and harassed by the police. And this practice is widespread!!!
That is just one example of how incredibly stupidly the people using our software will use it, potentially harming large numbers of innocents.
Governments are supposed to safeguard the interests of the many. Over decades they used collective resources to advance the research that made IT/AI possible. They granted corporates (private interests) the license to operate, a financial system to oil them, a security system to protect them.
If current IT has the shape it does (oligopolistic, unaccountable, abusive, potentially a runaway disaster) it is entirely due to complicit, captured governments and malfunctioning political systems.
This is an issue with human nature itself. We have to change our innate nature, and we all know... that is not going to happen.
With your framing of the issue, I wonder what would be the solution. Smaller government? That just gives more power to corporations.
I doubt it. You know who sits on those boards? The executives of other companies.
I just started reading “Life 3.0” and it starts out with an extreme version of this scenario.
A random number generator can outperform more than half of all CEOs.
That's not why we have CEOs. Gamblers are a superstitious bunch and they love their lucky monkeys. So they put into law that they are entitled to one lucky monkey per corporation.
Actually, I do know why you are getting downvoted. You speak the truth.
p.s.
https://en.wikipedia.org/wiki/Corporate_personhood
Almost all such legal framings have been motivated by the legal difficulties of applying laws to organized groups of individuals. For example, entering into contracts is legally between two parties. Presumably, back when these precedents were being set, the sheer amount of legal reworking necessary to create and frame laws addressing situations involving groups of individuals as a single organized entity outweighed the objections. But today, especially with AI, the "it's too much work" excuse does not fly.
We need laws that are specifically framed around the requirements and practicalities of such entities, not ones that slowly parcel out the legal rights of people to entities. Now that machines may soon be able to be employed as willful agents, we are willy-nilly granting legal personhood to machines.
https://thelawdictionary.org/juridical-person/
Who better than an AI to 'ride' an AI? AI will be the game master, in role-playing games terms.
There is a rebuttal to this view presented here by Robert Miles:
https://youtu.be/L5pUA3LsEaw
His big point in the first video is that corporations can achieve only a modest level of superintelligence. A corporation can have more breadth of knowledge than any individual, and for some tasks, that's good enough. But it's mostly using more people for broad coverage.
All this predates GPT. What we're seeing with GPT is good breadth combined with mediocre intelligence. That's very corporate. It handles much of what people do in offices. We may not get super-intelligence that way. Just super bureaucracy. Which is a problem.
This bit here is what makes a huge difference. It's very important to note that AI is all of this at scale. Corporations can now do all of this en masse. It will be cheaper too, so it lowers the barriers to entry, and actors who couldn't afford it now can.
If it comes to AIs replacing CEOs and boards, those in power will change the rules so they can't be replaced. Who do you think influenced the laws for corporate governance? The public?
Consumer protection is a crucial aspect of addressing these concerns. Regulating AI should involve regulating corporations' use of AI to prevent harmful practices, promote transparency, and ensure ethical applications. This would require a balance between fostering innovation and imposing necessary restrictions to safeguard consumers and society as a whole.
The potential for AI systems to outperform CEOs and other decision-makers raises questions about the future of corporate management. While there is a possibility that AI could be employed to optimize decision-making and increase efficiency, it is important to recognize the limits of AI in understanding human values, ethics, and emotions. Striking the right balance between utilizing AI capabilities and retaining human oversight will be essential to navigate the future of corporate governance.
Moreover, the involvement of policymakers and regulators is crucial in addressing these challenges. As AI continues to advance and integrate into various aspects of society, it becomes more important to have comprehensive regulations that ensure the responsible development and use of AI technologies.
And how would we make a tax that covers all the productivity gains indiscriminately, without pointless compartmentalization?
Well, it's a tax on corporate profits of course. Basic general taxes that we already have.
If, however, you accelerate competition, eventually you might get just one corporation (perhaps appearing to be several, directed by the same "best-there-is" AI-CEO) that achieves world domination and can never be removed from power. An immortal dictator that never rests and can monitor each action of and interact personally with each person under their control at the same time.
When discussing what corporations might do, it's always worth remembering the United Fruit Company and the origins of the phrase "Banana Republic"[1]. United Fruit Company was still doing this within living memory.
Alternatively you could look at the current crop of sportswear companies. When given the choice between "made by forced labour" and $5 more the market goes with cheap every time.
In the absence of legal restraints, corporations will most definitely optimise away democracy and human rights in favour of profits.
1. https://en.wikipedia.org/wiki/Banana_republic
The same responsibility barrier exists for driverless car adoption.
In short, most people feel better (if they even think about it) if a person is in charge and taking responsibility for bad decisions; someone who will either die with them or go to prison if things go bad.
Probably the same goes for pilotless airplanes: the first bad accident could ruin it.
When you want to clone a corporation it's a whole enterprise. Cloning an AI takes one click.
Yes, corporations were an early form of AI, and horses were an early form of transport. That doesn't mean that horses and rockets are the same, or that no new safety precautions apply.
And nothing in the rules says that a dog can't play basketball.
What that would actually result in is a complete and utter lack of any accountable entities, which the government would block and stockholders wouldn't want to begin with.
Those jobs are not based on performance or anything remotely meritocratic.
People with lots of money have always been able to buy other people's time to get ahead.
For most purposes maybe it's enough to project charisma in a video.
An AI CEO might also be capable of superhuman "interpersonal" feats like "personally" answering every letter, phone call or complaint.
This terminology is becoming widely used by prominent AI researchers at e.g. DeepMind (Botvinick et al), Stanford (Finn et al), MIT, Northeastern, Meta, etc., as we have to switch to a new goal in light of the advancements of the past few years. Importantly, this shift has been happening behind the scenes, independent of the 'OpenAI' craze, although that craze has obviously made a select portion of the advancements accessible to the public. There is much more going on than just the GPT series, but few people are engaging with it; much is hidden in the literature.
To your point: it's of course extremely strange to conceive of, but while the quirks of human forms may be a useful tool at the moment, there isn't anything necessarily fundamental that requires them in the long term.
So, if an AI CEO can execute better than human CEOs, it will dominate.
Hell, in time AI will replace customers!
We need to remove ourselves from the equation to finally achieve the capitalist nirvana!
Consumer protection should be promoted through technological means; e.g. the EU could have funded development of fingerprintless browser technologies, zero-knowledge-proof-based identity verification, etc., instead of the GDPR.
If you do, you will find that large parts of the web become unusable, with CAPTCHAs and challenges everywhere, some of them too difficult for even a human to solve.
That's why we need laws.
If AIs are doing most of the work in megacorps (especially if robotics take off), then basically all profit is going to the owners and tiny groups of executive AI "managers."
Even the most staunch (but sane) capitalist libertarian will admit mega income inequality is baaad juju.
True as this may be, one must also consider that “the market can remain irrational longer than you can remain solvent”
*the concept of capitalism
https://en.m.wikipedia.org/wiki/Jaron_Lanier
He’s certainly a very smart and interesting guy but my alarm bells go off when his supposed achievements are used as justification for the piece being valid rather than… well, any of the arguments he made.
VR is a trivial idea that pretty much everyone has thought about at some point. Its implementation is hard to do, and if I had to point to anyone there, it'd be Carmack, who's probably done the most.
But I think that a better way of thinking about these systems is an imperfect world model analogy. Not a made-of-people analogy.
These models are not made of people. People helped gather data about the world around us, and that data allowed us to build a world model, which includes models of people, along with everything else.
And it’s a world model, not a human-civilization model. The data goes through the funnel of The Pile and the like, but ultimately the predictive model is of the universe, not of humans.
- He popularized the term 'virtual reality' in the late 80s,
- Did a bunch of stuff with music
- Spent a whole lot of time warning against social media
- Spent a whole lot of time warning against what I consider to be the best parts of the internet like Wikipedia
He is uniquely qualified to be wrong about this in a specific way.
His idea of attaching provenance to the sources of information used in models is a good one, and one that already has rumblings of legal weight behind it (see the various articles about copyright claims in response to GPT/Copilot).
I worry a bit that his argument is too centred on digital information creation. Though I suppose that's the novelty of the most recent pieces of technology called AI: they affect information workers, people who already use a computer interface for some large percentage of their work. Still, the topic of /physical work/ (fieldwork, factory work, service work) seems one or more steps removed from LLMs. The management of that work may be affected, but the work itself has already (it seems to me) gone through its first computer-revolution shock.
Edit: I'll add that the whole article has a 90s-Wired feel to it, which is refreshing to see. There's been something of a slowdown in tech-revolutions for the past decade, and it's not original to say that we may be at the start of a new one.
Is that like Shingy, the AOL digital prophet?
https://www.newyorker.com/magazine/2014/11/17/crystal-ball-3
What does that even mean?
Anyone got anything earlier? Wouldn't surprise me.
https://iep.utm.edu/evil-new/