To be fair, we shouldn't bundle Augustine and Thomas Aquinas with John MacArthur and Joel Osteen. Some religious thought is more philosophically robust than other religious thought.
Do you assume that someone will stumble into creating a person, but with unlimited memory and computational power?
Otherwise, if we are able to create this person using our knowledge, we will most certainly be able to augment humans with those capabilities.
This just shows you lack imagination.
I have a lot of use cases that they are not good enough for.
It's a simple formula. Layoffs because of market conditions or company health = stock price go down. Layoffs because "AI took the jobs" = stock price go up.
As a developer, I can just make it myself. Now with LLMs, if it's very simple and bounded, I can just vibe most of it with very little to lose.
As a lay person, I don't see what the TAM for this is. Who will spend the time to learn how to drag and drop an application?
Assume we have excellent test coverage -- the AI can write the code and get feedback on whether it's secure / fast / etc.
And the AI can help us write the damn tests!
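To be concrete about what that feedback loop looks like, here is a minimal sketch, assuming a hypothetical transfer() function in an accounts module (all names made up for illustration); tests like these are what any AI-written implementation has to keep passing.

    import pytest
    from accounts import transfer, InsufficientFunds  # hypothetical module under test

    def test_transfer_moves_money():
        src = {"balance": 100}
        dst = {"balance": 0}
        transfer(src, dst, amount=40)
        assert src["balance"] == 60
        assert dst["balance"] == 40

    def test_transfer_rejects_overdraft():
        src = {"balance": 10}
        dst = {"balance": 0}
        with pytest.raises(InsufficientFunds):
            transfer(src, dst, amount=40)
        # failed transfers must leave balances untouched
        assert src["balance"] == 10
        assert dst["balance"] == 0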
On a more serious note: how could anyone possibly ever write meaningful tests without a deep understanding of the code that is being written?
People don’t like to do code reviews because it sucks. It’s tedious and boring.
I genuinely hope that we’re not giving up the fun parts of software, writing code, and in exchange getting a mountain of code to read and review instead.
My fear is that we will end up just reviewing code, writing tests, and writing some kind of specifications in natural language (which is very imprecise).
However, I can't see how this approach would ever scale to a larger project.
How many times have you had a mutation operation where you had to hand code the insert of 3 or 4 entities, make sure they all came back successfully, or back out properly if not (perhaps without a transaction, perhaps across multiple databases)?
Make sure the required fields are present.
Grab the newly inserted ID.
Rinse, repeat.
Or if you're mutating a list: writing code that inserts a new element, but you don't know which one is new. So you end up, again, hand coding loops and checking whatever you remember to check.
What about when you need to do an auth check?
And the hand coder may fail to remember one little thing somewhere.
With LLM code, you can just describe that function and it will remember to do all the things.
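For the skeptical, here's a rough sketch of the hand-coded version I mean, using sqlite3 from the standard library; the table names, fields, and permission flag are all made up for illustration. Every line is something a human has to remember, and the LLM-generated equivalent just has to include the same steps:

    import sqlite3

    REQUIRED_FIELDS = ("customer_id", "address", "items")

    def create_order(conn: sqlite3.Connection, user: dict, payload: dict) -> int:
        # the auth check you must not forget
        if not user.get("can_create_orders"):
            raise PermissionError("not allowed to create orders")

        # required-field checks, by hand
        for field in REQUIRED_FIELDS:
            if field not in payload:
                raise ValueError(f"missing field: {field}")

        cur = conn.cursor()
        try:
            cur.execute(
                "INSERT INTO orders (customer_id, address) VALUES (?, ?)",
                (payload["customer_id"], payload["address"]),
            )
            order_id = cur.lastrowid  # grab the newly inserted ID

            # dependent inserts, one by one
            for item in payload["items"]:
                cur.execute(
                    "INSERT INTO order_items (order_id, sku, qty) VALUES (?, ?, ?)",
                    (order_id, item["sku"], item["qty"]),
                )

            conn.commit()
            return order_id
        except Exception:
            conn.rollback()  # back out properly on any failure
            raise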
Give an LLM a model + metadata and we won't really need to think of it as editing User.java or User.py anymore. Instead, User.yaml - the LLM will just consume that, build out ALL of your required biz-logic, and be done with it. It could create a fully authenticating/authorizing REST API + GraphQL API with sane defaults and consistent notions throughout.
And moving into UIs, we can have the same thing. The UI can be described in an organized way: what fields are required for user registration, what fields are optional according to the backend. It's hard to visualize this future, but I think it's a no-code future - models of requirements instead of code.
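It's hard to show a User.yaml without inventing a whole toolchain, but as a rough sketch of the idea, here is the same kind of declarative model written as a plain Python dict; every name here, including the tiny required_fields() consumer, is hypothetical. The point is that business rules become data, and a generator (LLM or otherwise) turns that data into APIs, validation, and forms.

    # hypothetical declarative model - the kind of thing a User.yaml would hold
    USER_MODEL = {
        "entity": "User",
        "fields": {
            "email":    {"type": "string", "required": True,  "unique": True},
            "name":     {"type": "string", "required": True},
            "nickname": {"type": "string", "required": False},
        },
        "auth": {
            "create": ["anonymous"],        # open registration
            "read":   ["self", "admin"],
            "update": ["self", "admin"],
            "delete": ["admin"],
        },
        "expose": ["rest", "graphql"],      # generate both APIs with sane defaults
    }

    def required_fields(model: dict) -> list[str]:
        # the question a UI generator would ask: what must the registration form collect?
        return [name for name, spec in model["fields"].items() if spec["required"]]

    print(required_fields(USER_MODEL))      # ['email', 'name']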
Sending that idea to an LLM (in the absence of AGI) seems like a great way to find out about the flaws too late.
Otherwise, specifying an application in such detail as to obtain the same effect is essentially coding, just in natural language, which is less precise.
I asked it to help me turn a 6-page wall of acronyms into a CV tailored to a specific job I'd seen, and Gemini's response was that I was overqualified, the job was underpaid, and that, really, I was letting myself down. It was surprisingly brutal about it.
I found a different job that, although I really wanted it, I felt underqualified for. I only threw it at Gemini in a moment of 3am spite, thinking it'd give me another reality check, this time in the opposite direction. Instead it hyped me up, helped me write my CV to highlight how their wants overlapped with my experience, and I'm now employed in what's turning out to be the most interesting job of my career, with exciting tech and lovely people.
I found the whole experience extremely odd, and I never expected it to actually argue with me or reality-check me. Very glad it did, though.