Yep, I worked on a factory assembly line 'wrangling the robots' and I'm not an engineer. The robots require watching, and they can't do parts of the job. They get hung up easily and require a lot of resetting. There's good reason factories still hire plenty of manufacturing laborers.
I wouldn't say so. The 'learning by mistakes' step in the demo shows that the AI can work through these issues, presumably more efficiently than a human could.
The learning-from-mistakes part of the example is: you tried /port_wines, but the path is /wines/port instead.
But what if the path is actually /beer and you have to pass a hidden, undocumented query parameter called ?wine=1 for it to give you wines? But then the response still consists of Beer objects (because that's the only thing the API validator would allow), so you have to map all the Beer fields to their equivalent Wine counterparts. But not all of them make sense, so you have to ignore them. Which ones? Ask the engineering team. Turns out the guy who wrote all this left years ago, and no one remembers how it works. Someone digs up a link to a documentation page, but that internal wiki was taken down, so it returns a 404. You ping a sysadmin to see if they kept any backups. He points you to a few PB worth of SQL dumps from an internal migration a few years ago and asks you to take a look in those. You simultaneously have to write up a status update for senior leadership, which is due by end of day, and give them a revised launch date for the project. They want to know why it can't be done in half the time.
The day an AI can figure all this out, I will be looking for another career. Until then I'm fine.
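For what it's worth, here's roughly what that adapter ends up looking like. Everything in it is hypothetical: the /beer path, the ?wine=1 parameter, and the field mapping are all invented for illustration (the real mapping, as noted, left the company years ago).

```python
import urllib.parse

# Hypothetical adapter for the scenario above. The endpoint, the hidden
# query parameter, and the Beer->Wine field mapping are all made up.

# Which Beer fields map to which Wine fields (your best guess, since the
# engineer who knew is long gone).
BEER_TO_WINE = {
    "name": "name",
    "brewery": "winery",
    "abv": "alcohol_content",
}

def wine_url(base):
    # The "wines" endpoint is really /beer plus an undocumented ?wine=1 flag.
    return base.rstrip("/") + "/beer?" + urllib.parse.urlencode({"wine": 1})

def beer_to_wine(beer):
    # Remap the fields that make sense for wine; silently drop the rest
    # (ibu, hop varieties, and whatever else the validator insists on).
    return {BEER_TO_WINE[k]: v for k, v in beer.items() if k in BEER_TO_WINE}

wine = beer_to_wine({"name": "Port", "brewery": "Douro Co.", "abv": 19.5, "ibu": 0})
```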
The automation of telephone systems didn't make front desk workers "PABX assistants" or even "PABX Operators", the automation became a tool and faded into the background.
The parent comment emphasizes that no matter how big the effort, our tools will always be imperfect. Therefore we will always have something to do.
Something funny about the way I interact with ChatGPT: I basically never give a shit about correcting mistakes or bothering with good grammar because, no matter what, it pretty much always understands me. And it's insanely good at solving tip-of-my-tongue type problems.
I guess maybe we should go back to SOAP to eke out a few more months of job security?
For now. But how about 5 years from now? or 10? or 15?
Given what things looked like 10 years ago, and what was revolutionary back then (the ImageNet challenge), it's hard to comprehend what the state of the art will look like 10 years from now.
Agreed. The network architectures have evolved so much in the past few years alone. Imagine what they can do in just 5 years if they can embed domain-specific knowledge to the current “dumb” statistical word guessers. Ten years is truly unimaginable.
I’m not expecting the singularity within that timeframe nor do I think we need it for lots of disruption in knowledge-based work. Still, I’m leaning towards a near future where these AI tools augment our capabilities more than the alternative where we lose all our jobs.
I've recently thought about the Iron Man movies, where Tony Stark designs things simply by talking to Jarvis. I've always thought, "if only I had those kinds of tools available, I too would be able to design crazy stuff".
We didn't think "damn, Tony Stark didn't have to write a line of code, he got rendered obsolete by this damn AI". That's how I choose to think about AI tools for developers from now on.
As someone who has been in the game a long time, let me tell you that as our tools have gotten better, user expectations have gotten way higher. I would argue that it's actually harder than ever to deliver a true first-class product for mobile or web.
That's what the hivemind says: AI will automate all our jobs. Well, yes, it will automate a lot, but we won't have the same expectations in a few years. The bar will rise so high that we'll still have jobs, with AI on top.
There's also a small little thing called competition. When your competitors use AI, your job just got harder.
Let's hope that in reality it isn't available only to the billionaire owners of software companies. In those movies, everyone else wasn't designing cool suits by talking to voice assistants.
The marginal cost of technology is pretty close to zero (albeit slightly higher for large-LLM inference), so it would make sense for the technology to be distributed widely. Kind of like how everyone has essentially the best smartphone, and it's laughable when someone tries to create a luxury smartphone for $10k and it's just a sub-par Android with leather accents [0].
If you have a technology with practically zero marginal cost, pricing it very low and distributing it widely would maximize your profit. Not to mention that once it's out of the bag, others will know that it's possible and copy it.
It won't be. Some of the biggest companies in the tech world got that way by building platforms and ecosystems that others build on top of. AWS and Azure are the most obvious examples.
What’s more likely is that someone like a Microsoft will make it part of their cloud offerings so devs like us can build things on top.
Why try to capture every market when you can build tools and take percentages of the rest of the economy?
Tony Stark is an exception, though. He was once held captive and escaped after he fooled his captors into letting him create a miniature arc reactor, which he embedded in his own chest in order to fly to freedom and safety, as well as to magnetically keep shrapnel from reaching his heart and killing him. He isn't the normal billionaire owner of a software company (outside of comic books).
Can you think of any category of software that doesn't eventually have an open source alternative? There are many open source projects orders of magnitude more complicated than an LLM: the Linux kernel, Blender, Firefox, etc.
These LLMs aren't inherently complicated to implement, just expensive to train. And if the LLaMA release/leak is anything to go by we are extremely close to ChatGPT running on consumer hardware.
Software development isn’t just “writing code”. It’s knowing what code to write. It’s knowing how to accomplish a goal using code. It’s knowing how to structure all of your code to work together. It’s knowing how to adapt code to meet new requirements. I don’t foresee rich “idea” guys figuring out how to do that any time soon.
Tony Stark inherited tens of billions of dollars. YOU DIDN'T.
And so, if we get Jarvis-level AI, you will get rendered obsolete.
When these tools come, inheriting billionaires will hire 10 people with Tony Stark intelligence as opposed to 10,000 devs with your level of intelligence.
So, don't be in denial that you will be made obsolete.
> We didn’t think « damn, tony stark didn’t have to write a line of code, he got rendered obsolete by this damn AI »
Because it's a movie.
In reality an AI like that would be more important than all the other nonsense. But it wouldn't be fun to watch an AI easily unravel the mysteries of magic, time travel, multiverse, etc.
> damn, tony stark didn’t have to write a line of code, he got rendered obsolete by this damn AI
I was watching one of the movies and Stark was flying around some baddie castle/base. He tells Jarvis to locate all the missile placements and, when that's done, he tells Jarvis to target them.
That was when I thought, "oh, he's just a middle manager now" and lost interest.
I'm less concerned that AI tools will replace developers in the short term, and more concerned that they will encourage incompetent management to try their hand at "contributing" to projects using AI tools, creating more headaches for the developers. Kind of like how BlackBerrys made managers feel super productive firing off emails while adding significantly to the workload of those under them.
In the Toolformer paper, it seems what they are doing is:
* take a performant language model (copied from another team, e.g. GPT-J)
* show the machinery that it can learn new tokens using one or more tools from a provided set, e.g. a WikiSearch tool
* demonstrate that the sequence of characters in the full API call has some effect, e.g. no reply, useless content, or content that helps predict another token; save that complete set of characters in the API call as an entry
* run a learning session with tool calls to APIs, improving the model's ability to resolve known or new tokens (queries with answers)
* show the model that it can try new combinations itself (!)
* let the machinery try API calls itself to resolve tokens
* minimize loss functions for API results
Comments: this is strikingly different from some RDF hard-wired data store; it uses huge numbers of failed attempts to find results that work. The results that work are complete API calls, as strings. It does not seem to care about "learning" the contents of the API calls, just that it is methodical about remembering API calls that work and retrying those.
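A minimal sketch of the filtering idea in the steps above: keep an API call, as a string, only if splicing its result into the prefix lowers the model's loss on the continuation. Everything here is a toy stand-in; `toy_loss` is a fake language-model loss and the "API" is a hardcoded lookup, not the paper's actual machinery.

```python
def call_api(tool, query):
    # Stand-in for a real tool, e.g. a WikiSearch endpoint.
    fake_wiki = {"capital of France": "Paris"}
    return fake_wiki.get(query, "")

def toy_loss(prefix, continuation):
    # Fake LM loss: lower when the continuation's words already occur in
    # the prefix. A real implementation would use token log-likelihoods.
    words = continuation.split()
    misses = sum(1 for w in words if w not in prefix)
    return misses / max(len(words), 1)

def keep_api_call(prefix, continuation, tool, query, threshold=0.1):
    # Compare loss without vs. with the tool result spliced into the prefix;
    # keep the complete call string only if it helps by at least `threshold`.
    base = toy_loss(prefix, continuation)
    result = call_api(tool, query)
    augmented = f"{prefix} [{tool}({query}) -> {result}]"
    return base - toy_loss(augmented, continuation) >= threshold

kept = keep_api_call("The capital of France is", "Paris",
                     "WikiSearch", "capital of France")
```

Calls that pass this kind of filter get written back into the training text, which is what later lets the model emit them on its own.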
Yeah, there are some pieces missing. It's still not really handling mutations and state, which would be needed, but it's a huge step in the right direction.
Is there a good way/site/method to keep up to date with the latest AI projects? It seems like a lot of people are now trying to specialize these GPT AIs, and I would like to find out about them, as there will likely be a few that could help with my workflows.
I wonder if farriers and ice delivery people grumbled amongst themselves about job security as much as I see devs do it these days. A lot of us are in the business of deprecating people's careers. Hopefully the irony isn't lost.
I’m a skeptic. There is no evidence behind this tweet that the technique works. I may be mistaken, but I don’t see the authors as having a background in the field. Likewise they don’t describe the technique in any detail.
Sounds like snake oil to me, or fake it until you make it.
The proposed method would improve the model any time it didn't know which API to use. This requires the model not to hallucinate knowledge, plus some form of incremental training.
Our jobs are safe.
Either I’ll be well prepared for unemployment or I’ll have plump investment accounts and still be employed. Good outcome regardless.
[0] https://newatlas.com/vertu-ti-luxury-10000-dollar-android-sm...
I hope that we get there, but it won't happen by default.
To me it illustrates that if you have money and want to build something, you don’t need to hire coders or have any coding skills, AI will help.
And if you’re a regular coder, you just got replaced by Jarvis.
(And probably so will I.)
but I support your optimism, because what else are we gonna do
not a specialist, feedback welcome
I feel like for the actual job lots of us do day to day, we’re ridiculously overpaid. I’m not even on a rooftop with tar and a mop! I’m enjoying it but I’m expecting the gravy train to stop eventually.
I think for my next career I’ll do something with… hmm drawing a blank. Let me ask ChatGPT.
…well… it suggested project management.
I hope my mortgage is paid off before AI makes me redundant
edit: I was wrong. The site just doesn't have an http -> https redirect
https://sampleapis.com/api-list/wines
https://twitter.com/sergeykarayev/status/1569377881440276481
https://twitter.com/sergeykarayev/status/1570868002954055682
None of this seems like it would be hard to implement. The memory would be the more out-there part, but even that would be a matter of retrieval.