paxys · 3 years ago
So the AI can learn to use APIs that follow perfect RESTful semantics, are fully documented and intuitively behave exactly like one would expect.

Our jobs are safe.

ForestCritter · 3 years ago
Yep, I worked in a factory assembly line 'Wrangling the robots' and I'm not an engineer. They require watching and they can't do parts of the job. They get hung up easily and require a lot of resetting. There's a good reason factories still hire plenty of manufacturing laborers.
NZ_Matt · 3 years ago
I wouldn't say so. The 'Learning by mistakes' step in the demo shows that the AI can work through these issues; presumably this can be done more efficiently than a human could.
paxys · 3 years ago
The learning-from-mistakes part of the example is: you tried /port_wines but the path is /wines/port instead.

But what if the path is actually /beer and you have to pass a hidden, undocumented query parameter called ?wine=1 for it to give you wines? But then the response still consists of Beer objects (because that's the only thing the API validator would allow), so you have to map all the Beer fields to their equivalent Wine counterparts. But not all of them make sense, so you have to ignore some. Which ones? Ask the engineering team. Turns out the guy who wrote all this left years ago, and no one remembers how it works. Someone digs up a link to a documentation page, but that internal wiki was taken down, so it returns a 404. You ping a sysadmin to see if they kept any backups. He points you to a few PBs' worth of SQL dumps from an internal migration a few years ago and asks you to take a look in those. You simultaneously have to write up a status update for senior leadership, which is due by end of day, and give them a revised launch date for the project. They want to know why it can't be done in half the time.

The day an AI can figure all this out, I will be looking for another career. Until then I'm fine.
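
For the curious, here is roughly what that glue code ends up looking like. This is a minimal sketch only; the /beer endpoint, the ?wine=1 parameter, and the field names are hypothetical, taken straight from the scenario above and not from any real API.

```python
# Minimal sketch of the glue code described above. The /beer endpoint, the
# ?wine=1 parameter, and the Beer/Wine field names are hypothetical, taken
# from the scenario in this comment rather than from any real API.
import requests

# Only Beer fields with a sensible Wine counterpart get mapped; deciding
# which ones those are is exactly the institutional knowledge that left
# with the engineer who wrote the API.
BEER_TO_WINE_FIELDS = {
    "name": "name",
    "brewery": "winery",   # assumption: closest equivalent field
    "abv": "alcohol",
}

def fetch_wines(base_url: str) -> list[dict]:
    # The documented-looking path doesn't exist; the wines actually live
    # behind /beer plus an undocumented query parameter.
    resp = requests.get(f"{base_url}/beer", params={"wine": 1}, timeout=10)
    resp.raise_for_status()
    wines = []
    for beer_shaped in resp.json():
        wines.append({
            wine_key: beer_shaped[beer_key]
            for beer_key, wine_key in BEER_TO_WINE_FIELDS.items()
            if beer_key in beer_shaped
        })  # every other Beer field is silently dropped
    return wines
```

Everything the AI would actually have to discover here lives in people's heads, not in the docs.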

layer8 · 3 years ago
As long as their reasoning capabilities for the general case (System 2 thinking) remain limited, trial and error will only get them so far.
lenkite · 3 years ago
Not really - our jobs are going to get changed to "AI operators". Or more brutally: "AI assistants".
sangnoir · 3 years ago
The automation of telephone systems didn't make front desk workers "PABX assistants" or even "PABX Operators", the automation became a tool and faded into the background.
skor · 3 years ago
The parent comment emphasizes that no matter how big the effort, our tools will ultimately always be imperfect. Therefore we will always have something to do.

ken47 · 3 years ago
Our jobs are safe for now. That's not my takeaway. Our jobs will eventually be "unsafe."
ranguna · 3 years ago
Yep, they're safe in 2023. At the rate things are going, I don't even know what's gonna happen in 2024.
GulpGulp · 3 years ago
Our jobs could eventually become unnecessary
realusername · 3 years ago
You already have Zapier and its competitors, which are filling this need (and they didn't destroy the whole industry).
WXLCKNO · 3 years ago
Many humans can't do this.
tobr · 3 years ago
Most humans can’t do it. Most humans aren’t expected to do it in their jobs, though.
LesZedCB · 3 years ago
lol!

Something funny about the way I interact with ChatGPT: I basically never give a shit about correcting mistakes or bothering with good grammar, because no matter what, it pretty much always understands me. And it's insanely good at solving tip-of-my-tongue type problems.

I guess maybe we should go back to SOAP to eke out a few more months of job security?

TrackerFF · 3 years ago
For now. But how about 5 years from now? Or 10? Or 15?

Given what things looked like 10 years ago, and what was revolutionary back then (the ImageNet challenge), it's hard to comprehend what the state of the art will look like 10 years from now.

tnel77 · 3 years ago
I’m saving and investing like I only have 10 more years of work before AI takes my job.

Either I’ll be well prepared for unemployment or I’ll have plump investment accounts and still be employed. Good outcome regardless.

TechnicolorByte · 3 years ago
Agreed. The network architectures have evolved so much in the past few years alone. Imagine what they can do in just 5 years if they can embed domain-specific knowledge into the current “dumb” statistical word guessers. Ten years is truly unimaginable.

I’m not expecting the singularity within that timeframe nor do I think we need it for lots of disruption in knowledge-based work. Still, I’m leaning towards a near future where these AI tools augment our capabilities more than the alternative where we lose all our jobs.

1-6 · 3 years ago
Now if only the AI could write perfect API documentation...
skeaker · 3 years ago
How did you gather all that from the linked example?
paxys · 3 years ago
Because that's what the linked example showed? Is there a similar example of a more complex case?
bsaul · 3 years ago
I’ve recently thought about the Iron Man movies, where Tony Stark actually designs stuff simply by talking to Jarvis. I’ve always thought « if only I could have that kind of tool available, I too would be able to design crazy stuff ».

We didn’t think « damn, Tony Stark didn’t have to write a line of code, he got rendered obsolete by this damn AI ». That’s how I choose to think about AI tools for developers from now on.

mdgrech23 · 3 years ago
As someone who has been in the game a long time, let me tell you that as our tools have gotten better, user expectations have gotten way higher. I would argue that it's actually harder than ever before to deliver a true first-class product for mobile or web.
visarga · 3 years ago
That's what the hivemind says - AI will automate all our jobs. Well, yes it will automate a lot, but we won't be having the same expectations in a few years. The bar will rise so high we'll still have jobs with AI on top.

There's also a small little thing called competition. When your competitors use AI, your job just got harder.

vasco · 3 years ago
Let's hope in reality it isn't available only to the billionaire owners of software companies. In those movies, everyone else wasn't designing cool suits by talking to voice assistants.
bko · 3 years ago
The marginal cost of technology is pretty close to zero (albeit slightly higher for large LLM inference), so it would make sense that the technology gets distributed widely. Kind of like how everyone has essentially the best smartphone. And it's laughable when someone tries to create a luxury smartphone for $10k [0] and it's just a sub-par Android with leather accents.

If you have a technology with practically zero marginal cost, pricing it very low and distributing it widely would maximize your profit. Not to mention that once it's out of the bag, others will know that it's possible and copy it.

[0] https://newatlas.com/vertu-ti-luxury-10000-dollar-android-sm...

atonse · 3 years ago
It won’t be. Some of the biggest companies in the tech world got that way by building platforms and ecosystems that others build on top of. AWS and Azure are the most obvious examples.

What’s more likely is that someone like a Microsoft will make it part of their cloud offerings so devs like us can build things on top.

Why try to capture every market when you can build tools and take percentages of the rest of the economy?

IncRnd · 3 years ago
Tony Stark is an exception, though. He was once a slave who escaped after he fooled his captor into letting him create a miniature arc reactor that he embedded into his own chest in order to fly to freedom and safety, as well as to magnetically keep shrapnel from invading his heart and killing him. He isn't the normal billionaire owner of a software company (outside of comic books).
ChatGTP · 3 years ago
I'm kind of hoping it is available only to some people. It's not going to be nice when everyone can have an AI design weapons, viruses, etc.
jrumbut · 3 years ago
It works for Tony Stark because he gets the economic benefit of that AI's output.

I hope that we get there, but it won't happen by default.

valine · 3 years ago
Can you think of any category of software that doesn't eventually have an open source alternative? There are many open source projects orders of magnitude more complicated than an LLM: the Linux kernel, Blender, Firefox, etc.

These LLMs aren't inherently complicated to implement, just expensive to train. And if the LLaMA release/leak is anything to go by, we are extremely close to ChatGPT running on consumer hardware.

the_only_law · 3 years ago
It works for Tony Stark because he’s a fictional comic book character and there probably wasn’t that much thought put into it.
thih9 · 3 years ago
I don’t see it like that.

To me it illustrates that if you have money and want to build something, you don’t need to hire coders or have any coding skills; AI will help.

And if you’re a regular coder, you just got replaced by Jarvis.

dimal · 3 years ago
Software development isn’t just “writing code”. It’s knowing what code to write. It’s knowing how to accomplish a goal using code. It’s knowing how to structure all of your code to work together. It’s knowing how to adapt code to meet new requirements. I don’t foresee rich “idea” guys figuring out how to do that any time soon.
rg111 · 3 years ago
Tony Stark inherited tens of billions of dollars. YOU DIDN'T.

And so, if we get Jarvis-level AI, you will get rendered obsolete.

When these tools come, inheriting billionaires will hire 10 people with Tony Stark intelligence as opposed to 10,000 devs with your level of intelligence.

So, don't be in denial that you will be made obsolete.

(And probably so will I.)

preommr · 3 years ago
> We didn’t think « damn, Tony Stark didn’t have to write a line of code, he got rendered obsolete by this damn AI »

Because it's a movie.

In reality an AI like that would be more important than all the other nonsense. But it wouldn't be fun to watch an AI easily unravel the mysteries of magic, time travel, multiverse, etc.

snozolli · 3 years ago
> damn, Tony Stark didn’t have to write a line of code, he got rendered obsolete by this damn AI

I was watching one of the movies and Stark was flying around some baddie castle/base. He tells Jarvis to locate all the missile placements and, when that's done, he tells Jarvis to target them.

That was when I thought, "oh, he's just a middle manager now" and lost interest.

I'm less concerned that AI tools will replace developers in the short term, and more concerned that they will encourage incompetent management to try their hand at "contributing" to projects using AI tools, creating more headaches for the developers. Kind of like how Blackberries made managers feel super productive firing off emails, while adding significantly to the workload of those under them.

swyx · 3 years ago
it's also escapist fiction, while tech in the real world very much has a disproportionate negative impact on some parts of the population :)

but i support your optimism because what else are we gonna do

mistrial9 · 3 years ago
In the Toolformer paper, it seems what they are doing is:

* take a performant language model (copied from some other team e.g. GPT-J)

* show the machinery that it can learn new tokens using one or more tools from a provided set of tools, e.g. a WikiSearch tool

* demonstrate that the full API call string has some effect, e.g. no reply, useless content, or content that helps predict the next token. Save that complete API call string as an entry

* run a learning session with tool calls to APIs, improve the model for resolving known or new tokens (queries with answers)

* show the model that it can try new combinations itself (!)

* let the machinery try API calls itself to resolve tokens

* minimize loss functions for API results

Comments: this is strikingly different from some RDF hard-wired data store; it uses huge numbers of failed attempts to find results that work. The results that work are complete API calls, as strings. It does not seem to care about "learning" the contents of the API calls, just that it is methodical about remembering API calls that work and retrying those.

not a specialist, feedback welcome
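
If it helps, here is a rough sketch of the keep/discard decision the list above describes, as I read it: a sampled API call is kept only if inserting the call together with its result makes the following tokens easier to predict by some margin. The continuation_loss function below is a placeholder for the model's weighted loss, an assumption on my part rather than anything from the paper's code.

```python
# Rough sketch of the filtering step described above (my reading of the
# approach, not the paper's actual code). `continuation_loss` is a
# placeholder for the language model's loss on a continuation given a prefix.

def continuation_loss(prefix: str, continuation: str) -> float:
    """Placeholder: return the LM's loss on `continuation` given `prefix`."""
    raise NotImplementedError("plug a real language model in here")

def keep_api_call(text_before: str, text_after: str,
                  call: str, result: str, threshold: float = 1.0) -> bool:
    # Loss when the call *and* its result are inserted before the continuation.
    loss_with_result = continuation_loss(
        text_before + f"[{call} -> {result}] ", text_after)
    # Baselines: no call at all, and the call without its result.
    loss_plain = continuation_loss(text_before, text_after)
    loss_call_only = continuation_loss(text_before + f"[{call}] ", text_after)
    # Keep the call only if the result helps by at least `threshold`;
    # the calls that pass the filter become training data for fine-tuning.
    return min(loss_plain, loss_call_only) - loss_with_result >= threshold
```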

mountainriver · 3 years ago
Yeah, there are some pieces missing with it; it's still not really understanding mutations and state, which would be needed, but it's a huge step in the right direction.
tomalaci · 3 years ago
Is there a good way/site/method to keep up to date with the latest AI projects? Seems like a lot of people are now trying to specialize these GPT AIs and I would like to find out about them as there will likely be a few that could help me with my workflows.
4dahalibut · 3 years ago
I love this newsletter! https://www.boteatbrain.com/ it's written in a very accessible style
Waterluvian · 3 years ago
I wonder if farriers and ice deliverypeople mumbled amongst themselves about job security as much as I see devs do it these days. A lot of us are in the business of deprecating people’s careers. Hopefully the irony isn’t lost.

I feel like for the actual job lots of us do day to day, we’re ridiculously overpaid. I’m not even on a rooftop with tar and a mop! I’m enjoying it but I’m expecting the gravy train to stop eventually.

I think for my next career I’ll do something with… hmm drawing a blank. Let me ask ChatGPT.

…well… it suggested project management.

Der_Einzige · 3 years ago
It's a sad pill to swallow to realize that AI is coming for high end knowledge work first. A great irony to be sure.

I hope my mortgage is paid off before AI makes me redundant

jollyllama · 3 years ago
It's like that shot from Jurassic Park where the raptors figure out how to use the door.
optimalsolver · 3 years ago
Clever girl...
ItsABytecode · 3 years ago
I can't really infer much from this demo since the documentation it reads and the API it calls aren't public

edit: I was wrong. The site just doesn't have an http -> https redirect

https://sampleapis.com/api-list/wines

lumost · 3 years ago
I’m a skeptic. There is no evidence behind this tweet that the technique works. I may be mistaken, but I don’t see the authors as having a background in the field. Likewise they don’t describe the technique in any detail.

Sounds like snake oil to me, or fake it until you make it.

famouswaffles · 3 years ago
GPT-3 has demonstrably been able to execute Python and make API requests for quite some time now. You can test it too. See here:

https://twitter.com/sergeykarayev/status/1569377881440276481

https://twitter.com/sergeykarayev/status/1570868002954055682

None of this seems like it would be hard to implement. The memory would be the more out-there thing, but even that would be a matter of retrieval.
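
A minimal sketch of the generate-then-execute loop those tweets demonstrate (the prompt wording and the generate_code placeholder are assumptions, not anyone's published implementation; a real version would swap in an actual LLM call and proper sandboxing):

```python
# Minimal sketch of the loop: ask the model for Python that performs the
# request, run it, and hand the captured output back as the "API result".
# `generate_code` is a placeholder for an actual LLM call.
import io
import contextlib

def generate_code(task: str) -> str:
    """Placeholder: return Python source produced by an LLM for `task`."""
    raise NotImplementedError("call your LLM of choice here")

def run_generated_request(task: str) -> str:
    code = generate_code(
        "Write Python using `requests` that does the following and prints "
        f"the JSON response: {task}"
    )
    buffer = io.StringIO()
    # NOTE: exec'ing model output is as dangerous as it sounds; a real
    # system needs sandboxing, not just stdout capture.
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue()
```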

lumost · 3 years ago
The proposed method would improve the model anytime it didn't know which API to use. This requires the model not to hallucinate knowledge, plus some form of incremental training.
singularity2001 · 3 years ago
I tried something similar with terminal copilot yesterday. The approach works for bash calls; why not for web calls?