Funny, my IDE already has templates, snippets, and code completion built in.
I kid, but only a little. Working on proprietary software means if AI doesn’t run locally, it doesn’t run. And it doesn’t offer (me) something worth fighting bureaucracy for.
> Working on proprietary software means if AI doesn’t run locally, it doesn’t run.
100% agree with you on this.
We want to deploy these AI agents locally and also in a self-hosted fashion for enterprise.
The way I imagine this working: the editor becomes a shell, and just as LSPs are swapped in and out for different languages (or work together), there will be APIs exposed for AI agents to plug into. You want an expert Python AI agent? Sure, here you go. You want something for Rust? There we go!
Creating a consistent API layer and getting AI agents to work on top of these APIs is a very important step, and we don't want to be, and don't plan on being, tied down to just OpenAI.
Enterprise and proprietary software require local AI agents, and with OSS pushing ahead at great speed here, I am sure we will get there in the coming months. (I have prototyped with LLAMA2 locally on my MacBook Air and it's okay, but nowhere close to, say, GPT-4.) The early signs are there, and we will definitely be providing this as an option going forward!
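A rough sketch of what that swappable, provider-agnostic layer could look like; the interface names, the local port, and the model names are illustrative assumptions, not our actual API:

```typescript
// Sketch only: a provider-agnostic agent interface so the editor shell can
// swap models the same way it swaps LSPs. All names here are made up.
interface CodeAgentProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Many local runtimes (llama.cpp server, Ollama, vLLM) can expose an
// OpenAI-compatible /v1/chat/completions endpoint, so a single adapter
// covers both the hosted and the self-hosted case by changing baseUrl.
class OpenAICompatibleProvider implements CodeAgentProvider {
  constructor(
    public name: string,
    private baseUrl: string, // e.g. "https://api.openai.com" or "http://localhost:8080"
    private model: string,
    private apiKey?: string,
  ) {}

  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/v1/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        ...(this.apiKey ? { Authorization: `Bearer ${this.apiKey}` } : {}),
      },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}

// The editor just picks whichever provider the workspace is configured for.
const providers: Record<string, CodeAgentProvider> = {
  hosted: new OpenAICompatibleProvider("gpt-4", "https://api.openai.com", "gpt-4", process.env.OPENAI_API_KEY),
  local: new OpenAICompatibleProvider("llama-local", "http://localhost:8080", "llama-2-13b-chat"),
};
```

Since the local and hosted cases sit behind the same interface, the rest of the editor never needs to know which one is answering.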
In my experience I spend way too much time fixing the output of AI assistants rather than working on the next thing. It slows me down, even after accounting for the speed-ups.
And generating the right code on the first attempt is not something I would expect from any engineer (unless we are working with compiled languages). Giving access to simple tools like linters and static analysis goes a long way, not just for humans but also for these AI agents to improve their own work.
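As a hypothetical sketch of that feedback loop (ESLint as the example linter; the helper names are made up): run the linter over whatever the agent produced and feed the diagnostics straight back to it before a human ever reads the diff.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Run ESLint over the files the agent just touched and turn its JSON report
// into plain-text diagnostics the agent can read. ESLint exits non-zero when
// it finds problems, so the report may arrive on the rejected promise.
async function lintFeedback(files: string[]): Promise<string> {
  try {
    const { stdout } = await run("npx", ["eslint", "--format", "json", ...files]);
    return summarize(stdout);
  } catch (err: any) {
    return summarize(err.stdout ?? "[]");
  }
}

function summarize(report: string): string {
  const results: any[] = JSON.parse(report);
  return results
    .flatMap((file) =>
      file.messages.map(
        (m: any) => `${file.filePath}:${m.line} ${m.ruleId ?? "error"}: ${m.message}`,
      ),
    )
    .join("\n");
}

// The output goes straight back into the agent's context, e.g.
// "Your change produced these lint errors, please fix them: ..."
```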
Hey! Author here. That's actually a fair point: in hindsight, I see why the article comes off as 'because AI' without clearly articulating how the developer experience changes. The current set of changes we have are indeed incremental additions.
That being said, this is not accidental: we don't intend to fundamentally disrupt the core workflows of today right away (as another comment said, IDEs of today are indeed built on years of user experience research). But we do see the opportunity to simplify or rethink the developer experience once AI is deeply integrated into every workflow. It's perhaps the same way I'd never have described writing code as a problem, but after having used Copilot, I don't want to go back to not having it.
I'll just have to guess the reimagined IDE is no longer very text-centric, given that without proprietary JavaScript the page is just a green background.
Not yet, but I'd expect we'll eventually get there. Code always remains the source of truth and thus will have its place in every IDE for finer control, but I imagine shifting our primary interaction with code from the editor panel to an agent controller can become a powerful model once these agents become very good. A neat way to think about this is code reviews.
When I give a task to another developer on the team, they go off to understand it, work on it, write tests, run everything, and put it up for review (which is also evaluated by CI first). In this scenario, as reviewers, we already don't have an absolute need to read every line of code, as long as high-level design/project principles are followed and all scenarios are covered by passing tests.
AI agents can become this other developer, picking up and completing end-to-end tasks, but rather than taking hours or days, they take seconds to minutes, so review comments can actually be shared and incorporated much faster, within the IDE itself.
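A minimal sketch of that loop, assuming a hypothetical TaskAgent interface (none of these names come from our actual codebase): the agent drafts a change, the test suite acts as the CI gate, failures go back as feedback, and only a passing diff reaches the human reviewer.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const sh = promisify(execFile);

// Hypothetical agent contract: given a task (and optional feedback from the
// last round), propose a diff plus a way to apply it to the workspace.
interface TaskAgent {
  proposeChange(
    task: string,
    feedback?: string,
  ): Promise<{ diff: string; apply: () => Promise<void> }>;
}

async function completeTask(agent: TaskAgent, task: string, maxRounds = 3): Promise<string> {
  let feedback: string | undefined;
  for (let round = 0; round < maxRounds; round++) {
    const change = await agent.proposeChange(task, feedback);
    await change.apply();
    try {
      await sh("npm", ["test"]); // the same gate CI would apply
      return change.diff;        // surface the passing diff for review
    } catch (err: any) {
      feedback = err.stdout ?? String(err); // test output becomes the next prompt
    }
  }
  throw new Error("No passing change after a few rounds; escalate to a human.");
}
```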
Hey all!
We are building CodeStory, an AI-powered mod of VSCode, and wanted to share our thoughts and vision on what the future of IDEs will look like.
We have some open questions about how we will get there, and would love your opinion and thoughts on this.
Cursor looks very much in the same ballpark and they have done a good job. For us, the most important thing is not just code completion or talking with your code (there are many other tools that can do that), but giving the AI agent the freedom to Cmd+Click, ask for references, and do all the other things we do in the editor.
Each interaction the user has with the editor needs to be thought out for the AI agent. I kind of want to tell the AI agent to "follow these code pointers and figure out how to add a widget on my website" (similar to how you would brief any other engineer on your team).
We want to build a development environment where AI and humans are working together, rather than humans being guided by the AI agent.
We are not there yet, but engineering the editor for these AI agents now will pay off in the future as the underlying models improve.
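To make that concrete, here is a rough sketch (assumed names, not our real implementation) of how Cmd+Click and find-references could be exposed as agent tools on top of VSCode's existing built-in commands:

```typescript
import * as vscode from "vscode";

// Sketch: expose the navigation actions a developer uses (Cmd+Click, find
// references) as tools the agent can call, built on VSCode's built-in
// commands rather than anything model-specific.
async function goToDefinition(uri: vscode.Uri, position: vscode.Position) {
  // Resolves to the same locations a Cmd+Click would jump to.
  return vscode.commands.executeCommand<(vscode.Location | vscode.LocationLink)[]>(
    "vscode.executeDefinitionProvider",
    uri,
    position,
  );
}

async function findReferences(uri: vscode.Uri, position: vscode.Position) {
  return vscode.commands.executeCommand<vscode.Location[]>(
    "vscode.executeReferenceProvider",
    uri,
    position,
  );
}

// The agent-facing tool schema would just describe these in plain terms,
// e.g. { name: "find_references", args: { file, line, column } }, so the
// model can click through code the way a developer does.
```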
100%. I'd like to hear your thoughts on this, but one could argue that IDEs are purpose-built and a lot more powerful for programming. This is exactly why we think building the right tooling would let us give AI models an IDE-like experience for performing the same kinds of tasks a developer can perform with an IDE today.
Hey! I'm the author of this post and this is certainly true! JetBrains products are the most powerful IDEs on that list thanks to the amount of custom tooling they've built for each language/framework, and they provide a great development experience out of the box.
But VSCode's approach of having an extension-first architecture is pretty powerful too (in fact, something I learnt only recently is how much of VSCode's implementation is built on the same extension architecture that is exposed for developers to extend the editor).
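For anyone who hasn't poked at it: a bare-bones extension really is just this (the standard vscode API; the command id is made up):

```typescript
import * as vscode from "vscode";

// A minimal VSCode extension: register a command that shows a message.
// Much of VSCode itself is layered on this same extension architecture.
export function activate(context: vscode.ExtensionContext) {
  const disposable = vscode.commands.registerCommand("demo.helloWorld", () => {
    vscode.window.showInformationMessage("Hello from an extension!");
  });
  context.subscriptions.push(disposable);
}

export function deactivate() {}
```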
Did you look much into Emacs and its huge variety of packages?
Edited to add: it's built with a vast amount of Lisp, and much of the application can be live-modified at runtime via Lisp. If you want an extensible editor, you can learn a lot from Emacs.
Is this a bad thing?
Deciding on the data structures, data flows, and abstractions to build is the hard part of being a software developer, and it works perfectly well on paper.
Turning that into code is the easy part, with or without a boilerplate generator.
AI could provide some different views of project structure (beyond files and folders). AI could use existing IDE tooling?
None of this is a reimagining of the IDE, just some incremental additions to it.