I fired up the browser DevTools to see how this works. It submits the code directly to the OpenAI API with the following options (I like that it does this in the browser rather than forwarding my API key to the useadrenaline.com server, where it might end up logged):
{
  "model": "code-davinci-edit-001",
  "input": "### Code goes here ###",
  "instruction": "Identify and fix all bugs in this Python code."
}
Then the application itself has a nice client-side diff presentation.
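For reference, here's roughly the same call made from a script with Python's requests library. This is a minimal sketch based only on the captured request above; reading the API key from an environment variable is my own assumption:

  import os
  import requests

  # Reproduce the call captured above, but from a script instead of the browser.
  resp = requests.post(
      "https://api.openai.com/v1/edits",
      headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
      json={
          "model": "code-davinci-edit-001",
          "input": "### Code goes here ###",
          "instruction": "Identify and fix all bugs in this Python code.",
      },
      timeout=60,
  )
  resp.raise_for_status()
  # The Edits API returns the edited code in choices[0].text.
  print(resp.json()["choices"][0]["text"])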
That was for the "lint" button - I didn't run this experiment for the "debug" button.
I have a related experiment that seemed to mostly work. What I did was give it the project directory listing first, along with the request, and ask which files it needed. The other part was giving it a specific format for listing the file updates, with each file name first.
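Roughly what that flow looked like (a sketch with made-up prompt wording and a hypothetical project; ask() just wraps a plain completion call, which isn't necessarily the model you'd use):

  import os
  from pathlib import Path
  import openai

  openai.api_key = os.environ["OPENAI_API_KEY"]

  def ask(prompt: str) -> str:
      # Plain completion call; any of the davinci models works for this sketch.
      resp = openai.Completion.create(
          model="text-davinci-003", prompt=prompt, max_tokens=2000, temperature=0
      )
      return resp["choices"][0]["text"]

  project_dir = "myproject"  # hypothetical project
  request = "Add error handling to the CSV importer."

  # Step 1: show only the directory listing and ask which files it needs.
  listing = "\n".join(sorted(os.listdir(project_dir)))
  needed = ask(
      f"Project files:\n{listing}\n\nRequest: {request}\n"
      "Which of these files do you need to see? Reply with one file name per line."
  )

  # Step 2: send those files and ask for updates in a fixed format,
  # file name first, so the reply can be split back into files.
  files = {name: (Path(project_dir) / name).read_text()
           for name in needed.splitlines() if name.strip()}
  context = "\n\n".join(f"### {name}\n{body}" for name, body in files.items())
  answer = ask(
      f"{context}\n\nRequest: {request}\n"
      "Reply with each updated file, starting with a line '### <file name>' "
      "followed by the full new contents of that file."
  )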
What I will probably actually do for aidev.codes, when I get a chance, is use OpenAI's embeddings with a vector search to pull relevant snippets into the prompt context. Or possibly use the gpt-index project, which I think does that for me.
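The embeddings idea is basically this; a sketch using the openai library and plain cosine similarity in NumPy rather than a real vector store, with made-up snippets and a made-up query:

  import os
  import numpy as np
  import openai

  openai.api_key = os.environ["OPENAI_API_KEY"]

  def embed(texts):
      resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
      return np.array([d["embedding"] for d in resp["data"]])

  # Index the code base once; in practice you'd chunk files and cache the vectors.
  snippets = [
      "def load_csv(path): ...",
      "class UserSession: ...",
      "def render_diff(a, b): ...",
  ]
  snippet_vecs = embed(snippets)

  # At query time, embed the request and put the most similar snippets in the prompt.
  query = "fix the bug in the CSV loader"
  qvec = embed([query])[0]
  scores = snippet_vecs @ qvec / (
      np.linalg.norm(snippet_vecs, axis=1) * np.linalg.norm(qvec)
  )
  top = [snippets[i] for i in np.argsort(scores)[::-1][:2]]
  prompt_context = "\n\n".join(top)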
This is why it may be tough for startups without in-house ML expertise, proprietary weights, and the resources to continually train them to retain a moat. Innovations on the base layer seem to eat light packaging layers for breakfast. I knew a guy who got early access to the GPT-3 beta a few years ago and made a site where you could upload apartment leases and get an explanation. Now with ChatGPT you can just ask for what you want. Of course, both his GPT-3 wrapper app and ChatGPT give inaccurate but dangerously plausible-sounding answers, but that's a different problem.
This is literally a web app. It’s not meeting me anywhere OpenAI’s web app isn’t. There are VS Code plugins people have built that put ChatGPT in your IDE by reverse-engineering the API (I wrote one).
Also, without access to the model (e.g. when merely calling OpenAI’s API), no one’s refining it. They need access to the actual model (e.g. BLOOM).
Not with their current implementation; it's literally just making API calls to OpenAI GPT models. They can improve their prompts, but it'll never be better than what OpenAI offers as a first party.
Doesn't seem to work after pasting an API key (just keeps asking for the API key over and over). Also seems to force email input to subscribe to a Mailchimp list.
Has anyone been able to get OpenAI to increase their rate limit for Codex (code-davinci-002)?
I have a somewhat related service, https://aidev.codes, but I have to default to using text-davinci-003 instead because the code-davinci-002 rate limit is very small (10-20 requests per minute).
I have been trying to contact their support about it for a month without any response.
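For anyone hitting the same limit, a simple way to cope is to retry 429s with backoff and fall back to text-davinci-003; a rough sketch (backoff numbers and max_tokens are made up):

  import os
  import time
  import openai

  openai.api_key = os.environ["OPENAI_API_KEY"]

  def complete(prompt: str, max_retries: int = 5) -> str:
      # Try code-davinci-002 first; on repeated 429s fall back to text-davinci-003.
      for model in ("code-davinci-002", "text-davinci-003"):
          for attempt in range(max_retries):
              try:
                  resp = openai.Completion.create(
                      model=model, prompt=prompt, max_tokens=1000
                  )
                  return resp["choices"][0]["text"]
              except openai.error.RateLimitError:
                  time.sleep(2 ** attempt)  # 1s, 2s, 4s, ... between retries
      raise RuntimeError("rate limited on every attempt")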
Is that Adrenaline's own model (OpenAI lets you train your own model in their environment, right?), or OpenAI's model?
I would recommend changing it.
I will be super interested when I can run this against a whole git repo codebase instead of a single file.
Also, it would be nice if there were a WebStorm plugin.
Also, presumably they will continue to refine beyond what ChatGPT is doing with better models, smarter prompts, etc.
Using it as a standalone website looks fun too, can't wait to dig into this!
You should also set your site to force HTTPS.
xhr.js:162 Refused to set unsafe header "User-Agent"
POST https://api.openai.com/v1/edits 429