I've had a positive experience building a ChatGPT shell for Emacs [1]. Not having to context-switch between the editor and browser is great. With Emacs being a text paradise, there are all sorts of possible integrations, like org-babel integration for elisp [2] or SwiftUI [3].
In addition to a shell, functions for inserting GPT responses can be pretty neat too. For example, creating org tables [4].
There are plenty (perhaps far too many) tools for doing basically `curl` to OpenAI.
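To be fair, the "basically `curl`" version really is only a few lines. A minimal sketch of a chat completion request, assuming an `OPENAI_API_KEY` environment variable (the body is built in a variable first, so the request only fires if the key is set):

```shell
# Build the JSON body separately so it can be inspected before sending.
body='{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "Say hello"}]
}'

if [ -n "$OPENAI_API_KEY" ]; then
  # The actual "curl to OpenAI" part.
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$body"
else
  echo "OPENAI_API_KEY not set; request body would be: $body"
fi
```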
Local LLM tools are needed, however, and they are much better for deploying systems over terabytes of data at a fraction of the cost.
I use the aichat [1] command line tool a lot for these sorts of ad hoc chats. It takes piped input and is nicely configurable for setting up a variety of system prompts ("roles"), etc.
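For example, something like this — the "commit" role here is made up, you'd define whatever roles you want in aichat's config:

```shell
# Pipe staged changes into aichat under a custom role.
# "commit" is an illustrative role name, not one aichat ships with.
cmd='git diff --staged | aichat -r commit "write a commit message"'
if command -v aichat >/dev/null 2>&1; then
  eval "$cmd"
else
  echo "aichat not installed; would run: $cmd"
fi
```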
If you want to use GPT-4 to manipulate and edit files in your local file system, you can use my CLI tool aider [2]. It's intended for generating and editing code, but you can use it to chat with GPT-4 to read, edit and write any text files on your local file system. If the files are under git source control, it will also commit the changes as they happen.
Here’s a transcript of aider editing the ANSI escape codes in an asciinema screencast recording, for example [3].
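The typical invocation is just aider plus the files you want it to work on (the file names below are examples):

```shell
# Start a GPT-4 chat session scoped to these files; aider edits them
# in place and, inside a git repo, commits the changes as they happen.
cmd='aider README.md notes.txt'
if command -v aider >/dev/null 2>&1; then
  eval "$cmd"
else
  echo "aider not installed; would run: $cmd"
fi
```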
I wonder how many different ways people use to do basic ChatGPT queries.
My preferred method is to run a WhatsApp bot; this way I can easily use the LLM on my phone as well. On a computer I just use WhatsApp Web, which I keep running anyway. This method also natively supports iterated conversations.
The OpenAI API is available to everyone. I've spent well over $100 just trying various things out over the past two months. I was not trying to save on it; you can do quite a lot even on $10. Just make sure to do some napkin maths before you query some endpoint a lot of times. For example, it's a lot easier to spend a lot on DALL-E than it is on GPT-3.5.
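The napkin maths can be a one-liner. A sketch using the roughly $0.002 per 1K tokens that gpt-3.5-turbo cost at the time — the numbers here are assumptions, check the current pricing page:

```shell
# Rough cost estimate: requests * tokens per request * price per 1K tokens.
requests=1000
tokens_per_request=500   # prompt + completion combined, a guess
price_per_1k=0.002       # USD per 1K tokens, gpt-3.5-turbo circa mid-2023
awk -v r="$requests" -v t="$tokens_per_request" -v p="$price_per_1k" \
  'BEGIN { printf "~$%.2f for %d requests\n", r * t / 1000 * p, r }'
# prints: ~$1.00 for 1000 requests
```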
Try a Google search for GitHub projects that do that. Or really any other GPT idea. People are building many copies of everything, so I'm not even going to recommend the one that I'm using because there's probably a better one already :). It's simple code so you can also modify it to your liking.
I keep plugging my own… yet another API invoker, with parallel queries, templates, and config files, written in Go:
https://github.com/tbiehn/thoughtloom
Has some interesting examples, but I expect the population of users to be constrained to the five of us who enjoy the CLI, jq, and writing bash scripts.
[1]: https://xenodium.com/chatgpt-shell-available-on-melpa
[2]: https://xenodium.com/images/chatgpt-shell-available-on-melpa...
[3]: https://xenodium.com/images/chatgpt-shell-available-on-melpa...
[4]: https://raw.githubusercontent.com/xenodium/chatgpt-shell/mai...
EDIT: my mistake, the first link points it out as https://github.com/xenodium/ob-swiftui. Thanks for sharing!
Everything that goes through the tool can be logged to SQLite so this should make it easier to build up comparisons of different models.
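As a toy illustration of the kind of comparison that enables — the schema below is invented, check the real one with `sqlite3 logs.db .schema`:

```shell
# Invented schema: one row per logged response, tagged with the model.
db=$(mktemp)
sqlite3 "$db" <<'SQL'
CREATE TABLE responses (model TEXT, prompt TEXT, response TEXT);
INSERT INTO responses VALUES ('gpt-3.5-turbo', 'hi', 'Hello!');
INSERT INTO responses VALUES ('gpt-4', 'hi', 'Hello there!');
INSERT INTO responses VALUES ('gpt-4', 'bye', 'Goodbye!');
SQL
# Count responses per model - the starting point for any comparison.
out=$(sqlite3 "$db" 'SELECT model, COUNT(*) FROM responses GROUP BY model ORDER BY model;')
echo "$out"
rm -f "$db"
```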
1. https://github.com/go-skynet/LocalAI
[1] https://github.com/sigoden/aichat
[2] https://github.com/paul-gauthier/aider
[3] https://aider.chat/examples/asciinema.html
That, plus some scripts for repetitive stuff.
[1]: https://apps.apple.com/us/app/openai-chatgpt/id6448311069
https://github.com/charmbracelet/mods
If you want to run against GPT-4 (and your API key has access), you can pass "-4" or "--gpt4" as an option.
CORRECTION: Sorry, I was talking about my "llm" tool - https://github.com/simonw/llm - it looks like "mods" does indeed default to 4: https://github.com/charmbracelet/mods/blob/e6352fdd8487ff8fc...
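For the record, with llm that looks something like this — assuming a version of llm that still has the "--gpt4" flag and an API key already configured:

```shell
# Ask for GPT-4 via the flag mentioned above; prompt text is an example.
cmd='llm --gpt4 "What is the capital of France?"'
if command -v llm >/dev/null 2>&1; then
  eval "$cmd"
else
  echo "llm not installed; would run: $cmd"
fi
```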