The comments in this thread are depressing, I expected much better from HN.
The AI features are just one minor aspect of this release. They are optional, and you can change the URL to point to a local LLM, yet people are pretending that all your data is going to be sent to OpenAI if you update.
I’m not sure if people are being intentionally daft or are just not reading anything past the word “AI” (which, again, isn’t even listed as the top feature of this release).
If you don’t want to use it, don’t put in an API key. It’s not like you are going to accidentally enable it.
iTerm2 is one of the most solid pieces of software I use on a daily basis, to the point that I often forget it's not a default/included app. It has a million configuration options, and it makes complete sense for them to /offer/ /optional/ AI/LLM features.
Depends on the vantage point. Have you worked in any regulated industries? I can see iTerm joining internal software ban lists because of its AI integration (even if it's off by default).
Security departments at these corps are constantly pleading with their staff to "please stop sharing corp data with LLMs, you're not allowed to do that", all the while staff feel under pressure to deliver faster and reach for whatever tools are available.
The temptation to use it will be irresistible to many, especially juniors/temps competing for limited positions and promotions.
From a regulated corp's point of view, why would they risk it and rely on individual staff conscience, knowledge, and ability to estimate risk? Better to neutralise the risk from the outset by banning use of the software. There are plenty of other terminals where this can't be enabled at all by over-excited staff.
If someone wants to use ChatGPT with their terminal it is not really much of a roadblock to use the LLM's web interface and copy/paste between that and the terminal.
I'd expect, then, that a security department worried about people not obeying a "don't use unauthorized LLMs" policy would be blocking access at the network level.
Following that logic, regulated industries would be going after anything resembling Microsoft Office with a flamethrower. It would be product suicide for any piece of software, such as Microsoft Office or Windows, to offer even optional AI capabilities.
> I’m not sure if people are being intentionally daft or are just not reading anything past the word “AI”
I think this demonstrates the risks of jumping on a bandwagon. When software companies (in general, not iTerm2 specifically) overuse a term, including outright lying to attach a buzzword to basic features that have nothing to do with it, many people respond with an equal and opposite reaction: distrusting use of the term altogether.
We trusted Google with a lot of our info at one point; then they started to screw us.
Are people overreacting to something not enabled by default? Quite possibly, but literally today OpenAI is getting in trouble for almost certainly using Scarlett Johansson's voice, even after she specifically told them "no". They're already giving every indication that they don't care about the consequences of abuse.
And the URL for the AI API shouldn't be buried in the advanced settings.
I can see the perception/concern being different from the technical reality. I just did the update myself, briefly saw something about "AI Term" or other, and finished the update. Afterward I was wondering how to get details on what that meant, but searching "AI" in the iTerm2 Help menu shows no results. If I hadn't already read this post and the comments, I would be concerned, as should anyone who installed without a detailed understanding.
There are clear explanations in the release notes and the wiki entry linked from the relevant place in the preference pane [1]. The full release note is displayed before updating. There are numerous comments here explaining how it's impossible to accidentally enable the feature. It's opt-in, you have to input a paid API key, you can use an offline model instead, and the data it sends is fully customizable and by default limited to the output of "uname" and the prompt that you explicitly enter.
Yet people are ignoring all of that and writing all sorts of misinformation.
iTerm2 is featureful yet solid, constantly improved on, doesn't work against the user, and is free. I've submitted patches before and the author was nice and responsive. The AI feature is minimal and non-intrusive, and it doesn't advertise its existence once you decide not to opt in, unlike commercial products hyped up about AI. It's thankless work even without HN piling on, and the author deserves much better.
[1]: https://gitlab.com/gnachman/iterm2/-/wikis/AI-Prompt
These aren't mutually exclusive—it's perfectly possible to be fully aware that this version of iTerm introduces optional AI and be concerned about it. Dismissing these concerns as people "ignoring [the optional aspect] and writing all sorts of misinformation" is disingenuous and unfair.
The most obvious concern is that it becomes non-optional in future, but there are plenty of related concerns ranging all the way up to the general principle of the use of AI technology.
I didn't see any of that when pressing the update button on the updater dialog box that automatically pops up (which I did as usual, since iTerm2 updates typically don't have such sneaky surprises). Only after the update was there a little slideshow, with the AI stuff hidden somewhere on the 3rd or 4th slide.
Yeah, this state of discussion saddens me. There are so many other features in the release notes I've yet to digest. This release has been a long time coming, yet, as a daily iTerm user, I never felt I was really missing anything. It works and it works well. But I'm certain there are a few things here I'll be using soon. I have been donating for a long time, and shall continue to do so.
https://techhub.social/@gnachman/112481098349565431
https://techhub.social/@gnachman/112481098800427110
I'm happy to discuss the tradeoffs.
There's a change coming in the next dot release so that managed environments can disable all generative AI features. I'll keep an eye out for what others do in this regard to support enterprise users.
If they want to allow extensibility for the explicit purpose of LLM integration, then why not just… make an API?
They use OpenAI's API with the possibility to set a custom URL, so... done. This makes iTerm compatible with any LLM as long as it implements the same API.
OpenAI's API is the de facto standard for now, it's not up to iTerm to define a standard.
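To make "the same API" concrete, here is a rough sketch of the de facto request shape. This is purely illustrative, not iTerm2's actual code; the endpoint, headers, and fields are just the published OpenAI chat-completions API, and any backend that accepts this shape can sit behind a custom URL.

    # Illustrative only: the de facto "OpenAI-compatible" chat request.
    # Not iTerm2's implementation; any backend accepting this shape works.
    import json
    import os
    import urllib.request

    payload = {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "Return shell commands only, no commentary."},
            {"role": "user", "content": "show the 10 largest files in this directory"},
        ],
    }

    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )

    # The reply text lives in choices[0].message.content.
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])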
It doesn't look like there's the ability to change the URL. The only options are to change the API key, model name (from a predefined list), the prompt, and the token limit.
I hate "AI integration" rubbish as much as everybody else. But in this case the author did it well - it's disabled by default, it won't work unless you decide to paste your key and so on.
A lot of the comments mention the AI inclusion from an LLM-is-everywhere point of view. I'm also a little confused about why behaviour like that is in a terminal rather than a shell?
To my mind I just want the terminal to render text and handle input, and then it's my shell's job to define behaviour of commands etc.
I find that a super helpful distinction: what if you like iTerm but want a different shell like fish or xonsh? How does the LLM integrate there? Is it still gonna spit out zsh commands?
I'm not an Apple user, so maybe I'm missing something about iTerm?
iTerm2 does a bunch of things with native controls that would not be the same in the shell. E.g. tmux integration allows the windows/panes etc. of a tmux session to show up as actual native panes in iTerm2.
The composer is a small native popup that allows you to edit a command using a native textbox instead of interacting with the terminal, and then send it all at once. The AI stuff hooks into this.
iTerm is a feature-packed terminal emulator that's had shell-integration and various smart features to automatically trigger actions based on the terminal text content for a long time, long before the current AI wave.
That makes sense then! I hadn't realised this about iTerm since I've never used it; it seems like a blurring of the lines between shell and terminal that I wouldn't want, but maybe I'm not the target user.
So what are the good Mac-centric alternatives for folks who don't want OpenAI snooping around their terminal? Warp already went all-in on AI and cloud, and now iTerm is headed down that same path.
Don't be like that. Even as it is, iTerm is the furthest thing from Warp. The AI API is up to you to hook up using your own provider key; you don't have to use it, and everything else remains the same. And it's reasonable to assume they chose OpenAI for now because it has become the unofficial de facto standard; should a universal one emerge, I'm fairly confident iTerm would adopt it. iTerm's been around long enough and deserves some goodwill, no?
You can just not turn it on. This is the mildest AI integration of any recent terminal I've seen. Also, from what other commenters are saying, you can set the URL, so you can use a local LLM if you want.
This manufactured outrage is absurd. iTerm2 has been the most solid and conservative terminal I’ve ever used and people are pretending they jumped the shark with this feature.
Much like with the whole "web3" crypto craze before it, I think it behooves us to push back pretty loudly on everyone who buys into the grift-of-the-week.
Nobody actually believes OpenAI is giving away billions of dollars in free compute just so we don't have to memorize awk syntax...
I probably hate "AI" more than you, but let's be fair: the author did well in this case. It's disabled and you need to actively enable it (for the 2 folks who actually want it). We should commend people for doing it this way rather than using dark patterns, calling home without consent (or without giving you any other option), forcing local apps into the cloud, etc.
The feature requires an OpenAI API key, so it's not even on by default; you have to configure it before you can use it. Not to mention that you can simply not use it, and boom, no communication will ever be made with OpenAI even if it's configured. That is a terrible use of the word "snooping".
Seconded. WezTerm is such a great terminal, and the Lua scripting possibilities, including communication with Neovim, are really powerful. Also, it runs everywhere, even on BSD. I've switched on all my systems.
WezTerm is pretty good. There's little to choose between the two, but I use WezTerm because it's cross-platform and I like to have the same experience on Mac and Linux.
Then be careful that you're not accidentally clicking the Update button on the updater popup. There was no upfront warning about the AI integration there.
Nothing in the changelog indicated that “OpenAI is now snooping around”. There is an optional feature that you need to set up yourself if you want to use it. Why do you feel like you need to deliberately misunderstand the post as soon as you read the term “AI” somewhere?
None of that is remotely clear when updating through the iTerm2 auto-update popup for a minor version update.
The Preferences panel doesn't have a single indicator which says whether the AI integration is activated or not. It's probably just bad UI design and not mischief, but I was instantly put off by what initially looked like a dark pattern.
PS: even the changelog doesn't explicitly state that the feature is disabled by default, only indirectly by stating that one needs an OpenAI key because requests cost money.
iTerm is such an amazing piece of software. It's one of the first things I install. Glad the settings dialog had a "donate" button tucked into the corner; did that straight away. The value for money is absurd.
What are the security implications of this? Can you really trust an external entity arbitrarily sending commands as responses? The attack surface is huge, and I expect AI integration will look as shocking as telnet support at some point in the future.
> A new AI feature in the Toolbelt, "Codecierge", lets you set a goal and then walks you step-by-step to completing it by watching the terminal contents
What exactly is “watching the terminal contents”? Does this happen locally or is data sent to a third party?
If a third party is involved, what data is shared exactly?
Well, the very next sentence after your quote is "It requires you to supply an OpenAI API key" which should answer your question.
Here's the default prompt:
> Return commands suitable for copy/pasting into \(shell) on \(uname). Do NOT include commentary NOR Markdown triple-backtick code blocks as your whole response will be copied into my terminal automatically.
> The script should do this: \(ai.prompt)
You can set a custom URL to use Ollama which is OpenAI API compatible. Llama3 8b runs quite fast for me on a M1 Max. I'll be making use of this feature I think (haven't tried it yet).
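For anyone wanting to try the local route, here is a minimal sketch of the same chat request pointed at Ollama instead. This is only an illustration of "OpenAI API compatible", not anything iTerm2 ships; it assumes a stock Ollama install serving on its default port 11434 with the llama3 model pulled. Only the base URL and model name change compared to the hosted API.

    # Sketch: the same OpenAI-style chat request, sent to a local Ollama server.
    # Assumes `ollama serve` on the default port and `ollama pull llama3`.
    import json
    import urllib.request

    payload = {
        "model": "llama3",
        "messages": [
            {"role": "user", "content": "one-liner to count lines in every *.py file"},
        ],
    }

    req = urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",  # only the URL differs
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},  # no real API key needed locally
    )

    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])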
Alacritty? Kitty? Hyper?
iTerm, since they don't do that. Just keep it disabled.
Unless you consider any terminal that lets you make network calls to OpenAI = OpenAI snooping around.
It has been a while since the last release… I don’t mind staying outdated for some more time.
Also, I'm not sure if the OpenAI thingy is mandatory.