cchance · a year ago
Seriously confused: the article first screws up royally by saying it's not open source, which they correct, but then discusses "shoved down their throats" ... a feature that exists but is not enabled by default and doesn't even work if you don't provide a key.

It's absolutely ridiculous that supporting something that's opt-in, not opt-out, is causing a ruckus lol, people really have too much fuckin time on their hands to bitch about things.

This is like bitching that Firefox allows us to enable a proxy server, and throwing a fit that a feature exists even as an option to be enabled.

verandaguy · a year ago
You're right across the board. My only issue with it is general-purpose AI fatigue. Everywhere I look, blog posts use the same generic-looking AI art (I can't quite put it into words, but you probably know the look I'm talking about), social media posts are written up using genAI (e.g. Linkedin will now ask you if you want AI to write your post for you -- though let's be honest, original thoughts are few and far between on there to begin with), and while interviewing recently I received multiple warnings about disabling any AI assistants in my editor (to me, it's kind of a bummer that that's a big enough issue to mention at all).

I have, in principle, nothing against an opt-in feature that requires unmistakable user consent and a specific sequence of actions to enable. I'm just kinda tired of AI in general, and I'm also worried about potential licensing issues that may arise if you use genAI in your terminal to write scripts that weren't permissively licensed before being used as part of a training set. That's nothing new though, I had, and have, the same concerns with Github Copilot.

I also recognize that my complaint is pretty personal (not counting the licensing thing). My low-level annoyance with genAI isn't something the AI industry at large should seek to resolve. Maybe I'm more set in my ways than I should be at this point in my life. Either way, it's a factor for me and a few other tech people I know.

__loam · a year ago
> Everywhere I look, blog posts use the same generic-looking AI art (I can't quite put it into words, but you probably know the look I'm talking about)

They got that AI grease on them

wormius · a year ago
The best is going to boingboing, and seeing that shit art everywhere, while they have posts whining about AI art. Be Consistent. You're just being part of the problem if you can't even abide by the basics.

I never thought I would end up hating the whole tech world so much, and I thought "Crypto" was peak - but that was just a bunch of scammy dudes and rugpulls for suckers. This? Everyone is suckers. In theory there's a case to be made for it, but I trust none of the entities involved in pushing this out.

For about 5 years I thought MS was going to do something good. WSL2 was actually good tech and they seemed to uh... "embrace" open source. But since 2020 I feel like things are just going downhill.

My inner old man yells at the lawnmowermanchild : GET OFF MY LAWN.

SoftTalker · a year ago
> while interviewing recently I received multiple warnings about disabling any AI assistants in my editor

Weird. Does the company forbid its staff to use AI assistants?

I get that they want to find out what you know. If you know how to solve problems using an AI, isn't that what they are going to (increasingly) expect you to do on the job?

In fact, demonstrating that you can effectively use AI to develop, would seem to me to be something they'd want to see.

joemi · a year ago
I don't think it's uncommon for a feature to start as opt-in, then turn to opt-out, then finally turn to built-in, so it makes sense to me to be wary if the feature is something you don't want in something as typically light and not overburdened by features as a terminal emulator.
dewey · a year ago
Yes, but you have to take into account what kind of product it is, who the developer is, and their track record.

It's different when a paid or ad-supported product wants to increase the number of people who use a feature vs. an open source developer adding a feature for people to use while gaining no benefit from people using it. iTerm already has a lot of advanced options and this is just one more of them.

sangnoir · a year ago
> It's absolutely ridiculous that supporting something that's opt-in, not opt-out, is causing a ruckus

iTerm is pretty extensible, and there are other ways of making the AI bloat (IMO) opt-in, without including it in the core software.

The biggest issue for me is that it increases the attack surface of iTerm2 with no tangible benefit (to me); I'd be similarly upset if they added an opt-in "Share on Facebook/StackOverflow" feature. I'd seriously consider switching to a purist fork that doesn't integrate social-media sharing as a core feature of a terminal app.

benwaffle · a year ago
> The biggest issue for me is that it increases the attack surface

What's your threat model?

mvdtnz · a year ago
> people really have too much fuckin time on their hands to bitch about things

I always find it so puzzling when people say this. If this author has "too much time" what would the ideal society look like to you? Would you prefer to live in a society where every person is worked to the bone for optimal productivity, such that there is not even a spare 30 minutes in their week to write something that isn't generating economic output? I really want to know what you mean when you say this.

justanotherjoe · a year ago
I want to put my hand on people's shoulders and calmly say 'take it easy'. So many times when I'm reading hn.

Deleted Comment

this_steve_j · a year ago
Brave’s crypto reward features might be a better analogy, but I like the comparison.

Fortunately I get paid to throw shade. Here is a free sample:

“Unsupervised use of LLMs can increase the risk of exposing sensitive information. Mitigation strategies and capabilities are not mature.

“Exploiting AI requires rapidly increasing the capital expenditures (input) per worker while the horizon for productivity gains (output) is uncharted and unknown.”

joshstrange · a year ago
Except LLMs have real value today, unlike crypto, which is mostly used for scams and speculative trading. That’s what makes this different from Brave. Also, I don’t think Brave’s CEO has any integrity, whereas the iTerm2 developer has been making an incredible product for free for over a decade.
akira2501 · a year ago
> It's absolutely ridiculous that supporting something that's opt-in, not opt-out, is causing a ruckus lol, people really have too much fuckin time on their hands to bitch about things.

People have preferences. They find their preferences meaningful to them. This means there will always be a healthy competitive market of alternatives to choose from in order to serve those preferences. This is not a bad outcome. Why do you find it "worthy of ridicule?"

m463 · a year ago
> there will always be a healthy competitive market of alternatives

I'm not complaining about iTerm2, but this statement on its own is not true.

For example - a healthy competitive market for phones that respect your privacy. Cars without touchscreen controls for everything. Televisions that are not "smart".

Honestly, I hope that iterm2 gets a "local AI" feature.

Running a local LLM or stable diffusion is fun.

whimsicalism · a year ago
preferences aren't above criticism. i see many people on the internet with the misapprehension that just because something is "an opinion," that somehow ought to shield them from people being able to say "that's a dumb opinion"
benced · a year ago
Because they’re going beyond their personal preference - which could be fulfilled by never using this feature - by trying to press their legitimate preferences onto others by blackballing companies that even touch LLMs.
thih9 · a year ago
This has been fixed now. The article’s main point still stands IMO, i.e. that iTerm2 focused on OpenAI by default and not on some local workflow.

Optional or not, I’d like core features to be privacy friendly and provider agnostic. Otherwise a plugin might be a better fit.

> I think that one of the greatest errors that was made with putting this in iTerm2 was making a big show of it, and by not letting you use local models (such as with Ollama) instead of having OpenAI be the only option.

ephimetheus · a year ago
I think you can hardly call it “focused on” when it’s one feature out of many, many updates made to the software. Also, I believe you can retarget the calls to a local instance of a model behind an OpenAI-compatible API, and it will happily use that.

Seems like lots of knee jerking going on.

joshstrange · a year ago
You can change the endpoint, which anyone could learn from reading the comments on the release yesterday or reading the wiki

> I think that one of the greatest errors that was made with putting this in iTerm2 was making a big show of it, and by not letting you use local models (such as with Ollama) instead of having OpenAI be the only option.

There is not a single line in this that is true. A big show was not made, and you can use Ollama with it. The “big show” was made by other people, not the developer behind iTerm2.

hoherd · a year ago
Having a proxy setting actually is a problem, which is why, for example, Microsoft Windows Server lets you create a policy that prevents users from configuring it.

Codecierge allows terminal scrollback, and its presumably unredacted data, to be sent to a third party. It is a conduit for data exfiltration and may violate a whole bunch of compliance policies, so if it cannot be disabled, it may put companies in violation of those compliance certifications.

mikl · a year ago
Features you don’t use still add complexity, bugs and potential security and privacy issues.
paulmd · a year ago
You can always fork it if you disagree with the direction of development.

Everyone of course knows perfectly well that it’s not anywhere near that level of concern in reality; one might say it’s just concern trolling, in fact.

But if you and enough other people really do feel strongly enough, you can maintain a security-focused fork that… removes an optional LLM thing that requires manually entering a key. Sure.

skyyler · a year ago
A proxy server is not a controversial feature.

The proxy server was not created through petabyte-scale plagiarism.

A proxy server does not use half a million kilowatt hours of electricity a day.

This is nothing like complaining that Firefox allows you to enable a proxy server.

I use ChatGPT but I also think the AI detractors have some good points...

lxgr · a year ago
Fortunately, the iTerm feature is not mandatory, nor are they now sponsored by OpenAI or neglecting other "duties" (it's a free and open-source project) as far as I can tell.

iTerm has always been an "everything and the kitchen sink" type of project/software. If you want minimalism, especially in the interest of security, it's definitely not the terminal emulator for you – its attack surface must be massive.

BobbyJo · a year ago
If you think AI is bad, wait til you hear about humans...

I'm being facetious, but my point is that raw power/data usage isn't by itself a bad thing, as long as it is providing commensurate value. Now you can argue they don't do that yet, but that would require a lot more nuance than "using resources bad".

planetafro · a year ago
Did you actually read the article? The conclusion is exactly what you state. No big deal. It's just a meandering and fun read. YMMV.
adamomada · a year ago
I got the sense that he’s speaking of the same people the author of the post is speaking of in the article, not the author himself
lordgrenville · a year ago
I love iTerm2. It has a gazillion features, most of which I don't want and never use (but I love the small subset I do use). I saw the new AI stuff on the release notes, thought, "ugh, lame, I'll never use that", and moved on with my day.
andy99 · a year ago
I use iterm2 as a default and haven't given it any thought or looked into to it.

But "AI powered" very often implies spying on you and sending home everything it can which has me a bit concerned. Is this true or likely to become true of iterm2?

Edit: looking around the thread and at other conversations like https://gitlab.com/gnachman/iterm2/-/issues/11475 it's not just me with this concern

kstrauser · a year ago
It is not true today. It seems highly unlikely to become true.
abnercoimbre · a year ago
Plug: This entire thread really doubles my motivation to get Terminal Click [0] out the door even sooner. I'm an indie dev who wants to sell an offline binary for the equivalent of a night out at the movies, probably cheaper.

Feeling a bit like a dying breed.

[0] https://terminal.click

paulmd · a year ago
If the concern is that iTerm2 can’t get through a finance-industry audit anymore, why would anyone install a shell (i.e. extremely privileged software) from some rando? What’s your plan for providing security audits and ongoing patching and updates to PCI-level compliance, and who does your certifications?

What are the odds that this isn’t a ghost repo in 5 years let alone 25?

adamomada · a year ago
iTerm2 is so good, it’s worth using macOS just for the crazy good terminal. I love the quake term hotkey access and while I’m sure other terminals can easily do it too, I think it might have been the first?
cromka · a year ago
I believe Yakuake/Kuake for KDE first introduced it some 20 or so years ago. I’ve been using it since its early days on Linux.

EDIT: looks like someone even investigated that: https://babbagefiles.xyz/quake-drop-down-terminal-history/

vundercind · a year ago
Terminal.app, even, is no slouch among terminal programs. Uncommonly-great input latency, for one thing.
labster · a year ago
Move on with your day? How could you move on with your day when someone was wrong? Wrong on the internet!
eriri · a year ago
I looked into the issue tracker and gosh, it's getting so toxic there.

https://gitlab.com/gnachman/iterm2/-/issues/11470

hiatus · a year ago
With gems like this:

> +1 to the people who'd like to donate 50$ for a version which does not send my input anywhere. Okay, previously I never donated the project. But use the iTerm2 for ~10yrs and would like to continue.

So, this person has been using iTerm2 for 10 years without paying, and would only consider donating if this feature is removed?

tedunangst · a year ago
Interesting incentive system being created here. You're going to give me $50 to remove an AI feature if I add one to another project?
dmix · a year ago
> a version which does not send my input anywhere

clearly hyperbole

Even when it's turned on you have to manually engage it.

Rage posters always gloss over details in their rush to tell the world how mad they are

eriri · a year ago
Even more dramatic:

> it is a very unwelcome statement of disagreeable values. Adding OpenAI integration, as optional as it is here, makes it clear that you don't stand against OpenAI and the whole "AI" industry.

Imagine the crime of maintaining a free and open source terminal emulator in one's spare time... how hideous.

phillipcarter · a year ago
Hah! Just another turn of the wheel, this time with AI. Lots of entitled developers out there who sure do have a lot of time on their hands to complain.
mrozbarry · a year ago
I think this issue is actually two issues:

    1. A terminal shouldn't be able to ask some resource on the internet what to type and auto-execute it.
    2. AI fear/fatigue/???
I think point 1 is reasonable to an extent, but it should be taken in context. iTerm2 is a free app and, as far as I can tell, not even remotely required on any Mac, since there is technically a default dumb terminal, which can be customized. The context issue is that, from the video demos I've seen, nothing directly types into your terminal; it's up to the user to review/copy/paste the generated code snippet. The underlying tech has been in iTerm for a while, from the best I can see. Auto-fill also enables things like the 1Password integration, and anyone can open a ChatGPT client and copy/paste shell code from there in the same way the iTerm2 integration works.

I understand point 2: I have never cared for any AI hype, it has near-zero interest for me, and it doesn't affect my work. Almost every editor has some capacity to ask the internet for data and paste it in, from AI or otherwise, and no one is really sounding a major alarm bell about that. You could argue there is a big push for these integrations to train models, but even that requires a key.

lxgr · a year ago
> 1. A terminal shouldn't be able to ask some resource on the internet what to type and auto-execute it.

Every Linux shell can do that, regardless of your terminal emulator, and arguably that's by design:

    curl https://givemesomecoolshellcommands.com | sh
What iTerm can do is essentially just some GUI sugar around that capability.

If you don't like it, just don't do it :)

bloopernova · a year ago
From reading that issue, it sounds like some people are worried about compliance with security policies (whether personal or corporate)

I'm very happy with iTerm2, its features are useful to me, but I can't see myself using the AI chat feature when I have copilot in VS code. I could see a use case for people unfamiliar with certain commands, something like "please sort this output by the first then fourth columns". But if I'm writing a script or small Python utility, then VS code will be where I do it.

For compliance though, the AI integration could be a separate binary that you can access via the command line, although as pointed out in a reply, that's the same as a code path that isn't used. However, it is easier to block a separate binary, so maybe that's the thinking there?

Instead, maybe the people who have an issue with this feature would be happy with an optional setting "assign this keyboard shortcut to the AI binary". Or a feature flag that says "do not access the network under any circumstances".

nulld3v · a year ago
I get the compliance perspective but it feels stupid to bring it up now, especially since iTerm2 has already had integrated network features for a long time.

Agreed on the "do not access the network" feature flag though, every program should have that. Or really it should just be a toggle in the OS on a per-app basis.

iLoveOncall · a year ago
> For compliance though, the AI integration should probably be a separate binary that you can access via the command line. Maybe with an optional setting "assign this keyboard shortcut to the AI binary".

There's no difference at all between a code path that's never called and another binary that's never called.

You are simply wrong for even trying to argue about privacy concerns when it is a feature that is entirely off by default (and also doesn't send anything that you don't enter in the box dedicated to it).

It makes absolutely 0 sense to have any concern about this but not have concerns about the capability of the terminal to perform any other call over the network.

kstrauser · a year ago
Eh. I was the CISO at a HIPAA-covered healthcare company, and I have no problem with the way iTerm handles this. Nothing gets sent from your terminal, other than what you type into the separate AI prompt window. You have to manually enter your ChatGPT key. You have to manually choose to open the AI prompt.

I see this as not substantially different from a programmer having a browser tab open where they could type questions and get answers, just more convenient. If I didn't want my coworkers doing that at all, I'd push out a device policy adding a firewall block for OpenAI's API servers and then not worry about it at all.

oreilles · a year ago
> From reading that issue, it sounds like people are worried about compliance with security policies (whether personal or corporate)

This is incredibly stupid. If they don't trust iTerm to respect their privacy, why were they using it in the first place? For all they know, it could very well have been sharing all their data without telling them from the very beginning. Alas, the tool is open source, so they could just audit it instead of yelling at clouds, but hey.

dcow · a year ago
If you can use vscode in a compliance environment then you can use this new release of iterm2.
derefr · a year ago
> From reading that issue, it sounds like some people are worried about compliance with security policies (whether personal or corporate)

The right thing to do, then, would be for the OS to have a group policy setting like:

"Disable application features that rely on processing documents, data, or application state using remote third-party inference APIs."

...and then for apps to look for it and respect it; and for corporations concerned about this to set it as part of MDM.

Then apps could offer these features as available by default, but also forcibly disabled when relevant.
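
On macOS that could be as simple as a managed preference that apps check before enabling any cloud-AI UI. A rough sketch of the idea (the preference domain and key below are purely hypothetical, not an existing OS or iTerm2 setting):

    # Purely hypothetical domain and key -- not an existing OS or iTerm2 setting.
    # Locally this could be a plain preference; under MDM the same key would be
    # delivered via a configuration profile instead of written by hand.
    defaults write com.example.someapp DisableRemoteInferenceFeatures -bool true

    # The app would check the flag before enabling any remote-inference features:
    defaults read com.example.someapp DisableRemoteInferenceFeatures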

sixhobbits · a year ago
It's hard to believe that some of these comments aren't trolling. Do they also consider the fact that iterm can call out to tools like `wget` and `curl` a privacy risk and slippery slope that might share their data if used wrong?
JasserInicide · a year ago
Complete whataboutism. If I fat-finger bad data, that's expressly my fault. This is a case where I now need to worry about tools I use that never sent my usage data anywhere now sending it somewhere.
mixmastamyk · a year ago
Choosing a network-capable download command is different from an option to send all commands to the cloud for processing. And we know defaults get changed at times, sometimes on purpose (hi Facebook!).

We also have decades of experience and culture around how to use network commands properly, especially for FLOSS tools.

Considering that newbies will be attracted to these cloud tools, the risk of information leakage sounds a lot higher in the second instance.

whimsicalism · a year ago
Nietzsche wrote about this sort of stuff; we're currently in an AI ressentiment period.
octernion · a year ago
i'm sure 95% of the people are just trolling - they can't be that dense about what the change actually is as developers. just silliness.
eriri · a year ago
One of the participants is calling for a "dogpile" on Mastodon and I'm not even joking.
ungreased0675 · a year ago
There’s a perspective I haven’t seen yet that may explain the backlash: I would never integrate OpenAI into a terminal program I was building, because of hallucinations, a lack of trust in OpenAI, and other reasons. I’m not an anti-AI Luddite either; I hopped on the AI train in 2015.

So, when someone else integrates OpenAI where I don’t think it belongs, I feel like I can’t trust their judgment, and therefore can’t trust them to make good product management decisions in the future. Adding questionable features is a red flag indicating the product team isn’t focused or has poor judgment.

Put another way, you don’t put the person that says Taco Bell is quality Mexican food in charge of arranging catering for the company Christmas party. I can’t trust them to not screw it up.

__loam · a year ago
Right, the point is not that the feature is opt in, the point is that you can no longer trust the maintainers to make good decisions.
octernion · a year ago
i think 99% of folks would take an occasional hallucinated ffmpeg option in exchange for never having to look up the man page again. i trust the iterm dev's judgement even more after this since it's an ideal use case.

gpt-4o seems fantastic at generating cli flags and such, not sure what troubles you have run into. i am questioning your judgement of their judgement, hah.

Xelynega · a year ago
The question isn't "is it useful", it's "should it be part of the core of the application".

If this meets the bar for "worth inclusion in the core of the application", I would rather stick to terminal emulators that are spending their time on making the terminal emulator more responsive rather than adding gimmicks that have to be maintained.

tzs · a year ago
Aren't plenty of people already using OpenAI for help with command lines without there being any integration? They copy/paste between the LLM's web interface and their terminal.

All this integration is doing is making it more convenient for those people by removing the need for them to manually relay between the terminal and the LLM.

joshstrange · a year ago
You don’t have to; you can change the API URL to whatever you want, like pointing it at your local Ollama or LMStudio.

Which anyone who didn’t just read this misinformed blog post would know (it was all over the comments on this release yesterday and also in the iTerm2 wiki about this feature).
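
To illustrate what “OpenAI-compatible” means in practice: Ollama serves the same chat-completions API shape on localhost, so a request like this works with no OpenAI account at all (the model name is just an example of something you’ve pulled locally):

    # Ollama's OpenAI-compatible endpoint on its default port; "llama3" is just
    # an example of a model you'd have pulled with `ollama pull llama3`.
    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "tar up a directory, excluding .git"}]
      }'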

zer0tonin · a year ago
I think the feature is dumb, but the backlash is also completely pointless. It literally doesn't get in your way in any manner. It's only extremely hypothetical concerns (read: bullshit) about sending random data to OpenAI.
throwawa14223 · a year ago
I have a fairly strong preference for tools that don't contain LLMs, but you're right: compared to all the other uses of LLMs, this is pointless.

Edit, because I hit submit too soon: George Nachman is a great guy who delivers a cool tool for free and doesn't deserve the pushback.

octernion · a year ago
you are going to hate most software with that attitude over the next few years
delichon · a year ago
Example from earlier today. I wanted to know how to format a date as "06/04/1947", just like that with leading zeros. I can never remember the details of the complex Ruby `strftime` method, so I usually have to spelunk in the docs for the details. Instead I prompted an LLM with "format a Ruby date like '06/04/1947'" and it gave me the correct answer, faster than I could have found the doc page let alone decoded it.

Why the heck not use AI when it's better?
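
For the record, the answer presumably boils down to a one-liner like this (assuming the month-first reading of that date; see the format quibble further down):

    # Assuming "06/04/1947" means June 4th (month first):
    ruby -r date -e 'puts Date.new(1947, 6, 4).strftime("%m/%d/%Y")'
    # => 06/04/1947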

JD557 · a year ago
> Why the heck not use AI when it's better?

I think that, in its current iteration, it is not that easy to know.

I haven't tried GPT-4 (which I've heard is much better), but my experiences with 3.5 have been extremely frustrating and underwhelming. I absolutely hate when it starts making stuff up and I have to fix it the traditional way; it just wastes my time!

I guess this boils down to personal preference, but so far I just prefer a good old Google search.

I was quite happy with copilot auto complete, though. Mostly because of how low friction it was.

adamomada · a year ago
June 4th or April 6th? Don’t use that date format :)
mixmastamyk · a year ago
I usually hate this expression, but it fits like a glove here:

"Oh, sweet summer child..." :-D

joshstrange · a year ago
If only you could read the source to confirm for yourself or if the developer had a proven track record instead of being someone who just popped up on the scene for the first time…. Oh wait, you can look at the source and the developer has a proven track record maintaining iTerm2 for over a decade.

Your condescension is embarrassing as it shows you have no idea what you are talking about in this context.

jedberg · a year ago
A lot of people are angry or don't understand the feature. Am I the only one who actually loves this feature? I'd say 50% of my interactions with GPT are "Make a command line to do X". Right now I have to copy/paste back and forth to the GPT and tell it what happened.

Having all of that built into the terminal is fantastic! The only feature I would ask for is obfuscation -- when I'm doing it for work, I am always careful that I change my question such that I get the right answer without revealing any company information.

Of course a solution to this is enabling someone to point at another model that they control, either locally or hosted privately.

spockz · a year ago
Why does the emulator have to do this? Can’t it be a plug-in for bash/zsh/whatnot? Those already support tab completion. It could even be a CLI:

prompt “command that you want it to generate”. Then inspect it and run it. If you are feeling very lucky, type “$(prompt “command you want to run”)” and poof, it executes. Why does this need to be in one of the most privileged apps on my machine?
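
A minimal sketch of that idea as a shell function, assuming an OpenAI-style chat-completions endpoint, an API key in the environment, and jq installed (the model name is just an example, and any OpenAI-compatible URL would do):

    # Minimal sketch of the `prompt` idea above. Assumes OPENAI_API_KEY is set
    # and jq is installed; swap the URL for any OpenAI-compatible endpoint.
    prompt() {
      curl -s https://api.openai.com/v1/chat/completions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d "$(jq -n --arg q "$*" '{model: "gpt-4o-mini", messages: [{role: "user", content: $q}]}')" |
        jq -r '.choices[0].message.content'
    }

    # Inspect the suggestion before running it:
    prompt "shell command to list the 10 largest files under the current directory"
    # ...or, if you are feeling very lucky:
    # eval "$(prompt "command you want to run")"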

tzs · a year ago
> Why does the emulator have to do this? Can’t it be a plug-in for bash/zsh/whatnot?

I've no interest in using it because I'm a cheap bastard and am not going to pay $20/month for a key, but if I were going to use it I'd want it in the terminal emulator. If it were a shell plugin I'd have to install it in my shell on my Mac, the shells on my two Raspberry Pis, the shell on my Amazon Lightsail instance, and the shells on the half-dozen or so servers at work that I regularly ssh to from home. Oh, and also the shells in the Linux VMs I run my work test environment in.

jedberg · a year ago
It certainly could be built into the shell. But the people who did it first make a terminal. No reason it couldn't be ported to the shell though.
llimllib · a year ago
simonw's llm tool does this with a plugin: https://github.com/simonw/llm-cmd

I'm with the others that want it in a command, not embedded in my terminal

liveoneggs · a year ago
Isn't it much much better suited to be an editor integration you can use inside of fc?
mixmastamyk · a year ago
> change my question such that I get the right answer without revealing any company information.

Well goodbye to that part.

jedberg · a year ago
Hence my request for obfuscation or using a privately hosted model. :)
yareal · a year ago
Would you like iterm to provide Google search results? Should iterm have discord chats or irc channels in it?

It's not what I want iterm to do, I want iterm to be a great terminal.

jedberg · a year ago
If there were a way for me to type a question and get just the command line back by searching on Google, yeah, I'd like that too. But that's not how Google works. ChatGPT can give me exactly what I want.
cloverich · a year ago
No, but I might like iterm to have integrated support for various commands, shortcuts, aliases, etc. This is, IMHO, a very natural extension of that. I would prefer even less friction, as for any command I don't know by heart, asking ChatGPT in natural language is the most efficient way to get the commands and arguments I want, usually the very first time. Again IMHO, in the near term future, "great terminal" and "first class LLM support" will go hand in hand for most (but of course, not all) users.
h0l0cube · a year ago
And that it still is, with a new lightweight feature that needn’t bother anyone unless they wish to enable it.
joshstrange · a year ago
Do you use tmux? If not are you appalled that iTerm2 has support for that? What about the other billion config/settings/features of iTerm2 that you choose not to use?

iTerm2 is a great terminal. It was great before the update and it’s still great after the update.

whimsicalism · a year ago
if google results were as effective at generating exactly the command i want, yeah i'd want them to include it as an optional feature
infecto · a year ago
Don't assign an api key to it then?
throwup238 · a year ago
> It's everywhere, and it's exhausting. Part of my job requires me to keep up with the latest advances with AI and I'm unable to. Everything happens so much.

Good god yes. I'm bullish on AI, have been playing with it a lot over the past year, and am now making a commercial AI-powered desktop app but this is exactly how I feel. I went from feeling the most excited I've been since I started programming as a kid to feeling like I'm drowning.

Today, between a fifth and a quarter of the articles on the front page of HN have been about AI of some sort. Not necessarily LLMs, but between the Microsoft AI announcements, a post on RAG, another on neural networks, and CLIP representations, it feels like a non-stop onslaught of AI news that I try to keep up with, and we're more than a year into the hype.

amanzi · a year ago
This is the bit that resonated with me the most: "One of the main bits of feedback I've seen from people online is that iTerm2 having AI involved at all is enough to get them to want to switch away to another terminal emulator. They've cited the reason as exhaustion due to overexposure to AI hype."

I have paid accounts with both OpenAI and Anthropic, so I'm not anti-AI in general, and I understand that it is opt-in, and requires your own API key to get it working. But it's the hype exhaustion that gets me. I really don't want AI in my terminal, just one keystroke away from sending some slop to a remote server. And just knowing that my terminal is capable of that is enough to turn me away.

wrs · a year ago
It’s many keystrokes away. You have to type the thing you want sent to OpenAI. Manually.

It appears 95% of the people complaining about this feature don’t actually know what it is.

amanzi · a year ago
I uninstalled iTerm2 before trying out the feature, but have watched the linked demo video and read the docs, and my point still stands - the generated code is a single key-press away from being sent to a remote server. It's like copying a terminal command from StackOverflow and pasting it directly into your terminal and hoping it works as described.
hombre_fatal · a year ago
Yeah, 95% of the people complaining never even opened up the "Engage AI" box much less the Codecierge panel.