So when those times have occurred I've (we've, more accurately) adopted what I refer to as the "deer in the headlights" response to just about anything non-trivial: "Hoo boy, that could be a doozy. I think someone on the team needs to take an hour or so and figure out what this is really going to take." Then you'll get asked to "ballpark it," because that's what managers do, and you give them a number that makes them rise up in their chair, and yes, that is the number they remember. Then you do your hour of due diligence, try your best never to give any number other than the ballpark, deliver "ahead of time," and look good.
Now, I've had good managers who totally didn't need this strategy, and I loved 'em to death. But for the other numbnuts who can't be bothered to learn their career skills, they get the whites of my eyes.
Also, it just made meetings a lot more fun.
Second, always pad your estimates. If you've been in the industry longer than 6 months, you already know how "off" your estimates can be. Take how long the work actually took, divide it by how long you estimated it would take, and that's your multiplier.
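That arithmetic is simple enough to sketch. A minimal example (the function names here are mine, purely illustrative):

```python
def padding_multiplier(actual_days: float, estimated_days: float) -> float:
    """Ratio of how long the work really took to what was estimated.
    Track this across a few past tasks to learn your personal bias."""
    return actual_days / estimated_days

def padded_estimate(raw_estimate_days: float, multiplier: float) -> float:
    """Apply the learned multiplier to a new raw estimate before
    reporting it to anyone."""
    return raw_estimate_days * multiplier

# A task estimated at 5 days that actually took 8 gives a 1.6x multiplier,
# so a new 10-day gut estimate should be reported as 16 days.
m = padding_multiplier(8, 5)
print(padded_estimate(10, m))
```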
> You can have it craft an email, or to review your email, but I wouldn't trust an LLM with anything mission-critical
My point is that an entire world lies between these two extremes.
The theory behind these models so aggressively lags the engineering that I suspect there are many major improvements to be found just by understanding a bit more about what these models are really doing and making re-designs based on that.
I highly encourage anyone seriously interested in LLMs to start spending more time in the open model space, where you can really take a look inside and play around with the internals. Even if you don't have the resources for model training, I personally feel it's worth understanding sampling and other potential tweaks to the model (there's lots of neat work on uncertainty estimation, manipulating the initial embeddings assigned to prompts, intelligent backtracking, etc.).
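To make the sampling point concrete, here is a minimal sketch of two of the most common tweaks, temperature scaling and top-k filtering, applied to a raw logit vector. This is a toy illustration in pure Python, not any particular library's API, and the function name is my own:

```python
import math
import random

def sample_with_temperature_topk(logits, temperature=0.8, top_k=3, rng=None):
    """Sample one token index from raw logits using top-k filtering
    plus temperature scaling, then a softmax over the survivors."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    # Keep only the k highest-scoring candidate tokens.
    candidates = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:top_k]
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = [score / temperature for _, score in candidates]
    # Numerically stable softmax weights (choices() normalizes them).
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    indices = [i for i, _ in candidates]
    return rng.choices(indices, weights=weights, k=1)[0]
```

With `top_k=1` this degenerates to greedy decoding (always the argmax); raising `top_k` and `temperature` trades determinism for diversity, which is exactly the kind of knob the open model space lets you experiment with directly.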
And from a practical side, I've started to realize that many people have been holding off on building things while waiting for "that next big update", but there are so many small, annoying tasks that can be easily automated already.
Salary is 140-155k USD.
For reference, here's levels.fyi's breakdown of the New York city area: https://www.levels.fyi/t/software-engineer/locations/new-yor...
Median total comp is 185k.
It seems like NYT's total comp is slightly above that mark based on reported salaries: https://www.levels.fyi/companies/the-new-york-times-company/...
A senior role in NYC for 155K (plus bonus, which they do offer) is nothing when you factor in the cost of living.
One thing that still suffers is AI autocomplete. While I've tried Zed's own solution and Supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).
I am glad to hear that Zed got a round of funding: https://zed.dev/blog/sequoia-backs-zed This will go a long way toward creating real competition for Cursor in the form of a quality IDE not built on VSCode.