Idk how I'm supposed to talk about the last 2 years in interviews, and every move up and out. I would have left earlier, but it's challenging.
I don't know if companies are just in a "wait and see" stance to see the effect of AI coding agents, or if it's the sign of a wider slowdown.
100% remote is also a tough ask. I've noticed job roles are increasingly listed as 2-3 days in the office as companies awkwardly transition back to the office.
The company I work for is a medium-sized business in residential and commercial construction. For example, a recent React Native mobile dev position my company posted got about 300 applications in the first hour, and about 500 total in the first week on Indeed. Of those applications, 90% didn’t meet even most of the requirements for the position. The job description says that we don’t sponsor H-1B visas (because it’s stupidly expensive now), yet of the 10% that somewhat met the minimum qualifications, all but 1 required sponsorship. And although this was listed as a hybrid role, only 20 people applied from the region where the office is.
We already know from previous roles that a huge percentage of people whose resumes say they have the required skills won’t come close to making it through the interview process.
As a company we like AI/ML tools; we encourage our staff to learn them and use them where appropriate, and we want to invest in everyone’s skills with new tools. At the same time, we try not to use AI where a human connection is important (hiring, sales, etc.). Even so, we’ve had to resort to AI for dealing with the massive influx of low-quality job applications, and it sucks.
Basically, anyone who goes above and beyond at this point automatically gets at least an interview.
I do understand why so many people are just applying to every job that shows up; it makes sense. But it really does make the prospect of finding those few great people very difficult.
We aren’t a Ruby/Rails shop, otherwise I’d reach out to OP.
I'm not denying that what you describe happens, but in this case, ignoring the warning signs, letting the issue crash into a wall, and then complaining online about it doesn't help anyone.
There are a couple of interpretations here.
1. The sales rep really thought they would be able to retain good pricing for them and it fell through, and at the last minute hackclub was blindsided by their inability to retain the pricing.
2. The sales rep thought that hackclub was likely to jump ship if they had time to plan based on the new pricing, and lied to them about the possibility of retaining pricing. And thought that by doing so they could force at least one year of higher cost.
3. Hack Club is misrepresenting their communications with Slack to drum up public approval.
My guess is that option 1 is the most likely: the sales rep's optimism ended up being a net negative, and, human nature being what it is, Hack Club assumed things would work out. Everyone is already busy, so why borrow trouble?
As for complaining online, sadly it seems that bad press is the only lever most people have as a forcing factor for companies these days. For a long time I honestly kept a Twitter account just so I could complain about companies in public to get them to do the right thing; so unfortunately, complaining online does actually help.
How long have they had the bill mentioned in the top comment on this post? At the very least it's 3 weeks, and the comment suggests it is months.
Both times I’ve paid the new price for 1 year and cancelled. Both times our sales rep was surprised the next year when we didn’t renew.
Some topics I end up needing to know a lot about despite a lack of interest (looking at you, UEFI), and so I learn until I can solve all the problems I’m having. With others, I quickly surpass my needs and then continue out of interest for a while (networking, routing, etc.).
Some background: I'm a "working manager" in that I have some IC responsibilities as well as my management duties, and I'm pretty good at written communication of requirements and expectations. I've also spent a number of years reading more code than I write, and have a pretty high tolerance for code review at this point. Finally, I'm comfortable with the shift from my value being what I create to what I help others create.
TLDR: Agentic coding is working very well for me, and allows me to automate things I would have never spent the time on before, and to build things that the team doesn't really have time to build.
Personally, I started testing the waters seriously with agentic coding last June, and it took probably 1-2 months of explicitly only using it with the goal of figuring out how to use it well. Over that time, I went from a high success rate on simple tasks but a mid-to-low success rate on complex tasks, to generally a high success rate overall. That said, my process evolved a LOT. I went from simple prompts that lacked context, to large prompts with a ton of context where I was trying to one-shot the results, to simple prompts plus a lot of questions and answers, used to build a prompt that builds a plan to execute on.
My current process is basically: state a goal or a current problem, and ask for questions to clarify the requirements and the goal. Working through those questions and answers often makes me examine my assumptions and tweak my overall goal. Eventually I have enough clarity to have the agent generate a prompt to build a plan.
Clear out the context, feed in that prompt, and have it ask additional questions. If I have a strong feeling about the direction and what I would personally build, we proceed. If there's still some uncertainty, that usually means I don't understand the space well enough to get a good plan, so I have it build anyway, with the intention of learning through building and throwing the result away once I have more clarity.
Once we have a plan, I have the agent break it down into prioritized user stories with individual tasks, tests, and implementation details. I read through those user stories to get a good idea of how I would build it, so I have a solid mental model for my expectations.
Clear out the context and have the agent read in the user stories and start implementing. Early on in the implementation, I'll read 100% of the code generated to understand the foundation it's building. I'll often learn a few things, tweak the user stories and implementation plans, delete the generated code, and try again. Once I have a solid foundation, I stop reading all the code, skim the boilerplate, and focus only on the business rules / high-complexity code.
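To make the clear-context phases concrete, here's a purely hypothetical sketch in Python. `run_agent` is a stand-in for invoking whatever coding agent you use in a fresh session; none of these function names come from a real agent's API.

```python
# Hypothetical sketch of the workflow above. Each phase calls the agent
# in a FRESH context, so a screwed-up context never leaks forward.

def run_agent(prompt: str) -> str:
    """Stand-in: send `prompt` to a coding agent in a brand-new session."""
    return f"[agent output for: {prompt.splitlines()[0]}]"

def build_plan_prompt(goal: str, qa: list[str]) -> str:
    """Phase 1: goal plus clarifying Q&A -> a prompt that will build the plan."""
    return run_agent(
        f"Goal: {goal}\n"
        + "\n".join(qa)
        + "\nWrite a prompt I can paste into a fresh session to build a plan."
    )

def plan_to_stories(plan_prompt: str) -> str:
    """Phase 2: fresh context; plan prompt -> prioritized user stories."""
    return run_agent(
        plan_prompt
        + "\nBreak this into prioritized user stories with tasks, tests, "
        "and implementation details."
    )

def implement(stories: str) -> str:
    """Phase 3: fresh context again; agent reads the stories and implements."""
    return run_agent("Read these user stories and implement them:\n" + stories)

# Each step's output is the only thing carried into the next step.
plan_prompt = build_plan_prompt("add CSV export", ["Q: which screens? A: reports"])
stories = plan_to_stories(plan_prompt)
result = implement(stories)
```

The design point is simply that the handoff between phases is an artifact (a prompt, a plan, a set of stories), not an accumulated conversation.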
I focus heavily on strong barriers between modules, and keeping things as stupidly simple as I can get away with. This helps the models produce good results because it requires less context.
Different models prompt differently. While the Opus/Sonnet family of models drives me nuts with its "personality", I'm generally better at getting good results out of them. With the GPT series of models, I like the personality more, but I kind of suck at getting good results out of them at this point. It takes some time to develop good intuition about how to prompt different models well. Some require more steering as to which files/directories to look in; others are great at discovering context on their own.
If the agent is going down a wrong path, it's usually better to clear the context and reset than to try to steer your way out of a screwed-up context.
Get comfortable throwing away code; you'll get better results if you don't think of the generated code as precious.