pixodaros · a month ago
You didn't have to punish athletes to make them wear Nike and Adidas shoes, because they were obviously better than plain sneakers. You didn't have to punish graphic artists to make them use tablets, because they are so convenient for digital art. But a lot of bosses are convinced that if their staff don't find these tools useful for their tasks, it's the line workers who are wrong.
daniel_iversen · a month ago
Sure. There are, however, probably also plenty of examples where the opposite is true (people being hesitant to adopt newer, better technologies): not everyone wanting to use computers early on ("the old lady in accounting", etc.), people not trusting new medications, people being slow to adopt tractors, people being afraid of electricity (yes!), and so on. Change is hard, and people generally don't really want to change. It's even harder if you fear (as ~25% of people do, depending on where you are in the world) that AI can take your job, or a large part of it, in the future.
zinodaur · a month ago
I use AI and it makes me a lot more productive. I have coworkers who don't use AI and are still productive and valued. I also have coworkers who use AI and are useless. Using AI adoption as a criterion for layoffs seems dumb, unless you have no other way to measure productivity.
pixodaros · a month ago
If something is really clearly better, people come around. Some never will, but their children and apprentices adopt the new ways. A whole community of practice experimenting is very powerful. Not everyone moves at once, but people on this site know how often the cool new thing turns out to be a time bomb.
iseletsk · a month ago
People wouldn't keep using old shoes, and I am old enough to remember graphic artists who wouldn't use computers. It takes time. At some point, it will be a no-brainer. Yet, it will not be simply because method A is so much better than method B. It will be because people using method B change, retire, or are fired.
general1465 · a month ago
On the other hand, if you have ever worked in a corporate environment, you may have noticed that some people absolutely refuse to learn how to use Excel. Even simple column filters are beyond the capacity of most Excel users.

For some reason, big companies often tolerate people being horribly inefficient at their jobs. Maybe that is starting to change?

zerosizedweasle · a month ago
If people found this useful for putting out "good" work instead of slop, they would use it. I promise you that it's the employees who are right: the output is the same AI slop we see everywhere. If you want to turn your company into an AI slop farm, that is questionable logic.
zerosizedweasle · a month ago
They've totally bought into the most extreme AI hype if this is happening. Altman convinced them AI is a PhD in your pocket and their lazy employees are costing them money by not using it.
GPerson · a month ago
The more I interact with these tools, the less I'm afraid they will make life meaningless. (I can't speak to art generation tools; those still depress me.) No matter what you're making, there are still a lot of hard parts even with the best versions of these tools. I doubt a good software developer can be totally replaced unless these get way better.

The best use cases are for code that's clearly not an end product. You can just try way more ideas and get a sense of which are likely to pan out. That is tremendously valuable. Though when I start reading the code they produce, I quickly find many ways I would have written it differently.

bravetraveler · a month ago
Ultimatum? Fire away. Don't threaten me with a better time.
WillAdams · a month ago
It would be easier to use AI at work if it would work.

I have a prompt which opens scans of checks placed on a matching invoice (EDIT: note that the account line is covered when the scan is made, so as to preclude any Personally Identifiable Information being in the scan) and writes a one-line move command renaming the file to include the amount and date of the check, the invoice ID#, and various other information. That lets the file be used to track that the check was entered/deposited. Then, by copying a folder full of files as their filepath, the text can be pasted into Notepad, find-replaced to convert the filenames into tab-separated text, and pasted into Excel to total up as a check against the adding machine tape (and against overall deposits).
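
(For reference, the Notepad find-and-replace step amounts to converting a folder listing into tab-separated rows. A minimal Python sketch of that conversion follows, assuming a hypothetical filename pattern like INV12345_2024-05-01_150.00.pdf rather than the exact naming scheme used here:)

    from pathlib import Path

    def scans_to_tsv(folder: str) -> str:
        """Turn renamed check-scan filenames into tab-separated rows for Excel."""
        rows = []
        for f in sorted(Path(folder).glob("*.pdf")):
            # Hypothetical naming scheme: INV12345_2024-05-01_150.00.pdf
            invoice_id, check_date, amount = f.stem.split("_")
            rows.append("\t".join((invoice_id, check_date, amount)))
        return "\n".join(rows)

    if __name__ == "__main__":
        # Paste the printed output into Excel and total the amount column.
        print(scans_to_tsv(r"C:\scans\deposits"))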

On Monday, it worked to drag multiple files into Co-Pilot and run the prompt. On Tuesday, Co-Pilot was updated so that processing multiple files became the bailiwick of "Co-Pilot Pages Mode": after launching, it's necessary to get into that mode, which requires a prompt and then pressing a button, and only 20 files at a time can be processed. Even though the prompt removes the files after processing, it only allows running a couple of batches, so for reliability I've found it necessary to quit after each batch and restart. Even that only works five or six times; after that, Co-Pilot stops allowing files to upload and generates an error when one tries, until it resets the next day and a few more can be processed.

I've been trying various LLM front-ends, but Jan.ai only has this on their roadmap for v0.8, and the other two I tried didn't pan out. Anyone have an LLM front-end which will work for processing multiple files?

do_not_redeem · a month ago
You're sending people's bank account numbers to Microsoft?
WillAdams · a month ago
No, the checks, when scanned, have a pen placed over the account line so that there is no personally identifiable information (I should have mentioned that).
nlh · a month ago
Haven't RTFA (paywall) but an anecdote:

I know a startup founder whose company is going through a bit of a struggle - they hired too many engineers, they haven't gotten product-market fit yet, and they are down to <1 year of runway.

The founder needed to do a layoff (which sucks in every dimension) and made the decision to go all-in on AI-assisted coding. He basically said "if you're not willing to go along, we're going to have to let you go." Many engineers refused and left, and the ones that stayed are committed to giving it a shot with Claude, Codex, etc.

Their runway is now doubled (2 years), they've got a smaller team, and they're going to see if they can throw enough experiments at the wall over the next 18 months to find product-market fit.

If they fail, it's going to be another "bad CEO thought AI could fix his company's problems" story.

But if they succeed....

(Curious what you all would have done in this situation btw...!)

Esophagus4 · a month ago
For the people who refused, why?

Not meaning to sound accusatory, just asking. Was it the tools provided that they didn't like? Ideological reasons not to use AI? Was the CEO being too prescriptive with their day-to-day?

I guess I find it hard to imagine why someone would dig in so much on this issue that they’d leave a job because of it, but 1) I don’t know the specifics of that situation and 2) I like using AI tooling at work for stuff.

nlh · a month ago
You ask a great question. My sense is that the engineers fell into three camps (as they do here on HN as well):

1) I don't really like these AI tools; I write better code anyway and they just slow me down.

2) I like these tools; they make me 10% faster, but they're more like spell check / autocomplete for me than life-changing. I don't want to go all in on agentic coding, etc., and I still want to hand-write everything. And:

3) I am no longer writing code; I am using AI tools (often in parallel) to write it, and I am acting like an engineering manager / PM instead of an IC.

For better or for worse (and there is much to debate about this), I think he wanted just the (3) folks and a handful of (2) folks to try and salvage things; otherwise it wasn't worth the burn :(

Ekaros · a month ago
Personally, I might choose to leave too. I just don't feel like taking responsibility for something iterated with AI, something I will take the blame for when it goes wrong.

Especially so after I have seen someone try to use AI after I had provided simple and clear manual steps, instead trying to do something different in a very ill-fitting scenario, where the AI also really did not understand that the solution would not even have worked.

Deleted Comment

luxuryballs · a month ago
I wish, where I’m at we had to agree not to use it without “disclosure”, not even sure what that means. Oh but also we agree to do code reviews, and since we would review the code regardless of how it was written I don’t know what the concern is about… notably there was never anything written about not using code generation tools which have existed for many decades… anyways I just use AI anyways but it would of course be better if work would fund it!