cushychicken · a year ago
This is a paid ad by Upwork masquerading as an article.

It’s based on a study that Upwork sponsored, and it cites a bunch of conclusions that make freelancers seem attractive to big orgs:

> C-suite executives, bringing in freelance talent into their workforce say freelancers are meeting productivity demands and often exceeding them, outpacing full-time employees. The level of well-being and engagement has improved. And they have doubled the following outcomes for their business: organizational agility (45%), quality of work being produced (40%), innovation (39%), scalability (39%), revenue and bottom line (36%) and efficiency (34%). The findings also show that 80% of leaders who leverage freelance talent say it is essential to their business, and 38% of leaders who don’t already leverage this talent pool intend to start in the coming year.

This is a stealth ad meant to engender FOMO in business decision makers: “Your existing employees won’t get results quickly because AI is overloading them - instead, hire a clever freelancer who has figured out how to use AI to their advantage, and reap the rewards!”

cs702 · a year ago
Came here to say the same thing, after trying to find a link to the original source, i.e., the purported "study" mentioned in the headline. I was unable to find it.[a]

This doesn't deserve to be on HN.

---

[a] I mean a proper study with an explanation of methodology, proper statistics, and sources of data. I could only find a press release (https://investors.upwork.com/news-releases/news-release-deta...) and a short blog post with limited information (https://www.upwork.com/research/ai-enhanced-work-models).

_sword · a year ago
That’s the whole point of Forbes today
onion2k · a year ago
Turns out I only needed one out of five 'whys' to find the root cause:

> The majority of global C-suite leaders (81%) acknowledge they have increased demands on their workers in the past year.

If your C-suite have bought into the dream that AI magically makes everyone more productive, but haven't invested the time or cash to roll it out in well-understood, provably useful ways, then employees are going to find themselves fighting to get the 'expected' (read 'magical dreams') productivity gains, and they'll waste a lot of time trying to apply AI to problems where it doesn't really fit as a solution.

None of that says anything about AI and its usefulness in the right context. It's all just people who don't understand a problem or the right solution jumping in and saying "I know best because I'm the highest paid!"

noobermin · a year ago
One would expect this is at least somewhat related to the efficacy of LLMs or genAI in general at solving actual problems, but this is still a horrifying stat, whatever is to blame. 81% is beyond a bubble-level scale of investment across the economy in vaporware.
apwell23 · a year ago
> C-suite have bought into the dream that AI magically makes everyone more productive

I don't think they are just that naive. Modern stock markets expect CEOs to collude with them in pumping up the stock price by any means necessary (including outright lying).

"LLMs are going to lead to AGI, and its just around the corner". Everyone knows this is bullshit but we have CEOs of mega corps trafficking in these lies openly because markets expect them to and keep the gravy train rolling.

fsndz · a year ago
This is what happens when expectations of productivity gains thanks to AI are not realistically set:

"Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that, 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains." (cf. Forbes article below)

The reality: AI models (generative or not) are useful in specific cases, not all cases. Failing to acknowledge that and to strategise accordingly only leads to short-term success and long-term pain. For example, use cases that rely on LLMs as reasoning engines are doomed to fail given the current state of the art. If you want to know which use cases make sense, check out my articles on Medium (DMs also open):

https://medium.com/thoughts-on-machine-learning/where-genera...

https://medium.com/thoughts-on-machine-learning/chatgpt-and-...

everdrive · a year ago
We keep getting assaulted by Copilot, which sucks in the first place, but also doesn't really work correctly given our non-standard Microsoft environment. One of my analysts tries to use GPT to solve problems, but he doesn't understand the problems he's solving, so he can't properly evaluate the solutions GPT is spitting out. Honestly, I hate AI so much, I hate the hype, I hate how companies are pushing it. For most cases it's an enormous waste of energy, and companies are much more afraid of missing out on revenue than they are of wasting or misusing technology.
nottorp · a year ago
It's useful if you can't be arsed to type trivial stuff in, but you have to know how to do the trivial stuff in the first place because you have to be ready to correct it.

So it wouldn't help a beginner to learn. Kinda like modern StackOverflow.

Mind, I've only used the public LLMs, not copilot's code completion. That one I've only tried once and it has the potential to be extremely annoying.

sgarland · a year ago
> It's useful if you can't be arsed to type trivial stuff in, but you have to know how to do the trivial stuff in the first place because you have to be ready to correct it.

This. The only exception I’ve found to this rule is shell one-liners. I’m not sure why they’re so good at them; maybe the terse nature helps? However, I’ve also never had it do anything that I couldn’t figure out in awk on my own.
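For example, a made-up but typical request ("sum the third column of a CSV", with data.csv standing in for whatever file you have) reliably comes back as something like

  awk -F',' '{sum += $3} END {print sum}' data.csv

which is exactly the kind of thing I could write myself, just slower.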

poikroequ · a year ago
Thankfully my company isn't trying to inject AI into everything, and we're the better for it. I do use chatgpt a little in my job, but only a little. This generation of LLMs are simply far too unreliable to be depended upon for anything serious. AI is not going to make you more productive if you need to double check everything it outputs.

IntelliJ recently introduced a feature that uses AI to complete a line of code. Almost every time, it produced incorrect code: it would generate the line and red squiggly underlines would immediately highlight all the errors. It wasted my time, and I'm more productive with that feature turned off.

No doubt AI will continue to improve, but the current state of the art simply isn't good enough to make most of us any more productive. Often the opposite.

I may be using the term AI too broadly here, but hopefully you understand I'm referring to LLM chatbots like chatgpt, and related technologies like copilot.

williamcotton · a year ago
> This generation of LLMs are simply far too unreliable to be depended upon for anything serious.

I have managed to learn ways to make LLMs very productive. Most of this project was written by Claude, with me acting as a very high-level architect.

https://github.com/williamcotton/guish

It benefited from using a Claude Project and keeping the source up to date. It is also a good practice to bail on a thread quickly if it is being unhelpful. And the biggest tip is to be an overly pedantic technical communicator.

balazspeczeli · a year ago
This article reads like a covert ad for Upwork, a platform for hiring freelancers.

> C-suite executives, bringing in freelance talent into their workforce say freelancers are meeting productivity demands and often exceeding them, outpacing full-time employees.

> a fundamental shift in how we organize talent and work

> leveraging alternative talent pools

= "You should hire freelancers instead of full-time employees."

> outdated work models

I guess they're trying to project the idea that full-time employment is outdated.

throwthrowuknow · a year ago
> Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains.

Well, there’s the problem. This is just like the 80s and early 90s, when execs decided to drop computers into the workflow of their employees and expected instant improvements.

kragen · a year ago
computerized! as seen on tv! new, enhanced with uranium! radium brand stockings really make your legs shine!
james-bcn · a year ago
Very poor article. It doesn't link to the study, and it doesn't give details about what type of workers the study examined.
n4r9 · a year ago
It looks like it tries and fails to link to the study. I did a quick google and found this, which is what I think it's trying to link to:

https://www.upwork.com/research/ai-enhanced-work-models

gortok · a year ago
This is not a study. This is an executive summary of a study.

This sort of stuff rankles me. Without the numbers, questions, and methodology there’s no way to ascertain what errors the folks who created the study committed, if any.

safety1st · a year ago
This part resonated for me

> To add insult to injury, nearly half (47%) of employees using AI say they don’t know how to achieve the expected productivity gains their employers expect, and 40% feel their company is asking too much of them when it comes to AI.

I routinely talk to clients and partners where the business decision makers are just utterly delusional about what AI will produce for them. They genuinely seem to think the age of employing people is ending which I guess isn't a shock since that's what Sam Altman and the media have been telling them.

Meanwhile internally we're just puttering along using Copilot and it's definitely a force that can be used for great good or great ill. I can say it has... Further reduced my appetite for hiring people to do programming tasks that should be automated out of existence anyway? That seems like a fair assessment. It's somewhere between a tool and a toy for helping complete the real work.

Edit: oh yeah, and sooooo many tech products out there right now burning dev time on AI features that aren't really useful.

tbrake · a year ago
> They genuinely seem to think the age of employing people is ending which I guess isn't a shock since that's what Sam Altman and the media have been telling them.

I wonder if they have good answers as to who will buy their products after all the jobs are gone. Reminds me of https://quoteinvestigator.com/2011/11/16/robots-buy-cars/ . An anecdote 70 years old at this point but seemingly evergreen in its applicability.

whoknowsidont · a year ago
>I routinely talk to clients and partners where the business decision makers are just utterly delusional

You can just stop there; the problem is not necessarily with AI but with the kind of people who have power.