Readit News
MattGaiser commented on Bank forced to rehire workers after lying about chatbot productivity, union says   arstechnica.com/tech-poli... · Posted by u/ndsipa_pomu
taylodl · 2 days ago
How many times has a chatbot successfully taken care of a customer support problem you had? I have had success, but the success rate is less than 5%. Maybe even way less than 5%.

Companies need to stop looking at customer support as an expense and start treating it as an opportunity to build trust and strengthen the business relationship. There's a saying that you shouldn't assess someone when everything is going well for them - the true measure of a person is what they do when things are not going well. It's the same for companies. When your customers are experiencing problems, that's the time to shine! It's not a problem, it's an opportunity.

MattGaiser · 2 days ago
85%? I suppose most of the time I’m just seeking very specific information, so a RAG approach where it retrieves the relevant document works fine for me.
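A minimal sketch of that retrieval pattern, assuming a keyword-overlap scorer and invented help-center snippets (real RAG pipelines use embeddings, but the shape is the same):

```python
# Sketch of the retrieval step in a RAG-style support bot.
# The docs, the overlap scoring, and the prompt template are all
# illustrative assumptions, not any vendor's actual pipeline.

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?!'\"").lower() for w in text.split()}

def retrieve(question: str, docs: dict[str, str]) -> str:
    """Return the document with the largest word overlap with the question."""
    q = tokenize(question)
    return max(docs.values(), key=lambda d: len(q & tokenize(d)))

docs = {  # hypothetical help-center snippets
    "refunds": "A refund is sent back to the original card within 5 business days.",
    "password": "Reset your password from the sign-in page via 'Forgot password'.",
}

question = "How long does a refund take?"
context = retrieve(question, docs)
# The retrieved document becomes grounding context for the model:
prompt = f"Answer using only this document:\n{context}\n\nQuestion: {question}"
print(prompt)
```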
MattGaiser commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
rs186 · 11 days ago
3X if not 10X if you are starting a new project with Next.js, React, and Tailwind CSS for full-stack website development, solving an everyday problem. Yeah, I just witnessed that yesterday when creating a toy project.

For my company's codebase, where we use internal tools and proprietary technology, solving a problem that does not exist outside the specific domain, on a codebase of over 1,000 files? No way. Even locating the correct file to edit is non-trivial for a new (human) developer.

MattGaiser · 11 days ago
Yeah, anecdotally it is heavily dependent on:

1. Using a common tech stack. It is not as good at Vue as it is at React.

2. Using it in a standard way. To get AI to really work well, I have had to change my typical naming conventions (or specify them in detail in the instructions, along the lines of the sketch below).
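To make that concrete, a hypothetical excerpt of the kind of instructions meant here (the `.github/copilot-instructions.md` path is GitHub Copilot's repository-instructions convention; the rules themselves are invented examples):

```markdown
<!-- hypothetical excerpt from .github/copilot-instructions.md -->
## Naming conventions
- React components: PascalCase file names, one component per file (UserCard.tsx).
- Hooks: `use` prefix (useCartTotals), colocated with their feature folder.
- API helpers: fetchX / createX / updateX under src/api/.
- Never abbreviate domain terms: `subscription`, not `sub`.
```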

MattGaiser commented on Goodbye, Six-Figure Tech Jobs. Young Coders Seek Work at Fast-Food Joints   nytimes.com/2025/08/10/te... · Posted by u/Physkal
tom_m · 13 days ago
I don't know - start with a lower salary. I did. OK, that was nearly 20 years ago, but it was in Manhattan. It wasn't cheap. Stop trying to work at stupid FAANG companies. I can't understand why anyone would want to work at those places. You learn nothing because there are too many people. On the other side of the fence, the pressure-cooker startups adopting those insane 12-hour-day, 6-day work weeks are also bad.

So I get it, there's not much left...but expecting six figures for your first job is crazy.

MattGaiser · 13 days ago
Is this even possible? If AI is replacing junior devs, they can't work for little enough to compete when the subscription is $40 a month.
MattGaiser commented on Goodbye, Six-Figure Tech Jobs. Young Coders Seek Work at Fast-Food Joints   nytimes.com/2025/08/10/te... · Posted by u/Physkal
bicepjai · 13 days ago
Exploring the counterpoint: in this era of LLMs, how do you assess code quality?
MattGaiser · 13 days ago
I haven't heard anyone claim LLMs are good at architecture yet.
MattGaiser commented on We're Losing Our Love of Learning and AI Is to Blame   mndaily.com/294949/opinio... · Posted by u/jruohonen
MattGaiser · 14 days ago
> Most importantly, we need to remember why we came to college in the first place.

I’d be surprised to find learning and intellectual curiosity a priority for more than 25% of freshmen at any point since 2008.

When I started university a decade ago, it was all about resume building. We were there to ensure our entry to the upper middle class.

MattGaiser commented on The surprise deprecation of GPT-4o for ChatGPT consumers   simonwillison.net/2025/Au... · Posted by u/tosh
TimTheTinker · 15 days ago
Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment with their users.

The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing:

(a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)

(b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

(c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)

I'm sure any of us can mentally enumerate a lot of the resulting negative effects. Like social media, there's a temptation to replace important relational parts of life with engaging an LLM, as it always responds immediately with something that feels at least somewhat meaningful.

But in my opinion the worst effect is that there's a temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity - not unlike the ancient Oracle of Delphi - getting sucked deeply into some kind of spiritual engagement with it and destroying their real relationships as a result.

A parallel problem is that just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even if they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.

MattGaiser · 15 days ago
> We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support.

Eh, ChatGPT is inherently more trustworthy than the average person, simply because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego.

Does it care about you more than most people do? Yes, simply by having no interest in hurting you, needing nothing from you, and being willing to never go away.

MattGaiser commented on The surprise deprecation of GPT-4o for ChatGPT consumers   simonwillison.net/2025/Au... · Posted by u/tosh
CodingJeebus · 15 days ago
> or trying prompt additions like “think harder” to increase the chance of being routed to it.

Sure, manually selecting a model may not have been ideal. But prompting your way into the model you want feels like an absurd hack.

MattGaiser · 15 days ago
Anecdotally, saying "think harder" and "check your work carefully" has always gotten me better results.
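A trivial sketch of that habit - only the suffix trick comes from the comment above; `call_model` is a stand-in for whatever chat client you use, not a real API:

```python
# Append generic "effort" cues to a prompt before sending it.
# `call_model` below is a hypothetical placeholder, not a real API.

EFFORT_SUFFIX = "\n\nThink harder, and check your work carefully before answering."

def with_effort_cues(prompt: str) -> str:
    """Return the prompt with the effort cues appended."""
    return prompt + EFFORT_SUFFIX

if __name__ == "__main__":
    print(with_effort_cues("Summarize the tradeoffs discussed above."))
    # response = call_model(with_effort_cues(user_prompt))  # hypothetical
```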
MattGaiser commented on Jules, our asynchronous coding agent   blog.google/technology/go... · Posted by u/meetpateltech
mattnewton · 17 days ago
Getting the environment set up in the cloud is a pain vs. just running in your own environment, IMO. I think we’ll probably see both for the foreseeable future, but I am betting on the worse-is-better of CLI tools and IDE integrations winning over the next 2 years.
MattGaiser · 17 days ago
It’s surprisingly good. In my case, Copilot on GitHub has had no issues setting up temporary environments, every single time.

No special environment instructions required.

MattGaiser commented on GitHub pull requests were down   githubstatus.com/incident... · Posted by u/lr0
MattGaiser · 18 days ago
GitHub gives everyone an extra-long lunch.
MattGaiser commented on Job-seekers are dodging AI interviewers   fortune.com/2025/08/03/ai... · Posted by u/robtherobber
DavidWoof · 19 days ago
Give me an AI chatbot over someone with poor English skills reading a script any day of the week. My problem probably isn't unique; it's probably something fairly obvious that was vague in the instructions.

Now, the important thing is to offer a way to escalate to a human. But I have no problem at all starting with AI; in fact, I honestly prefer it.

MattGaiser · 19 days ago
At this point, I find the humans know so little that an LLM referencing documentation or past support answers is superior.

u/MattGaiser

Karma: 18,006 · Cake day: March 4, 2020
About
My email is 13mdg5@queensu.ca