maltalex · a month ago
It's not just open source, though. Many high-quality sources of information are being (over-)exploited and hurt in the process. StackOverflow is effectively dead [0], the Internet Archive is being shunned by publishers [1], scientific journals are bombarded by fake papers [2] (and, anecdotally, low-effort LLM-driven reviews), projects like OpenStreetMap incur significant costs due to scraping [3], and many more.

We went from data mining to data fracking.

[0]: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...

[1]: https://www.niemanlab.org/2026/01/news-publishers-limit-inte...

[2]: https://www.theregister.com/2024/05/16/wiley_journals_ai/

[3]: https://www.heise.de/en/news/OpenStreetMap-is-concerned-thou...

_aavaa_ · a month ago
StackOverflow was well on its way to death even without ChatGPT; just look at the graph from [0]. It has been in steady, consistent decline since 2014 (minus a very transient blip from COVID).

Then the ChatGPT effect is a sudden drop in visitors. But the rate of decline after that looks more or less the same as pre-ChatGPT.

silverwind · a month ago
StackOverflow was killed by its toxic moderators. I hope it stays online though, because it's a massive source of knowledge, although in many cases already outdated.
ggregoire · 25 days ago
> StackOverflow was well on its way to death even without ChatGPT; just look at the graph from [0]. It has been in steady, consistent decline since 2014.

> [0] https://blog.pragmaticengineer.com/stack-overflow-is-almost-... (monthly questions asked on Stack Overflow)

"monthly questions asked" is a weird metric to measure the decline of StackOverflow tho. How many times are people gonna ask how to compare 2 dates in python, or how to efficiently iterate an array in javascript? According to the duplicates rule on SO, should be once anyway. So it's just inevitable that "monthly questions asked" will forever decrease after reaching its peak, since everything has already been asked. Didn't mean it was dead tho, people still needed to visit the site to read the responses.

A better metric to measure its decline would be "monthly visits", which I guess was still pretty high pre-LLM (hundreds of millions per month?), even if "monthly questions asked" was declining. But now I imagine their "monthly visits" is closer to zero than to 1M. I mean, even if you don't use Claude and its friends, searching anything about programming on Google returns a Gemini answer that probably comes from StackOverflow, removing any reason to ever visit the site…

raxxorraxor · a month ago
Mods made asking questions a very hostile experience, since they had a flawed ideal of SO becoming some form of encyclopedia. So it's no wonder people jumped on another train as quickly as possible, especially since closing a question was so often a mistake when its next-best answer was a long-deprecated solution.

It still has some corners where people are better, but these are mostly the smaller niches.

lelele · a month ago
I don't know about others, but I switched to Reddit or forums for asking and answering questions because they offered a much smoother experience.
torginus · a month ago
We can only hope Reddit shares the same fate. Its only saving grace - as much as it pains me to say it - is that it's still not Facebook.
AbstractH24 · a month ago
StackOverflow is the next iteration of Yahoo Answers.
rurp · 25 days ago
Even if we completely avoid the worst-case scenarios where AI obliterates the job market or evolves into a paperclip maximizer, it has a good shot at being the most destructive technology in generations. The tech industry has already done a lot of harm to our social fabric with social media, gambling, and other addictive innovations replacing real-life experiences and personal connections. This has led to well-documented increases in depression, loneliness, and political extremism.

Now it seems AI is poised to eliminate most of the good innovations that tech brought about, and will probably crank social strife up to 11. It already feels like the foundations of the developed world have gotten shaky; I shudder to think what a massive blow will bring about.

I've read enough history to know that I really, really don't want to live through a violent revolution, or a world war, or a great depression.

renegade-otter · 21 days ago
After the iPhone, every single "innovation" has wreaked all kinds of havoc. Whatever we have is not healthy, and AI is going to supercharge that.
nunez · a month ago
AI also killed Reddit (the API changes were motivated by early GPT, IIRC).

So so SO much good stuff is gone now, and much of what's left is AI cruft.

randomNumber7 · a month ago
I think Reddit was killed by moderation that only allows the most narrow-minded people to have their echo chamber.
gf000 · a month ago
Well, Reddit surely didn't help the issue with how it was all handled.
AbstractH24 · a month ago
AI has certainly killed Reddit.

But where do people turn next? There were a lot of benefits to some of its niche communities.

lawstkawz · a month ago
That’s entropy for you.

Society is a Ship of Theseus; each generation ripping off planks and nailing their own in place.

Having been online since the late 80s (I'm only mid-40s... grandpa worked at IBM and hooked me and my siblings up with the latest kit on the regular), I have read comments like this over and over as the 90s internet, the 00s internet, and now the 2010s state of the "information superhighway" have each been replaced.

Tbh, things have felt quite stagnant and "stuck" for the last 20 years. All the investment in and caretaking of web SaaS infrastructure, JS apps, and jobs for code-camp grads made it feel like tech had come to a standstill relative to the pace of software progress before the last 15-ish years.

d_silin · a month ago
Overpromising and overhyping of AI are making the whole IT industry worse.
sixtyj · a month ago
Every time I start to discuss LLMs/AI with non-IT people, it is the same. Absurd expectations. Or denial of AI.

But as CEOs like Altman, Musk, or Amodei have so much space in the media, they can amplify their products - as good salesmen :)

I think that we are in times similar to 1997-1999: "everything will be web".

csomar · a month ago
Stack Overflow is an interesting case because these days most people ask questions on Discord instead. The data isn't public, and the search functionality is terrible. It makes no sense, but somehow companies still prefer it even though it's inefficient and the same questions keep getting asked over and over.
arcologies1985 · a month ago
> and the same questions keep getting asked over and over.

This is a feature, not a bug. The people asking those questions are new blood, and accepting and integrating them is how you sustain your community.

m4rtink · a month ago
Looks like Discord, at least, has recently decided to finally fix the issues caused by having users & is trying very hard not to have any going forward, through insane identity-verification mandates enforced by the most toxic partner companies ever. :)
karmakurtisaani · a month ago
> the same questions keep getting asked over and over.

More user engagement means users spend more time on the platform. These companies don't have the best interests of users in mind.

nicbou · 25 days ago
Google AI Overviews and ChatGPT are also killing traffic to information websites.
BrandoElFollito · 25 days ago
StackOverflow was destroyed by a steady stream of miserable questions, and then by the infinite ego of moderators and power users.

They forgot that there are still people asking good questions and started to close everything.

A downvote from a bozo weighs the same as one from an expert.

You need to bend over backwards and then lie flat to not annoy the mods.

Meta is a nest of psychopathic narcissists.

And many more.

Stack Exchange sites such as cooking or LaTeX (and other niche ones) work very well. It is just that the people there are not full of themselves.

I started with SE ca. 2014, loved it, participated a lot, accumulated half a million internet points, and now I hate the place. It did not age well.

mcny · a month ago
I feel like we are talking past each other.

1. I write hobby code all the time. I've basically stopped writing it by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to this. I had zero users before and I still have zero users. And that is OK.

2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. The best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on the first attempt, but usually maintainers are kind to newcomers, because who knows, maybe one of those newcomers could become one of the maintainers one day. "Not everyone can become a great artist, but a great artist can come from anywhere."

LLMs changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next-token predictor to do the work? To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day and it is simply not possible to go in depth on everything. We must pick our battles.

I almost forgot what I was trying to say. The bottom line is: if you are doing your own thing like I am, LLMs are great. However, I would ask everyone to have empathy and not spread our diarrhea into other people's kitchens.

If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?

sheepscreek · a month ago
The real problem is that OSS projects do not have enough humans to manually review every PR.

Even if they were willing to deploy agents for initial PR reviews, it would be a costly affair, and most OSS projects won't have that money.

mycall · a month ago
PRs are just that: requests. They don't need to be accepted but can be used in a piecemeal way, merged in by those who find it useful. Thus, not every PR needs to be reviewed.
softwaredoug · a month ago
Many open source projects are also (rightly) risk-averse and care more about avoiding regressions.
bigiain · a month ago
I've been following Daniel from the curl project, who's been speaking out widely about slop-coded PRs and vulnerability reports. It doesn't sound like they have ever had any problem keeping up with human-generated PRs. It's the mountain of AI-generated crap that's now sitting on top of all the good (or even bad-but-worth-mentoring) human submissions.

At work we are not publishing any code or taking part in the OSS community (except as grateful users of others' projects), but even we get clearly AI-enabled emails - just this week my boss forwarded me two that were pretty much "Hi, do you have a bug bounty program? We have found a vulnerability in (website or app obliquely connected to us)." One of them was a static site hosted on S3!

There have always been bullshitters looking to fraudulently invoice you for unsolicited "security analysis". But the bar for generating bullshit that looks plausible enough that someone has to spend at least a few minutes working out whether it's "real" or not has become extremely low, and the velocity with which the bullshit can be generated, have the victim's name and contact details added, and be vibe-spammed to hundreds or thousands of people has become near unstoppable. It's like SEO spammers from 5 or 10 years back, but superpowered with OpenAI/Anthropic/whoever's cocaine.

leoqa · a month ago
My hot take: reviewing code is boring, harder than writing code, and less fun (no dopamine loop). People don't want to do it; they want to build whatever they're tasked with. Making code review easier (human in the loop, etc.) is probably a big rock for the new developer paradigm.
cryptonector · a month ago
Oh no! It's pouring PRs!

Come on. Maintainers can:

  - insist on disclosure of LLM origin
  - review what they want, when they can
  - reject what they can't review
  - use LLMs (yes, I know) to triage PRs
    and pick which ones need the most
    human attention and which ones can be
    ignored/rejected or reviewed mainly
    by LLMs
There are a lot of options.
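A minimal sketch of that last option, in Python, assuming the OpenAI SDK; the model name, label set, and prompt are invented for illustration:

  # Hypothetical triage helper: ask an LLM to sort an incoming PR into
  # "human" (needs maintainer attention), "llm-review" (routine enough
  # for machine review), or "reject" (low-effort or off-topic).
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def triage_pr(title: str, diff: str) -> str:
      prompt = (
          "You triage pull requests for an open source project.\n"
          "Reply with exactly one word: human, llm-review, or reject.\n\n"
          f"Title: {title}\n\nDiff:\n{diff[:8000]}"  # clip huge diffs
      )
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content.strip().lower()

Anything labeled "human" goes to the top of the queue; the rest can wait, get a canned reply, or get a machine-only review.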

And it's not just open source. Guess what's happening in the land of proprietary software? YUP!! The same exact thing. We're all becoming review-bound in our work. I want to get to huge MR XYZ, but I have to review several other people's much larger MRs -- now what?

Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". This can sometimes involve a plain review (when the reviewer is an SME and also an expert in the affected codebase(s)), and it can involve code inspection (much more tedious and exacting). But it might also involve posting transcripts of the LLM conversations used for developing and, separately, reviewing the changes, with SMEs maybe doing lighter reviews when feasible, because we're going to have to scale our review time. We might need to develop a much more detailed methodology, including writing and reviewing initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM will write good code and that LLM reviews will be sensible and catch the sorts of mistakes we expect humans to catch.

nunez · a month ago
The issue here is that LLMs are great for hobbyist stuff like you describe, but LLMs are obscenely expensive to run and keep current, so you almost HAVE to shove them in front of everything (or, to use your example, spread the diarrhea into everyone else's kitchens) to try to pay the bill.
AbstractH24 · a month ago
Destroying open-source coding is only a concern if the code is the end, not the means.

Will AI [in time] bring about a growth in community-built products rather than code? Is that really a bad thing?

conartist6 · 25 days ago
Well, no, not unless it develops its own version of open source. That's kind of the point. Without healthy OSS, even AI's ability to create value would enter freefall.
worthless-trash · a month ago
I pretty much always open an issue, then a PR; they can close it if they want. I usually have 'some' idea of the issue and use the PR as a first stab, hoping the maintainer will tell me if I'm going about it the right or wrong way.

I fully expect most of my PRs to need at least a second or third revision.

pikseladam · a month ago
That's why a PR-blocking feature is coming to GitHub.
devsda · a month ago
If not for the accomplishments, advancements, and potential benefits, the whole AI story since LLMs would look a lot like a sophisticated DDoS attack at multiple levels.

AI bots are literally DDoS'ing servers. Adoption is consuming both physical and computing resources, making them either inaccessible or expensive for almost everyone.

The most significant cost is the human one. We suddenly found ourselves dealing with overwhelming levels of AI content/code/images/video that is mostly subpar. Maybe as AI matures we'll find it easier and have better tools to deal with the volume, but for now it feels like it is coming from bad actors even when it is done by well-meaning individuals.

There's no doubt AI has its uses and is here to stay, but I guess we'll all have to struggle until we reach the point where it is a net benefit. The hype from those financially invested isn't helping a bit, though.

ori_b · a month ago
AI is passive consumption cosplaying as productivity. Any place where humans have to do something is a bug in the product.

Of course it's going to be damaging to places where people actually want to craft things.

Gud · a month ago
Ok, but I am using ChatGPT and Claude to develop usable products five times faster than if I had developed them myself.
fiatpandas · a month ago
Sufficiently advanced technology always looks like a DDoS on society. It overwhelms the senses, and when we come to the realization we cannot comprehend or fully predict its implications it puts a subset of the population into a bit of a crisis. We’re in that phase right now where we just need to brace ourselves.
jongjong · a month ago
My current position is that AI companies should be taxed and the money should be distributed to open source developers.

There is a strong legal basis for this to happen, because the MIT license, one of the most common and most permissive licenses, clearly states that the code is made available for any "person" to use and distribute. An AI agent is not a person, so technically it was never given the right to use the code for itself... It was not even given permission to read the copyrighted code, let alone ingest it, modify it, and redistribute it. Moreover, it is a requirement of the MIT license that the MIT copyright notice be included in all copies or substantial portions of the software... which agents are not doing, despite distributing substantial portions of open source code verbatim, especially when considered in aggregate.

Moreover, the fact that a lot of open source devs have changed their views on open source since AI arrived reinforces the idea that they never consented to their works being consumed, transformed, and redistributed by AI in the first place. So the violation applies both in terms of the literal wording of the licenses and in terms of intent.

Moreover, the usage of code by AI goes beyond just a copyright violation of the code/text itself; they appropriated ideas and concepts without giving due credit to their originators, so there is a deeper ethical component involved: we don't have a system to protect human innovation from AI. Human IP is completely unprotected.

That said, I think most open source devs would support AI innovation, but just not at their expense with zero compensation.

foxglacier · a month ago
> they appropriated ideas and concepts without giving due credit to their originators, so there is a deeper ethical component

No, there isn't. We're all free to copy each other's ideas and concepts and not give any credit to their "originators", who usually aren't even the first people to think of them, just the previous person in the chain of copying ideas. That's how progress happens. No, we should not inhibit our use of knowledge because every idea "belongs" to somebody.

I'm not talking about copyright here, which is different and doesn't usually protect ideas and concepts anyway, at least none that are useful.

jongjong · a month ago
That's why I alluded to the fact that this was more of an ethical matter than a legal matter. Though it should be a legal matter. It's just hard to measure, for the reasons you suggested... Doesn't mean we shouldn't try to approximate something fairer.

We've crossed a threshold whereby economic value creation is not fairly rewarded. The economy became a kind of winner-takes-all game of who can convince people to pay for stuff and lock them in first... Or who can wedge themselves first between large pre-existing corporate money flows.

It's like the office politics, bureaucracy and corruption that everyone hates has become the core reward mechanism of the economy. It was never designed that way but a combination of factors exacerbated by underlying system flaws and perverse incentives got us there.

There's already way too much false advertising. The winners of this game are those who can sell a dream. It doesn't matter if they don't deliver, because by the time people figure it out, they've already sold their startup and moved on to other things. Everyone is kept in a constant state of chasing the next big thing, and it doesn't solve any problems. Human potential is just wasted on creating elaborate illusions which ultimately satisfy no one.

Deleted Comment

strawhatguy · 24 days ago
God, please no. Do not involve government in this. That's a terrible, terrible idea, and it would do the opposite of what's intended.
xtreak29 · a month ago
Reviewing code was already a big bottleneck. With a lot more untested code from authors who don't care about reviewing their own code, it will take an even bigger toll on open source maintainers. Code quality expectations differ between side projects and open source projects. Ensuring good code quality enables long-term maintenance for open source projects, which have to support a feature through the years as a compatibility promise.

Deleted Comment

sodapopcan · a month ago
That's where pair programming came in, but it turns out that most people hate each other so much that they'd rather work with a machine pretending to be a person.

I realize there are many levels to this claim but I'm not being sarcastic at all here.

mycall · a month ago
Using an LLM is a form of pair programming.
nblgbg · a month ago
Isn’t it also destroying the internet with low-quality content and affecting content creation in general? Can LLMs still rely on data from the open internet for training?
bmurphy1976 · a month ago
I'm going to take issue with "AI is destroying the internet". Our short-attention-span, profit-driven culture was already well on its way to trashing everything that was good. AI is only accelerating the inevitable.
slopinthebag · a month ago
Ya, but that's like saying we were going 10 km/h, so it's no big deal that we accelerate to 1000 km/h since we were gonna hit the wall anyway.
api · a month ago
Beat me to it. Facebook/Meta, Twitter/X, Google/YouTube, and TikTok have done quite a bit more damage to the Internet than AI.

The future of the net was closed gated communities long before AI came along. At worst it’s maybe the last nail in the coffin. But the coffin lid was already on and the man inside was already dead.

AI is, I think, more mixed. It is creating more spam and noise, but AI itself is also fascinating to play with. It’s a genuine innovation and playing with it sometimes makes me feel the way I did first exploring the web.

mmooss · a month ago
Agreed: the Internet has long been up to your eyeballs in low-quality content (i.e., bullsh-t). Blaming LLM software for it ignores the well-known reality of just a year or two ago.
krater23 · a month ago
Nope. You just miss the millions of SEO websites that were normally easy to spot and ignore. Now you have millions of AI-generated SEO websites that are difficult to spot and contain only slop that doesn't help you find the information you're searching for.
add-sub-mul-div · a month ago
This is the same stupid reasoning that told us Trump would be a good outcome because the system was imperfect and ruining it fully would magically create a better one.
snarfy · a month ago
It doesn't have to be low quality. It really is another tool like any other. You can put low effort in and get working results. This low-effort working result gets shipped immediately and gives the whole process a bad rap. The source is generated crap that lacks craftsmanship and quality. But this gets AI dismissed when it shouldn't be. You can get quality, well-crafted source code if you make that a goal and keep iterating.
krater23 · a month ago
You can, but when you go through the effort of getting AI to generate good code, you could just write it yourself. So there are only two kinds of code falling out of AI tools: boilerplate code and shitty code.
fullshark · a month ago
The economics of content platforms already started destroying the internet. A lot of the reason the internet was so good for so long was creators' faith that good content would win; that turned out to be false.
randomNumber7 · a month ago
Actually, most of the stuff on the internet I really enjoyed was not profit-driven. What really destroyed it, IMO, is the attention-seeking attitude that results from earning money with advertisements.
strawhatguy · 24 days ago
Possibly, but AIs might shift to more curated content, which has its own dangers, I suppose.

There are definitely challenges, but I've been around long enough to know that we'll adapt and muddle through.

The trouble will come from humans' reaction to the changes, less from the changes themselves.

TiredOfLife · a month ago
Places like Microsoft support community forums predate LLMs, and they are filled with wrong and useless information that drowns out useful information through sheer volume. Same with countless websites that scrape forums and other websites and republish the text. Same with auto-generated YouTube videos; those existed pre-LLM.

Deleted Comment

stickynotememo · a month ago
So what's the alternative? Should we go back to reading encyclopedias from the 2010s? I ask this because the need for information hasn't decreased for human beings just because the capability to produce slop has suddenly increased.
skeeter2020 · a month ago
>> I ask this because the need for information hasn't decreased for human beings just because the capability to produce slop has suddenly increased.

Isn't that the complaint to which you're responding? The SUPPLY side of the equation is the problem, so reading encyclopedias wouldn't impact that. Funny enough, the criticism of Wikipedia was that a bunch of amateurs couldn't beat the quality from a small group of experts curating a controlled collection, and we saw that wasn't true. Maybe AI has pushed this to a new level where we need to tighten access and attention once again?

0xbadcafebee · a month ago
Remember when projects were getting overwhelmed by PRs from students just editing a line in a README so they could win a t-shirt? That was 2020, and they weren't using AI. The open source community has been going downhill for a while. The new generation isn't getting mentored by the old generation, so stable, old-fogey methods established by Linux distributions are eschewed by the new kids. Technological advancement has made open source interactions a little too easy, and unnecessarily fragile. Some ecosystems focus way too much on crappy reusable components and don't focus enough on supply-chain security.

Here's the good news: AI cannot destroy open source. As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive. It wouldn't be a bad thing for us to standardize open source a bit more, like templates for contributors' guides, automation to help troubleshoot bug reports, and training for new maintainers (to help them understand they have choices and don't need to give up their life to maintain a small project). And it's fine to disable PRs and issues. You don't have to use GitHub, or any service at all.

skeeter2020 · a month ago
I get your core point, but the reality is it CAN destroy the ecosystem around OSS upon which it heavily relies: discoverability and community. I don't think you're accurately representing just how much noise and confusion AI slop creates. When it comes to using GitHub, it's not because it is an amazing application, but because that's where the people are.
TiredOfLife · a month ago
> Remember when projects were getting overwhelmed by PRs from students just editing a line in a README so they could win a t-shirt? That was 2020, and they weren't using AI.

A similar thing happened again when a popular educational video demonstrated adding your name to a popular GitHub repo and called on viewers to do the same.

charcircuit · a month ago
>As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive.

You don't even need somebody. AI agents themselves can make and share projects.

overfeed · a month ago
> AI agents themselves can make and share projects

Copyright can't be assigned to agents. You can't have Open Source without copyright as the enforcement mechanism. Millions of AI-generated, public-domain projects with no social proof to distinguish them is uncharted territory. My prediction is that it would be shit territory, and worse than what we have currently.

strawhatguy · 24 days ago
But open source didn't follow the Linux model. It followed the GitHub model of being open to anyone.

If anything, the hierarchy of trust the Linux model uses will be more important now.

debarshri · a month ago
This weekend, I found an issue with Microsoft's new Golang version of sqlcmd. I ran Claude Code and fixed the issue, which I wouldn't have done if agent stuff did not exist. The fix was contributed back to the project.

I think it is about who is contributing, their intention, and various other nuances. I would still say it is a net good for the ecosystem.

atomicnumber3 · a month ago
Did you actually fix the issue, or did you fix the issue and introduce new bugs?

The problem is the asymmetry of effort. You verified you fixed your issue. The maintainers have to verify literally everything else (or they're the ones taking the hit if they're just LGTMing it).

Sorry, I am sure your specific change was just fine. But I'm speaking generally.

How many times have I, at work, looked at a PR and thought "this is such a bad way to fix this that I could not have come up with something so comically bad if I tried"? And naturally I couldn't say this to my fine coworker, whose zeal exceeded his programming skills (partly because someone else had already approved the PR after "reviewing" it...). No, I had to simply fast-follow with my own PR, which contained a squashed revert of his change plus the correct fix, so that it didn't introduce race conditions into parallel test runs.

And the submitter, of course, has no ability to gauge whether their PR is the obvious trivial solution or comically incorrect. Therein lies the problem.

snovv_crash · a month ago
This is why open source projects need good architecture and high test coverage.

I'd even argue we need a new type of test coverage, something that traces back the asserts to see what parts of the code are actually constrained by the tests, sort of a differential mutation analysis.
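A toy version of that idea, in Python; the function, the asserts, and the operator flips are all invented for illustration:

  # Flip one operator at a time, re-run the asserts, and report which
  # mutants survive. Surviving mutants mark behavior the tests don't
  # actually constrain.
  SRC = "def is_adult(age):\n    return age >= 18\n"

  def load(src):
      ns = {}
      exec(compile(src, "<mutant>", "exec"), ns)
      return ns["is_adult"]

  def run_tests(is_adult):
      assert is_adult(30) is True
      assert is_adult(5) is False
      # note: the boundary (age == 18) is never asserted

  run_tests(load(SRC))  # the original passes

  for old_op, new_op in ((">=", ">"), (">=", "<=")):
      mutant = SRC.replace(old_op, new_op, 1)
      try:
          run_tests(load(mutant))
          print("survived:", old_op, "->", new_op)
      except AssertionError:
          print("killed:  ", old_op, "->", new_op)

The ">=" to ">" mutant survives, showing the age-18 boundary is untested; a real tool would run this per assert to trace which asserts constrain which lines.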

rixed · a month ago
This could have happened before AI agents too, but yes, that's another step in that direction.
mysterydip · a month ago
I think the problem is that determining who is contributing, their intention, and those other nuances takes a human's time and effort. And at some point the number of contributions becomes too much to sort through.
debarshri · a month ago
I think building enough barriers, processes, and mechanisms might work. I don't think it needs to be human effort.
kermatt · a month ago
If you used Claude to fix the issue, built and tested your branch, and only then submitted the PR, the process is not much different from pre-LLM days.

I think the problem is where bug-bounty or reputation chasers are letting LLMs write the PRs _without_ building and testing. They seek output, not outcomes.

thrance · a month ago
Genuinely interested in the PR, if you would kindly care to link it.

Deleted Comment

scwoodal · a month ago
View the project's open pull requests and compare usernames.

https://github.com/microsoft/go-sqlcmd/pulls

softwaredoug · a month ago
That's the positive case IMO - a human (you) remains responsible for the fix. It doesn't matter if AI helped.

The negative case is free-running OpenClaw slop cannons that could even be malicious.

_joel · a month ago
I agree, but that's assuming the project accepts AI-generated code, of course - especially given the legal questions around accepting commits written by an AI trained on god knows what dataset.
krater23 · a month ago
And are you sure that you fixed it without creating 20 new bugs? For the reader, this could mean that you never understood the bug, so how can you be sure that you've done anything right?
saghm · a month ago
How do you make sure you don't create bugs in the code you write without an LLM? I imagine for most people the answer is a combination of self-review and testing. You can just do those same things with code an LLM helps you write, and at that point you have the same level of confidence.
debarshri · a month ago
I'm pretty sure I did not create bugs, because I validated the change thoroughly; I had to deploy it into production in a fintech environment.

So I am confident in the change, as well as convinced about it. But then, I know what I know.

Aurornis · a month ago
Using an LLM as an assistant isn’t necessarily equivalent to not understanding the output. A common use case of LLMs is to quickly search codebases and pinpoint problems.
mycall · a month ago
Code complexity is often the cause of more bugs. Complexity naturally comes from more code; it is not uncommon. As they say, the best code I ever wrote was no code.
silverwind · a month ago
If the test coverage is good, it will most likely be fine.