Readit News
badosu commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
ivanjermakov · 3 days ago
Your story highlights the clash of two mindsets regarding open source development. On one side there are artisans who treat programming as a craft and care about every line of code. On the other side, vibe-coders who might not even have seen the code the LLMs generated and don't care about it as long as the program works.

And of course there is everything in between.

badosu · 3 days ago
Some context: this is a tightly knit community, where I often interact with the contributors in a conversational manner.

I know they are well-meaning and intend to push the project forward. I don't care whether they are vibe-coding. I care about knowing that they are vibe-coding, so I can help them vibe-code in a way that actually achieves their goal, or help them realize early that they lack the capacity to contribute (not necessarily through any fault of their own; maybe they just need orientation on how to reason about problems and solutions, or on their use of the tools).

badosu commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
ivanjermakov · 3 days ago
Hot take: if you can't spot any issues in the code review, it's either good code, code that needs further changes, or a review that was not done properly. I don't see how "I used LLMs" fits here, because it says nothing about the quality of the code submitted.

If such a mention meant increased reviewer attention, then every code review should include it.

badosu · 3 days ago
I contribute to an open-source game with decades of legacy code that employs many unusual patterns.

Two weeks ago someone asked me to review a PR, which I did, pointing out some architectural concerns. The code was not necessarily bad, but it added boilerplate to the point that it required some restructuring to keep the codebase maintainable. I couldn't care less whether it was written by AI in this case; it was not necessarily simple, but it was straightforward.

It took considerable effort to compose a thoughtful and approachable description of the problem and the possible solutions. Keep in mind this is a project with considerable traction, and a high share of contributions come from enthusiasts, so having a clear, maintainable, and approachable codebase is sometimes the most important requirement.

They asked for a second pass, but it took me two weeks to get around to it; in the meantime they sent three different PRs, one closed after the other. I found it a bit strange, then put some effort into reviewing the last iteration. It had half-baked solutions: for example, there were state-cleanup functions, but the state was never written in the first place, something that would never pass review normally. I gave them the benefit of the doubt and pointed it out, though I suspected it was most likely AI-generated.

Then they showed me another variation of the PR where they implemented a whole different architecture: an incredibly overengineered fluent interface to solve a simple problem, with many layers of indirection, reflecting complete indifference to the more nuanced considerations relevant to the domain and project that I had tried to impart. The code might work, but even if it does, it's obvious that the change is a net negative for the project.

What I suspected was indeed the case, as they finally disclosed the use of AI, but that is not necessarily the problem, as I hope to convey. The problem is that I was unable to gauge the submitter's commitment to the humble job of _understanding_ what I proposed. The proposal, in the end, became mere tokens for inclusion in a prompt. Disclosure wouldn't necessarily have made me take the PR less seriously; instead, I would have invested my time in the more productive effort of orienting the submitter on whether their goal was tractable.

I would rather have known their intent, or gauged their capacity, beforehand. It would have been better for both of us: they would have had their initial iteration merged (which was fine; I would just have shrugged off the refactor for another occasion), and I wouldn't have lost time.

badosu commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
rane · 3 days ago
Using AI to generate code in a PR does not necessarily mean, however, that the user has not taken the time to understand the changes and is not willing to learn. There are AI users who generate whole files without understanding the contents, and there are AI users who generate the exact same files but knew in advance what they wanted, merely using AI as a tool to save typing.

The intention here seems to be to filter out low-quality submissions whose only purpose is to pad a GitHub résumé with contributions to a highly starred repo. Not sure the people doing that will disclose their use of AI anyway.

badosu · 3 days ago
> The intention here seems to be to filter out low-quality submissions whose only purpose is to pad a GitHub résumé with contributions to a highly starred repo. Not sure the people doing that will disclose their use of AI anyway.

That is a fair way to see it, and I agree that it is a losing battle if your battle is enforcing this rule.

However, from a different perspective, if one sees it more as a gentleman's agreement (which it de facto is), it fosters an environment where like-minded folks can cooperate better.

The disclosure helps the reviewer look for common issues in AI-generated code, and a specific disclosure helps even more.

For example, say a submitter sends a PR disclosing that a substantial amount of the code was AI-assisted but all the tests were written manually. The disclosure lets the reviewer look at the tests first, to gauge how well the submitter understood the requirements and constrained the solution to them, and then look at the solution from a high-level perspective before going into the details. It respects the reviewer's time, not necessarily because the reviewer is above AI usage, but because without disclosure the whole collaborative process falls apart.

Not sure how long this can work, though; for now it's still easy to distinguish bad code written by a human from AI slop. In the first case your review and assistance are an investment in the collaborative process; in the latter, they're just some unread text included in the next prompt.

badosu commented on uBlock Origin Lite now available for Safari   apps.apple.com/app/ublock... · Posted by u/Jiahang
lulzury · 20 days ago
That’s a pretty limited way of looking at the world: “Why would someone only do x instead of y?”

Part of learning to understand others means developing cognitive flexibility.

badosu · 19 days ago
It would only be limited if it _literally_ wasn't a question, right?

I'm opening myself up to understanding things. I don't understand the combativeness.

badosu commented on GitHub pull requests were down   githubstatus.com/incident... · Posted by u/lr0
rileymichael · 20 days ago
badosu · 20 days ago
Reminder that Github _still_ does not support IPv6: https://github.com/orgs/community/discussions/10539
badosu commented on uBlock Origin Lite now available for Safari   apps.apple.com/app/ublock... · Posted by u/Jiahang
badosu · 20 days ago
I don't understand why someone with some technical education would use any Chromium-based browser instead of Firefox. Any ideas?
badosu commented on Show HN: Beyond Z²+C, Plot Any Fractal   juliascope.com/... · Posted by u/akunzler
mg · a month ago
I'm not sure if every fractal can be expressed as an iterative formula f(z,c).

In 2012 I found a fractal using a fundamentally different approach. It arises when you colorize the complex plane by giving each pixel a grey value corresponding to the percentage of Gaussian integers that it can divide:

https://www.gibney.org/does_anybody_know_this_fractal

badosu · a month ago
You can make a fractal out of the state graph of a double pendulum: https://www.youtube.com/watch?v=dtjb2OhEQcU

I don't doubt there could be an iterative formula that maps to it, but I'd be very surprised.
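An "iterative formula" fractal in the f(z, c) sense that mg describes can be sketched with the classic escape-time loop for z² + c. This is a minimal illustration of the general technique, not JuliaScope's actual implementation; the function name is my own:

```python
# Minimal escape-time sketch for f(z, c) = z^2 + c (the Mandelbrot iteration).
# A point c is treated as "in the set" if z stays bounded for max_iter steps.

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| exceeds 2, or max_iter if it never does."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2:  # once |z| > 2 the orbit is guaranteed to diverge
            return i
        z = z * z + c
    return max_iter

# c = 0 never escapes; c = 1 escapes quickly (orbit: 0, 1, 2, 5, ...).
print(escape_time(0j))      # 100 (bounded)
print(escape_time(1 + 0j))  # 3 (escapes)
```

Coloring each pixel by its escape count gives the familiar plots; the open question in the thread is whether systems like the double pendulum's state graph admit any such per-pixel iteration.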

badosu commented on OpenAI wins $200M U.S. defense contract   cnbc.com/2025/06/16/opena... · Posted by u/erikrit
paulvnickerson · 2 months ago
Paul Graham tends to be against Silicon Valley participation in the defense industry, while Marc Andreessen is all for it. Palmer Luckey makes a very good case for why AI applications in the military are a very good thing, and I tend to agree with him: https://www.youtube.com/watch?v=ooMXEwl7N8Y

It's American technology and industry that won the major wars of the 20th century. If Western technology companies abdicate that responsibility, we will all need to learn Mandarin.

badosu · 2 months ago
I really appreciated the information from folks in the industry that I was unaware of.

It's also a fairly acceptable, and hopefully self-evident, statement that technology and industry are relevant to "winning wars" (regardless of the nation), even with the loaded assumption that "[American tech and industry]... won the major wars of the 20th century".

> If Western technology companies abdicate that responsibility, we will all need to learn Mandarin.

This bit killed it for me. It's completely reasonable, and I encourage others to understand and perhaps accept, a Realist[0] worldview in which major powers are obviously engaged in security competition.

I observe too often, and it saddens me, that "Western" people might truly believe things like this.

My country and region were actively interfered with, militarily and politically, by the US. We were never approached for deals as respectful partners; it was always a condescending, agenda-driven deal with strings attached. Chinese relations with my country, and the economic opportunities they bring, flourish and give me hope that they might kickstart improvements I've longed for since my teens (infrastructure, particularly rail, for me).

Don't get me wrong, I am extremely suspicious of China's politics and goals. And of course this is part of its soft-power ambitions; believe me, we "non-Westerners" are perhaps not as dumb as we might seem (perhaps as problematic in other areas, though).

Unless the US, EU, Israel, or whatever is considered "Western" succumbs to paranoia and believes its own propaganda that China should be nuked, you should indeed learn Mandarin, though for a somewhat different reason than you perhaps assumed in that statement: they treat others with (even if some underlying goal might exist) actual respect.

Look at yourselves in the mirror.

0 - https://en.wikipedia.org/wiki/Realism_(international_relatio...

badosu commented on Why Quantum Engineering Is Emerging as a Distinct Industrial Sector   spectrum.ieee.org/quantum... · Posted by u/rbanffy
condensedcrab · 6 months ago
Quantum optics experiments are probably the most accessible for garage hobbyists, but it's still a ~$k hobby once you start buying lasers, electronics, and optics.
badosu · 6 months ago
I remember a very simple experimental setup we did in my first physics class. My university at the time was under construction (we didn't have equipped labs), and we got a result similar to the double-slit experiment by shining a cheap laser at an angle onto a CD/DVD. Something like this: https://www.reddit.com/r/Physics/comments/paqnya/doubleslit_...
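The CD/DVD in that setup acts as a reflective diffraction grating, so the maxima follow the grating equation d·sin(θ) = m·λ. A rough back-of-the-envelope check, assuming a typical DVD track pitch of ~740 nm and a ~650 nm red laser pointer (both values are assumptions, not from the comment):

```python
import math

# Grating equation: d * sin(theta) = m * lambda.
# Assumed values: DVD track pitch d ~ 740 nm, red laser wavelength ~ 650 nm.
d_nm = 740.0
wavelength_nm = 650.0

for m in range(1, 4):
    s = m * wavelength_nm / d_nm  # sin(theta) for diffraction order m
    if s > 1:
        print(f"order m={m}: no maximum (sin(theta) would exceed 1)")
    else:
        theta = math.degrees(math.asin(s))
        print(f"order m={m}: theta ~ {theta:.1f} degrees")
```

With these numbers only the first-order maximum exists, at a very steep angle (~61 degrees), which is why the spots spread so widely even across a classroom.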
badosu commented on Natural occurring molecule rivals Ozempic in weight loss, sidesteps side effects   medicalxpress.com/news/20... · Posted by u/pseudolus
cma · 6 months ago
Don't you also need a bigger heart when you gain weight, and not need as big a one at a lower weight? Liposuction and amputations can also result in muscle loss in the heart from its having less work to do.
badosu · 6 months ago
You don't want too much hypertrophy in the heart, for sure. My understanding, though, is that it's very hard (almost impossible?) for it to be a problem without exogenous hormones or some other condition that allows you to accrue an abnormal amount of muscle mass (e.g. myostatin deficiency).

Edit: I mean someone with a healthy body-fat percentage. Of course having to pump blood through a 300 lb (~140 kg) body is problematic for the heart, whether the composition is mostly fat or mostly muscle. My point is that it's just much easier to get fat enough for it to be a problem than muscular enough, without exogenous hormones or an abnormal condition.

u/badosu

Karma: 1092 · Cake day: May 12, 2014