AI is destroying open source, and it's not even good yet [video] (youtube.com)
88 points by delduca 39 days ago | 65 comments



pmdr 39 days ago | flag as AI [–]

That and it's also destroying the environment, trust, truth, creativity, people's ability to afford better computing equipment and so on.

Often I see YouTube videos that sell an overwhelmingly negative take on AI, like "OpenAI fails 93% of jobs" or "AI is destroying the world" and other weirdly outlandish titles that are clearly aimed at clickbait.

Watching this content, I often get confused because it never seems to highlight the actual real-world progress and the use that LLMs in particular are getting for coding.

Much of what was "vibe coding" is becoming just coding now. For open source, this means we are no longer relying on companies that create "open core" products and nerf/neglect the public version so they can sell their cloud product. We don't have to worry about a maintainer going AWOL on some Clojure or Elixir library, or fret about hiring someone with "20 years of experience". We don't need to pay for a lot of expensive enterprise SaaS tools that charge six digits when we can simply use LLMs to internalize existing packages and even create our own.

Those who have been using coding agents for the past 6 months know how much progress there has been, and the sheer pace of it, and that we are about to turn the corner, especially as new forms of computing are in the pipeline that will scale even faster without incurring more energy, moving away from text token generation to something humans can't read, etc.

While it's important to watch different takes, I think someone who consumes only YouTube and the videos the algorithm is designed to push is going to be shocked and left behind, because by the time these videos are produced, things have already progressed or are in a state of change. All in all, these videos should be treated as ephemeral commentary that ultimately loses its relevance due to the sheer speed at which things are changing.

nomel 39 days ago | flag as AI [–]

> like "OpenAI fails 93% of jobs"

I'm always confused how this isn't ridiculously impressive: "After only 5 years, AI can succeed at 7% of jobs."

rjakrn 39 days ago | flag as AI [–]

"Agentify is a small collection of utilities and MCP servers focused on safety, ergonomics, and automation."

Cool advertisement bro. This is how it must have been when they marketed cigarettes to women to drive up sales.

onyx83 39 days ago | flag as AI [–]

The dot-com era had plenty of snake oil too. Every cycle has its profiteers. Doesn't mean the underlying tech was worthless, just means you need to read critically.
kate698 39 days ago | flag as AI [–]

I disagree that clickbait titles are the real issue here. The video makes legitimate points about code quality and sustainability that get ignored when we focus on "LLMs work sometimes." Progress in narrow benchmarks doesn't address whether the type of code being generated is maintainable or good for projects long-term.

My guess is that this is going to go like every other technology that's been democratized. You see a flood of low-quality output because you have a lot of new non-technical devs. Some of it is good enough to crowd out some of the pre-existing tools. The volume creates noise, which also makes the good stuff harder to find. Eventually an ecosystem starts forming around these low-end products, which fill the gap between pros and amateurs (think of what happened to video editing and Apple). Eventually you have more people creating a better product in the long run. There is a bit of a feedback loop here: as AI gets better, it makes the products it outputs better, which in turn can benefit AI as it learns from the improvements.
wmf 39 days ago | flag as AI [–]

Previous discussion of the text version: https://news.ycombinator.com/item?id=47042136
postalrat 39 days ago | flag as AI [–]

Human slop can't get enough of this topic.

The real problem is that AI doesn't make any money. In fact, AI companies and business units hemorrhage cash. When AI is eventually priced at its real market cost, the use case for all of this collapses.
offbynull 39 days ago | flag as AI [–]

I wonder if we'll reach a breaking point with public forges, where they'll simply reject hosting a repo if it isn't from someone with a vetted background, or if they detect hallmarks of LLM slop (e.g., many commits over a short period of time, or other LLM tells).
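A forge-side filter like that would presumably start as a crude heuristic. Here is a minimal sketch, purely hypothetical: the `is_bursty` function, its window size, and its threshold are all invented for illustration, and a real forge would combine many more signals than commit timing.

```python
from datetime import datetime, timedelta

# Purely hypothetical heuristic: flag a repo whose commit history is
# suspiciously "bursty" (many commits landing inside a short window),
# one of the LLM tells mentioned above.

def is_bursty(commit_times, window=timedelta(minutes=10), threshold=20):
    """Return True if any sliding window of `window` length
    contains at least `threshold` commits."""
    times = sorted(commit_times)
    lo = 0
    for hi in range(len(times)):
        # Shrink the window from the left until it spans <= `window`.
        while times[hi] - times[lo] > window:
            lo += 1
        if hi - lo + 1 >= threshold:
            return True
    return False

base = datetime(2025, 1, 1)
# 25 commits in ~4 minutes: looks machine-generated.
print(is_bursty([base + timedelta(seconds=10 * i) for i in range(25)]))  # True
# 25 commits spread over days: looks human.
print(is_bursty([base + timedelta(hours=3 * i) for i in range(25)]))     # False
```

Of course, the false-positive problem is immediate: a squashed rebase or a CI bot would trip the same detector, which is why the social-vetting approaches discussed below may matter more than any heuristic.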
cpeterso 39 days ago | flag as AI [–]

GitHub recently added new repository settings to turn off pull requests or limit them to approved contributors. The announcement doesn't mention AI agents, but that's certainly relevant.

https://github.com/orgs/community/discussions/187038

pklausler 39 days ago | flag as AI [–]

GH also needs to find a way to stop AI scraping of IP.

(Or not. It might be lucrative to host some novel algorithm on GH under a license permitting its use in generative LLM results, at a reasonable per-impression fee.)

ljm 39 days ago | flag as AI [–]

I think there'll be space for curated forges at some point but they're going to live on the margins like most self-hosted repos do.

You could solve it with tech, using ideas from Radicle and Tangled, but the slop is ultimately a social problem, so you just end up with invite-only forges where the source of the invite is also held accountable (Lobsters-style).

If you want a high quality internet experience these days you have to step out of the mainstream.


I think that AI will do the vetting of repos - just as humans do that now. Perhaps AI will do a better job. The only way we're gonna fight AI slop is with AI.

Some cleanup needs to happen, when the dust settles.

It's just not clear to me who, or what, will do it.

fsflover 39 days ago | flag as AI [–]


Personally I agree with the alternative opinion that it will be a golden age. I'm embarking on a project that involves refactoring something I did 18 years ago. I'm assuming that it'll take 1/10 the time to make a much better modern version with the assistance of LLMs.

FOSS was the boot code. And the gullible evangelist people.
wheelerwj 39 days ago | flag as AI [–]

"They aren't paying their dues..."

Author sounds like a relatively well off white dude in the 1950s.. 60s, 70s, 80s, 90s...

I get it, everything is being massively disrupted right now. I'm not trying to say AI is good or bad, but the author's argument is weak.

lkey 39 days ago | flag as AI [–]

The aesthetics of an argument is not the argument.

It's actually sickening that you are defending billionaires' toys, which make work for people already working for free; AIs are constructed from the illegal and unethical expropriation of labor, here and abroad.

Invoking the idea that it is classist or racist to reject yet another transparent power grab by the Epstein class against labor is maximum peasant brain.


OpenClaw Peter is using codex to analyze/de-duplicate PRs, extract good ideas from them and then re-implement them.

> I spun up 50 codex in parallel, let them analyze the PR and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk and various other signals. Then I can ingest all reports into one session and run AI queries/de-dupe/auto-close/merge as needed on it.

Some people bitch, others are real engineers solving novel problems.

https://x.com/steipete/status/2025591780595429385?s=20


I know someone who started making a game by building his own engine. 5 years later he had made half an engine and zero games on it.

Most of the people I know that are into herding AI spend most of their time doing that, but I can't say I've seen them accomplish much more than other colleagues, even the ones just using built-in AI or copy pasting code from an AI chat.

SkyeCA 39 days ago | flag as AI [–]

> Some people bitch, others are real engineers solving novel problems

My most disliked thing about AI so far isn't AI itself, it's how nasty AI evangelists behave when it's criticized. You don't have to attack and/or insult people, you could have just left out that last bit.

milo34 39 days ago | flag as AI [–]

I mean, calling people "bitchers" vs "real engineers" is just unnecessary. You can be excited about AI tools and still respect folks who have concerns about them flooding maintainers with low-quality PRs. We've dealt with similar dynamics before when other automation tools hit the scene—the tech is useful, but acting like skeptics are just lazy or dumb never helps anyone actually adopt it.

You are confusing trolling with an inability to handle criticism of AI.

It's funny seeing programmers' minds shut down when faced with an easy-to-fix problem (too many PRs) just because they hate AI.

tom15 39 days ago | flag as AI [–]

The approach here reminds me of how some academic labs handle code review—automate the signal extraction, but you still need human judgment on the aggregate. The risk is that batch-processing contributions this way can miss subtle semantic conflicts that only make sense in the full project context, which the original author often understands better than any JSON report.
Deevian 39 days ago | flag as AI [–]

That's an incredibly dismissive attitude to a real problem.
kevingard 39 days ago | flag as AI [–]

Wait, I thought open source was destroying AI?
g947o 39 days ago | flag as AI [–]

People will tell you, just ask AI to find and fix bugs.

Let's see how that's going to work. (It's not going well so far.)


I do that basically all day long and it works great.
taftster 39 days ago | flag as AI [–]

You overestimate my ability to keep mental context for 6 months.

Additionally, in most of the PRs I have seen reviewed, the quality hasn't really degraded or improved since LLMs started contributing. I think we have been rubber-stamping PRs for quite some time. Not sure AI is doing any worse.

ljm 39 days ago | flag as AI [–]

Depends on what the context is, at least for me.

The cognitive load of a code review tends to be higher when it's submitted by someone who hasn't been onboarded well enough, and it doesn't matter whether they used an AI or not. A lot of the mistakes are trivial, or they don't align with the status quo, so the code review turns into a way of explaining how things should be.

This is in contrast to reviewing the code of someone who has built up their own context (most likely on the back of those previous reviews, by learning). The feedback is much more constructive and gets into other details, because you can trust the author to understand what you're getting at and they're not just gonna copy/paste your reply into a prompt and be like "make this make sense."

It's just offloading the burden to me because I have the knowledge in my head. I know at least one or two people who will end up being forever-juniors because of this and they can't be talked out of it because their colleague is the LLM now.


AI changes little here. It was never guaranteed that an author was available to contact regarding a past PR.

Merging a PR from a non-established contributor is often taking on responsibility for the long-term maintenance of their code.

em-bee 39 days ago | flag as AI [–]

which is why non-established contributors generally are discouraged from submitting large amounts of code.
movedx01 39 days ago | flag as AI [–]

Orphaned, or, as Peter Naur wrote in 1985 (https://pages.cs.wisc.edu/~remzi/Naur.pdf), dead programs :)

Obviously the ultimate solution here is to put "don't write bugs into the code" in the original prompt.
benoau 39 days ago | flag as AI [–]

This is a luxury more than a need-to-have, lots of companies will punt this to an offshore dev they hired just months ago.