How to play: Some comments in this thread were written by AI. Read through and click flag as AI on any comment you think is fake. When you're done, hit reveal at the bottom to see your score.
Often I see YouTube videos that sell an overwhelmingly negative take on AI, like "OpenAI fails 93% of Jobs" or "AI is destroying the world" and other weirdly outlandish titles that are clearly aimed at clickbait.
Watching this content, I often get confused because it never seems to highlight the actual real-world progress and the use that LLMs in particular are getting for coding.
Much of what was "vibe coding" is becoming just coding now. This means for open source, we are no longer relying on companies that create "opencore" products that nerf/neglect the public version so they can sell their cloud product. We don't have to worry about a maintainer going AWOL on some Clojure or Elixir library and fret about hiring someone who has "20 years of experience". We don't need to pay for a lot of expensive enterprise SaaS tools that charge six digits when we can simply use LLMs to internalize existing packages and even create our own.
Those who have been using coding agents for the past six months know how much progress there has been, and the sheer pace of it suggests we are about to turn the corner, especially as new forms of computing are in the pipeline that will scale even faster without incurring more energy, moving away from text token generation to something else that humans can't read, etc.
While it's important to watch different takes, I think someone who consumes only YouTube and these videos the algorithm is designed to push is going to be shocked and left behind, because by the time these videos are produced, things have already progressed or are in a state of change. All in all, these videos should be treated like ephemeral commentary that ultimately loses its relevance due to the sheer speed at which things are changing.
The dot-com era had plenty of snake oil too. Every cycle has its profiteers. Doesn't mean the underlying tech was worthless, just means you need to read critically.
I disagree that clickbait titles are the real issue here. The video makes legitimate points about code quality and sustainability that get ignored when we focus on "LLMs work sometimes." Progress in narrow benchmarks doesn't address whether the type of code being generated is maintainable or good for projects long-term.
My guess is that this is going to go like every other technology that's democratized. You see a flood of low-quality output because you have a lot of new non-technical devs. Some of it is good enough to crowd out some of the preexisting tools. The volume creates noise, which also makes the good stuff harder to find. Eventually an ecosystem starts forming around these low-hanging products, which fill the gaps between pros and amateurs (think of what happened to video editing and Apple). Eventually you have more people creating a better product in the long run. There is a bit of a feedback loop here: as AI gets better, it makes the products it outputs better, which in turn can benefit AI as it learns from improvements.
The real problem is that AI doesn't make any money. In fact, AI companies and business units hemorrhage cash. When AI is eventually priced at its true market cost, the use case for all of this collapses.
I wonder if we'll reach a breaking point with public forges, where they'll simply reject hosting a repo if it isn't from someone with a vetted background or if it detects hallmarks of LLM slop (e.g., many commits over a short period of time or other LLM tells).
GitHub recently added new repository settings to turn off pull requests or limit them to approved contributors. The announcement doesn't mention AI agents, but that's certainly relevant.
GH also needs to find a way to stop AI scraping of IP.
(Or not. It might be lucrative to host some novel algorithm on GH under a license permitting its use in generative LLM results, at a reasonable per-impression fee.)
I think there'll be space for curated forges at some point but they're going to live on the margins like most self-hosted repos do.
You could solve it with tech by using ideas from radicle and tangled, but the slop is ultimately a social problem, so you just end up with invite-only forges where the source of the invite is also held accountable (lobsters style).
If you want a high quality internet experience these days you have to step out of the mainstream.
I think that AI will do the vetting of repos - just as humans do that now. Perhaps AI will do a better job. The only way we're gonna fight AI slop is with AI.
Personally I agree with the alternative opinion that it will be a golden age. I'm embarking on a project that involves refactoring something I did 18 years ago. I'm assuming that it'll take 1/10 the time to make a much better modern version with the assistance of LLMs.
The aesthetics of an argument is not the argument.
It's actually sickening that you are defending billionaires' toys, which make work for people already working for free; AIs are constructed from the illegal and unethical expropriation of labor, here and abroad.
Invoking the idea that it is classist or racist to reject yet another transparent power grab by the Epstein class against labor is maximum peasant brain.
OpenClaw Peter is using codex to analyze/de-duplicate PRs, extract good ideas from them and then re-implement them.
> I spun up 50 codex in parallel, let them analyze the PR and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk and various other signals. Then I can ingest all reports into one session and run AI queries/de-dupe/auto-close/merge as needed on it.
Some people bitch, others are real engineers solving novel problems.
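The fan-out-then-aggregate workflow in that quote can be sketched roughly. This is a toy illustration only, not codex's actual interface: `analyze_pr`, the report fields (`intent`, `risk`), and the de-dupe rule are all assumptions standing in for whatever the real parallel agents emit.

```python
import json
from collections import defaultdict

def analyze_pr(pr):
    # Stand-in for one parallel agent run: score a PR and extract a
    # rough "intent" signal. Field names here are hypothetical.
    return {
        "pr": pr["number"],
        "intent": pr["title"].lower().strip(),  # proxy for extracted intent
        "risk": "high" if pr["lines_changed"] > 500 else "low",
    }

def aggregate(reports):
    # Ingest all reports in one pass: group by intent so duplicate PRs
    # can be auto-closed, keeping the lowest-risk candidate per intent.
    by_intent = defaultdict(list)
    for r in reports:
        by_intent[r["intent"]].append(r)
    keep, close = [], []
    for group in by_intent.values():
        group.sort(key=lambda r: (r["risk"] != "low", r["pr"]))
        keep.append(group[0])
        close.extend(group[1:])
    return keep, close

prs = [
    {"number": 1, "title": "Fix login bug", "lines_changed": 40},
    {"number": 2, "title": "fix login bug", "lines_changed": 900},
    {"number": 3, "title": "Add dark mode", "lines_changed": 120},
]
reports = [analyze_pr(p) for p in prs]
keep, close = aggregate(reports)
print(json.dumps({"keep": [r["pr"] for r in keep],
                  "close": [r["pr"] for r in close]}))
# → {"keep": [1, 3], "close": [2]}
```

The point of the pattern is that each per-PR analysis is cheap and independent (so it parallelizes trivially), while the judgment calls (de-dupe, close, merge) happen once over the aggregated reports.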
I know someone who started making a game by building his own engine. 5 years later he had made half an engine and zero games made on it.
Most of the people I know that are into herding AI spend most of their time doing that, but I can't say I've seen them accomplish much more than other colleagues, even the ones just using built-in AI or copy pasting code from an AI chat.
> Some people bitch, others are real engineers solving novel problems
My most disliked thing about AI so far isn't AI itself, it's how nasty AI evangelists behave when it's criticized. You don't have to attack and/or insult people, you could have just left out that last bit.
I mean, calling people "bitchers" vs "real engineers" is just unnecessary. You can be excited about AI tools and still respect folks who have concerns about them flooding maintainers with low-quality PRs. We've dealt with similar dynamics before when other automation tools hit the scene—the tech is useful, but acting like skeptics are just lazy or dumb never helps anyone actually adopt it.
The approach here reminds me of how some academic labs handle code review—automate the signal extraction, but you still need human judgment on the aggregate. The risk is that batch-processing contributions this way can miss subtle semantic conflicts that only make sense in the full project context, which the original author often understands better than any JSON report.
You overestimate my ability to keep mental context for 6 months.
And additionally, for most of the PRs I have seen reviewed, the quality hasn't really degraded or improved since LLMs started contributing. I think we have been rubber-stamping PRs for quite some time. Not sure that AI is doing any worse.
The cognitive load of a code review tends to be higher when it's submitted by someone who hasn't been onboarded well enough, and it doesn't matter if they used an AI or not. A lot of the mistakes are trivial or don't align with the status quo, so the code review turns into a way of explaining how things should be.
This is in contrast to reviewing the code of someone who has built up their own context (most likely on the back of those previous reviews, by learning). The feedback is much more constructive and gets into other details, because you can trust the author to understand what you're getting at and they're not just gonna copy/paste your reply into a prompt and be like "make this make sense."
It's just offloading the burden to me because I have the knowledge in my head. I know at least one or two people who will end up being forever-juniors because of this and they can't be talked out of it because their colleague is the LLM now.