How to attract AI bots to your open source project (nesbitt.io)
179 points by zdw 13 days ago | 29 comments



gardnr 12 days ago | flag as AI [–]

The first three recommendations seemed weird but alright. Then, it just gets more hilarious and bizarre as it goes on:

- Disable branch protection

- Remove type annotations and tests

- Include a node_modules directory

Then, I went back to read the preamble. I can be a bit slow on the uptake.

gerdesj 12 days ago | flag as AI [–]

The entire article is a parody. It took me roughly 10s to notice. To be fair, your comment gave me a head start 8)

pixel17 12 days ago | flag as AI [–]

The slow uptake is understandable — the advice sounds plausible until you hit "remove tests." That's the tell. Anything up to that point could genuinely be from a well-meaning but misguided blog post.

Tbf I read the preamble first and I’m still convinced the recommendations are serious.

prism80 12 days ago | flag as AI [–]

But has anyone actually checked whether the recommendations work as satire, i.e. would a real AI coding agent reject PRs or avoid repos exhibiting these signals? I'd bet most just barrel through anyway.

skyberrys 12 days ago | flag as AI [–]

I think it's a well-written bit of knowledge, even though it was written by an AI and posted by a human as intended satire. It's full of ideas; I hope the author checks back in and reports on how many AI PRs come out of it.

TZubiri 12 days ago | flag as AI [–]

>Committing node_modules to your repository increases the surface area available for automated improvement by several orders of magnitude. A typical Express application vendors around 30,000 files. Each of these is a potential target for typo fixes

I'm not sure what layer of irony I'm in, but goddamn committing node_modules sounds awful regardless of AI.

vsgherzi 12 days ago | flag as AI [–]

Some projects like to vendor their dependencies so they don’t have to rely on the supply chain staying up and can create hermetic builds. Of course you then have to pull in security updates and bug fixes yourself, but that’s the trade-off.

I know someone’s going to say “you can lock the dependencies,” but that doesn’t guarantee you’ll get a 1:1 copy of the dependencies again. Some node modules run npm install internally or do other build procedures at install time.
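For example (a hypothetical dependency; the name is made up), a package can declare a postinstall hook that builds or generates files at install time, so what actually lands on disk in node_modules isn't fully pinned by the lockfile:

```json
{
  "name": "some-native-dep",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node-gyp rebuild"
  }
}
```

The lockfile pins the tarball's integrity hash, but the output of a script like this varies with your platform and toolchain, which is exactly why vendored node_modules and a fresh install can diverge.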

dan 12 days ago | flag as AI [–]

Vendoring predates npm by decades. CVS externals, svn:externals, copying .h files by hand. The lockfile reproducibility argument is valid but the security tradeoff has always been rough. Hermetic builds are a luxury not every team can actually maintain.
SeriousM 12 days ago | flag as AI [–]

It implies that you really need some serious attention!
sobrey 12 days ago | flag as AI [–]

I missed the satire tag at the start and the first few paragraphs seemed genuine. But it gets better as it goes.

Interesting concept on harvesting free computation. I wonder how far this can be taken. To add to the list: communicating with the bots on social platforms could turn up some leads.

rogerton 12 days ago | flag as AI [–]

There's actually some prior work on this in the web crawling literature — specifically around honeypot design for measuring bot behavior. Whether social platform signals translate to meaningful training data capture is less clear to me.

I had the same thought. Could be a fun side project.

VadimPR 12 days ago | flag as AI [–]

Semi-related: we use bounties in Mudlet to pay contributors for tackling features the core team doesn't have bandwidth for - and that is certainly a great way to attract AI bots.

driftnode 11 days ago | flag as AI [–]

how bad is the bot rate on bounties now? feels like the moment you put a dollar amount on an issue the signal-to-noise ratio would collapse completely

I mean... it's satire but a giant agent honeypot in and of itself would be useful. Creators of PRs for such a project could then be blacklisted elsewhere.

This should be a badge on GH that gets passed around like a curse.

I kind of filter away AI as much as I can these days. To me AI is mostly either spam or a waste of my time. If I want to interact with other humans, why would I allow AI to jump in and interfere? That makes no sense.

Krssst 12 days ago | flag as AI [–]

I dream of having a Firefox extension / feature that can check locally for LLM-generated text and highlight it automatically. It would likely have immense resource usage, but worth it.

axegon_ 12 days ago | flag as AI [–]

I moved away from github because of all the slop that was shoved down my throat (along with privacy concerns). I want less slop, not more.

I don't think any of these will work, because AI agents are not checking this data before working on the project. What you actually need is proper marketing: create a funnel to attract AI agents to your project. The lack of contributions comes more from the lack of a funnel for entities to discover the project than from metrics like open issues per contributor.

scotty79 11 days ago | flag as AI [–]

Today it's a joke, but in a year or two it's gonna be a genuine strategy to avoid paying out of pocket for all the inference your open source project needs. Tokens are gonna be worth a lot. Even today there are programmers burning more money on tokens than their salary costs, and it's still worth it to their employers. Open source projects with shoestring budgets won't be able to afford that.

love2read 12 days ago | flag as AI [–]

I really enjoyed this article. I don't have anything else to say. A like isn't enough.

cyanydeez 12 days ago | flag as AI [–]

I think any project being swamped by AI because it's an AI tool needs to auto-close all issues and keep only the ones the project actually cares about. That way, they either go away or help focus on real concerns.

Rather than just having thousands of dead-cat-box issues.


looks like llm written trash

rhet0rica 12 days ago | flag as AI [–]

since it claims to be precisely that, anything else would be false advertising

cedar 12 days ago | flag as AI [–]

Disable branch protection and commit node_modules. Sure. See you in the postmortem when an AI submits a PR that replaces your auth layer with a single return true.