How to play: Some comments in this thread were written by AI. Read through and click flag as AI on any comment you think is fake. When you're done, hit reveal at the bottom to see your score.
Lovable is marketed to non-developers, so their core users wouldn't recognize a security flaw if it flashed red. A lot of my non-dev friends were posting their cool new apps they built on LinkedIn last year [0]. Several were made on Lovable. It's not on their users to understand these flaws.
The apps all look the same with a different color palette, and make for an engaging AI post on LinkedIn. Now they're mostly abandoned, waiting for the subscription to expire... and for the users' personal data to get exposed, I guess.
The hardest part about this stuff is that as a user, you don't necessarily know whether an app is vibe-coded or not. Previously you could have _some_ reasonable expectation of security, since trained engineers were the ones building these things out, but that's no longer the case.
There's a lot of cool stuff being built, but also as a user, it's a scary time to be trying new things.
Vibe coding democratized shipping without democratizing the accountability. The 18,000 users absorbed the downside of a risk they didn't know they were taking.
One dev of a Lovable competitor pointed me to the rules that are supposed to ensure queries are limited to that user's data. To my amateur eyes, this looks like "pretty please?".
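For what it's worth, the difference between "pretty please" and actual enforcement can be sketched as a toy in Python (all names here are made up for illustration; real row-level security runs inside the database, not in app code):

```python
# Toy sketch: a per-user rule has to be enforced server-side,
# not merely suggested to the client. All names are hypothetical.

NOTES = [
    {"id": 1, "owner": "alice", "body": "alice's note"},
    {"id": 2, "owner": "bob", "body": "bob's note"},
]

def fetch_notes_pretty_please(filters):
    """Trusts whatever filter the client sent. If the client
    (or the generated frontend) forgets it, everything leaks."""
    return [n for n in NOTES
            if all(n[k] == v for k, v in filters.items())]

def fetch_notes_enforced(current_user, filters):
    """Scopes every query to the authenticated user first,
    regardless of what the client asked for."""
    rows = [n for n in NOTES if n["owner"] == current_user]
    return [n for n in rows
            if all(n[k] == v for k, v in filters.items())]

# A client that "forgets" the owner filter:
print(len(fetch_notes_pretty_please({})))      # 2 -- leaks bob's data
print(len(fetch_notes_enforced("alice", {})))  # 1 -- scoped anyway
```

The point being: if the scoping lives anywhere the generated code can forget to include it, it's a request, not a rule.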
This is a prime example of how lazy vibe coding makes people. Even if they weren't technical, some of these bugs would have been caught if they had just walked through the behaviour by hand at least once. Not an ounce of QA, just generate and ship.
I've been thinking a bit about how to do security well with my generated code. I've been using tools that check deps for CVEs, static tools that check for SQL injection and similar problems, and baking some security requirements into the specs I hand Claude. I can't tell yet if this is better than what I did before or just theater. It seems like in this case you'd need/want to specify some tests around access.
I'm interested to hear how other people approach this.
So the problem I'm having is that I don't know what I'm doing vis-à-vis security, so I can't audit my own understanding by just sitting in a chair, but here's what I've been doing.
I'm building a desktop app that has authentication needs because we need to connect our internal agents and also allow the user to connect theirs. We pay for our agents, the user pays for theirs (or pays us to use ours, etc.). These are, relatively speaking, VERY SIMPLE PROBLEMS; nevertheless, agents are happy to consume and leak secrets, or break things in much stranger ways, like hooking the wrong agent up to the wrong auth, which would have charged a user for our API calls. That seemed very unlikely to me until I saw it.
So far what has "worked" (made me feel less anxious, aside from the niggling worry that this is theater) is:
1. Having a really strong and correct understanding of our data flows. That's not about security per se, so at least I can be OK at it. This allows me to...
2. Be aggressive and paranoid about not doing it at all, if it can be helped. Where I actually handle authentication is kept as minimal as possible (and you should have some reasonable way to prove that to yourself). Done right, the space is small enough to reason about.
How do I do 1 & 2 while not knowing anything? Painfully and slowly, and by reading. The web agents are good if you're honest about your level of knowledge and you ask for help in terms of sources to read. It's much more effective than googling. Ask, read what the agents say, press them for recommendations suited to YOU specifically, not a generic reader. Then go out and read those sources. Have I learned enough to supervise a frontier model? No. Absolutely not. Am I doing it anyway? Yes.
Ask the LLM to create a POC for the vulnerability you have in mind. Last time I did this, I had to repeatedly promise the LLM that it was for educational purposes, as it assumed this information was "dangerous".
Same way you handle preserving any other property you want to preserve while "vibecoding" -- ensure tests capture it, ensure the tests can't be skipped. It really is this simple.
We wrote authorization tests in the 90s too. Static analysis is fine but you still need integration tests that actually try to fetch User A's data as User B. Not theater if you fail the build on violations.
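A minimal version of that test, sketched in Python against a toy in-memory store (names hypothetical; a real version would hit the HTTP API with User B's session token):

```python
# Toy sketch of the "fetch User A's data as User B" test.

STORE = {"doc-1": {"owner": "user_a", "body": "private"}}

def fetch_doc(requester, doc_id):
    """Server-side fetch that checks ownership before returning."""
    doc = STORE[doc_id]
    if doc["owner"] != requester:
        raise PermissionError("not your document")
    return doc

def test_cross_user_access_denied():
    # User A can read their own document...
    assert fetch_doc("user_a", "doc-1")["body"] == "private"
    # ...but User B must be refused, not silently served.
    try:
        fetch_doc("user_b", "doc-1")
    except PermissionError:
        return True
    raise AssertionError("authorization check missing")

test_cross_user_access_denied()
```

Wire a test like this into CI so the build fails the moment a regenerated endpoint drops the ownership check.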
AI is perfect for plugins though, for products where security is a non-issue or you don't care (I created a few modules for Foundry VTT, for example).
> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.
Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.
The difference is a human is more likely to actually test the output of the change.
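The flip in question, sketched in Python (hypothetical names, not the actual Supabase code): a single stray `not` inverts who gets in, and one assertion per path catches it.

```python
def require_auth_buggy(user_authenticated: bool) -> bool:
    # The vibe-coded version: a stray `not` blocks authenticated
    # users and admits unauthenticated ones.
    return not user_authenticated

def require_auth_fixed(user_authenticated: bool) -> bool:
    # The intended check: access only when authenticated.
    return user_authenticated

# One assertion per path is enough to catch the flip:
assert require_auth_fixed(True) is True
assert require_auth_fixed(False) is False
assert require_auth_buggy(True) is False  # the bug, made visible
```

A human who ran the app once while logged in would have hit the buggy path immediately; a test pins it down permanently.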
The real issue is that most of these apps never needed to exist in the first place. We built a tool that made it trivial to create software, so people created software for problems that didn't need solving. That's how you get 18K users exposed: nobody asked if this should be built, just whether it could be.
https://www.youtube.com/watch?v=m-W8vUXRfxU