I dunno. The effort needed to ensure you have backups is tiny compared to the work done to create the product. And to pull a backup before deleting stuff in production only needs a smidgen of experience.
They were extremely lucky. Imagine what the boss would have said if they hadn't managed to recover the data.
Owww. The first or second paragraph of this made me cringe.
"I had just finished what I thought was a clean migration: moving our entire database from our old setup to PostgreSQL with Supabase" ... on a Friday.
Never do prod deploys on a Friday unless you have at least 2 people available through the weekend to resolve issues.
The rest of this post isn't much better.
And come on. Don't make major changes to a prod db when critical team members have signed off for a weekend or holiday.
I'm actually quite happy OP posted their experiences. But it really needs to be a learning experience. We've all done something like this and I bet a lot of us old timers have posted similar stories.
I hope the poster will learn about transactions at some point. Postgres even lets you alter the schema within a transaction.
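Something like the following works in Postgres, since DDL is transactional (sketch only; table and column names are invented):

    BEGIN;

    -- Schema changes participate in the transaction, so they roll back too.
    ALTER TABLE orders ADD COLUMN customer_email text;

    UPDATE orders o
    SET    customer_email = c.email
    FROM   customers c
    WHERE  c.id = o.customer_id;

    -- Sanity-check before committing; issue ROLLBACK instead if it looks wrong.
    SELECT count(*) FROM orders WHERE customer_email IS NULL;

    COMMIT;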
What I learned, once upon a time, is that with a database, you shouldn't delete data you want to keep. If you want to keep something, you use SQL's fine UPDATE to update it, you don't delete it. Databases work best if you tell them to do what you want them to do, as a single transaction.
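For example (hypothetical table and values), say exactly what you want in one transaction rather than deleting and re-inserting:

    BEGIN;
    -- The row never disappears; it just changes.
    UPDATE users
    SET    email = 'new@example.com'
    WHERE  id = 42;
    COMMIT;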
This is such a poorly written post, and I'm sure there are ongoing disasters waiting to happen -- I've built 3 startups and sold 2 of them and never, ever developed on production. What level of crazy is this?
>Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys.
The technical takeaway, as others have said, is to do prod deployments during business hours, when there are people around to monitor and help recover if anything goes wrong, and when plenty of working hours still lie ahead. Fridays are not that.
Who is this guy? He seems like a poser. I wouldn't be surprised if these articles are AI-generated.
I'm sorry, but there's "move fast and break things" and then there's a group of junior devs not even bothering to google a checklist of development or moving to production best practices.
Your Joe AI customers should be worried. Anyone actually using the RankBid you did a Show HackerNews on 8 months ago should be worried (particularly by the "Secure by design: We partner with Stripe to ensure your data is secure." line).
If you don't want to get toasted by some future failure where you won't be accidentally saved by a vendor, then maybe start learning more on the technical side instead of researching and writing blogspam like "I Read 10 Business Books So You Don't Have To".
This might sound harsh, but it's intended as sound advice that clearly nobody else is giving you.
this is exactly how you earn your prod stripes. dropped the db on day 3? good. now you’re officially a backend engineer.
no backups? perfect. now you'll never forget to set one up again. friday night? even better. you got the full rite of passage.
people act like this's rare. it’s not. half of us have nuked prod, the other half are lying or haven't been given prod access yet.
you’re fine. just make the checklist longer next time. and maybe alias `drop` to `echo "no"` for a while
Did I read that correctly? They’re on Supabase’s free plan in production?
We’re just getting started and even we’re on Supabase’s paid plan.
I dropped the production database at the first startup I worked at, three days after we went live. We were scrappy™ and didn’t have backups yet, so we lost all the data permanently. I learned that day that running automated tests on a production database isn’t a good idea!
Your website title is "Profitable Programming" with a blog post "How I Dropped the Production Database on a Friday Night"
That's not very profitable.
Uhh, no, the answer is not to avoid cascading deletes. The answer is to not develop directly on a production database and to have even the most basic of backup strategies in place. It is not hard.
Also, “on delete restrict” isn’t a bad policy either for some keys. Make deleting data difficult.
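A sketch of what that looks like in Postgres (names are made up):

    CREATE TABLE accounts (
        id bigint PRIMARY KEY
    );

    CREATE TABLE invoices (
        id         bigint PRIMARY KEY,
        account_id bigint NOT NULL
            REFERENCES accounts (id) ON DELETE RESTRICT
    );

    -- This now fails with a foreign key violation until the dependent
    -- invoices are handled explicitly:
    DELETE FROM accounts WHERE id = 1;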
Assuming storage cost is not a huge concern, I’m a big fan of soft deletes everywhere. Also leaves an easy “audit trail” to see who tried to delete something.
Of course, there are exceptions (GDPR deletion rules, etc.).
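A rough sketch of the pattern (Postgres, table and column names invented):

    -- Mark the row instead of removing it, and record who "deleted" it.
    ALTER TABLE documents
        ADD COLUMN deleted_at timestamptz,
        ADD COLUMN deleted_by text;

    -- "Deleting" becomes an UPDATE...
    UPDATE documents
    SET    deleted_at = now(),
           deleted_by = current_user
    WHERE  id = 42;

    -- ...and normal queries go through a view that hides deleted rows.
    CREATE VIEW live_documents AS
        SELECT * FROM documents WHERE deleted_at IS NULL;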
The “and honestly?” phrase smells like AI writing to the point I stopped there and closed the post.
Don’t fuck your database up, and do have point-in-time rollbacks. No excuses, it’s not hard. Not something to be proud of.
Echoing the other comments about just how bad the setup here is. Setting up staging/dev environments does not take so much time as to put you behind your competition. There's a vast, VAST chasm between "We're testing on the prod DB with no backups" and the dreaded guardrails and checkboxes.
That being said, I would love to see more resources about incident management for small teams and how to strike this balance. I'm the only developer working on a (small, but somehow super political/knives-out) company's big platform with large (F500) clients and a mandate-from-heaven to rapidly add features -- and it's by far the most stressed out I've ever been in my career if not life. Every incident, whether it be the big GCP outage from last week or a database crash this week, leads to a huge mental burden that I have no idea how to relieve, and a huge passive-aggressive political shitstorm I have no idea how to navigate.
i dropped the dev database once at PayPal back in 2006
This is a good story and something everyone should experience in their career even just for the lesson in humility. That said:
> Here's the technical takeaway: Never use CASCADE deletes on critical foreign keys. Set them to NULL or use soft deletes instead. It's fine for UPDATE operations, but it's too dangerous for DELETE ones. The convenience of automatic cleanup isn't worth the existential risk of chain reactions.
What? The point of cascading foreign keys is referential integrity. If you just leave dangling references everywhere your data will either be horribly dirty or require inconsistent manual cleanup.
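For instance (hypothetical schema), the cascade is what keeps the child table consistent when a parent row goes away:

    CREATE TABLE users (
        id bigint PRIMARY KEY
    );

    CREATE TABLE sessions (
        id      bigint PRIMARY KEY,
        user_id bigint NOT NULL
            REFERENCES users (id) ON DELETE CASCADE
    );

    -- Deleting a user cleanly removes its sessions. With SET NULL instead,
    -- user_id would have to be nullable and you'd accumulate orphaned rows
    -- that need manual cleanup.
    DELETE FROM users WHERE id = 1;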
As I'm sure others have said: just use a test/staging environment. It isn't hard to set up even if you are in startup mode.
Developing directly on the production database with no known backups. Saved from total disaster by pure luck. Then a bunch of happy talk about it being a "small price to pay for the lessons we gained" and how such failures "unleash true creativity". It's amazing what people will self-disclose on the internet.