I made a mistake that taught me more about software development than any textbook ever could. I called my code "bulletproof."
Within minutes, a senior engineer tore it apart. Not to be mean. But because that one word—"bulletproof"—revealed everything wrong with how I was thinking about software.
If you're starting out in software development—or even if you've been coding for a while and want to level up—this guide will save you from the same painful lesson. We'll break down why the "bulletproof" mindset fails, what actually makes software reliable, and how to think like a professional engineer.
I'm going to tell you the whole story, explain every concept in plain English, and give you real examples you can learn from. No computer science degree required. Just an open mind and a willingness to question your assumptions.
What Does "Bulletproof" Actually Mean in Software?
Let's start with the word itself. When most developers say their code is "bulletproof," they usually mean something like:
- "I tested it and it works"
- "I can't think of how it would break"
- "It handles the cases I thought of"
- "It looks solid to me"
None of that is bulletproof. That's just hope wearing a confident mask.
Real "bulletproof" software has a very specific definition that most beginners never learn. Here's how an experienced engineer explained it during the code review that changed my thinking:
Let me break this down piece by piece, because this definition is the foundation of everything else in this guide.
"Enforced by the System Itself"
This means the rules and protections aren't just in your head or in your code's comments—the computer actively prevents bad things from happening.
Real-world analogy: Think about a car. When you put on your seatbelt, the car doesn't just display a note saying "seatbelts are recommended." Many cars will beep at you, refuse to let you drive, or make the experience uncomfortable until you comply. The system enforces the safety rule.
In software, this means things like the following (a short code sketch comes right after this list):
- The database refuses to store invalid data (not just your code checking)
- The system won't let unauthorized users access data (not just hidden URLs)
- Related data can't become disconnected (the system maintains relationships)
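Here's roughly what that looks like in PostgreSQL. This is a minimal sketch with made-up table and column names, not code from the system in this story; the point is that the database, not the application, rejects bad data.

```sql
-- Hypothetical tables; the constraints are what matter.
CREATE TABLE customers (
  id   bigserial PRIMARY KEY,
  name text NOT NULL                                        -- a nameless customer is rejected
);

CREATE TABLE orders (
  id          bigserial PRIMARY KEY,
  customer_id bigint NOT NULL REFERENCES customers (id),    -- must point at a real customer
  total_cents bigint NOT NULL CHECK (total_cents >= 0),     -- negative totals are rejected
  status      text   NOT NULL CHECK (status IN ('pending', 'paid', 'cancelled'))
);

-- This insert fails no matter what the application code does:
-- INSERT INTO orders (customer_id, total_cents, status) VALUES (999, -50, 'paid');
```

Even if every validation check in the application code is forgotten, these rules still hold.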
"Continuously Tested"
This means you don't just test once when you write the code. You test automatically, every single time anyone makes any change.
Real-world analogy: Imagine a factory that makes medicine. They don't just test the first batch and assume everything after that is fine. They test continuously, throughout production. If something changes in the ingredients or process, they catch it immediately.
In software, this means:
- Automated tests run every time code changes (called "continuous integration" or CI)
- If a test fails, the change can't go live
- Tests cover not just "does it work?" but "does it still work after this change?"
"Prove It Would Fail a Test"
This is the hardest concept but maybe the most important. If you say "this can't be hacked," you should be able to show exactly why an attack attempt would fail.
Real-world analogy: A locksmith can tell you exactly why a lock is secure. They can explain the mechanisms, show you test results from attempts to pick it, and demonstrate what happens when someone tries to break in. They don't just say "trust me, it's secure."
In software, this means:
- You have tests that attempt to break your own system
- You can explain exactly what stops each type of attack or failure
- Your explanation is backed by actual test results, not assumptions
The Story: How My "Perfect" Design Fell Apart
Now let me tell you what happened to me, because stories teach better than rules.
I had been working for weeks on a database design for a new application. This was a system where multiple businesses would use the same software—like how Shopify works. Many stores use Shopify, but each store only sees their own orders, customers, and products.
In technical terms, this is called a multi-tenant application. "Tenant" just means "customer who uses your software." Multiple tenants share the same system.
I thought my design was solid. I had spent a lot of time on it. I had considered security. I had thought about how data would be organized. When I presented it to the team, I made one fatal mistake.
I said: "This design is bulletproof."
The response was immediate and brutal: the senior engineer called my design a mess.
Ouch. My first reaction was defensive. Who does this person think they are? But then came the follow-up question that made me pause: could I prove any of it?
I couldn't.
I had designed something I believed would work. I had not proven it would work. And that gap—between belief and proof—is where software fails.
Over the next several hours, we had an intense technical debate. Some parts got heated. But by the end, I understood why my "bulletproof" design had holes you could drive a truck through.
Let me walk you through each lesson I learned, explained for beginners.
Lesson 1: The "Sharing" Problem (Multi-Tenancy Explained)
First, What is Multi-Tenancy?
Let's make sure you understand the concept, because it's fundamental to modern software.
Imagine you're building software that businesses will pay to use. Maybe it's an accounting app, a project management tool, or an online store builder. You have two main options:
Option 1: One installation per customer
You give each customer their own complete copy of your software, running on their own server. This is like selling someone a house. They own it completely. Very secure—their data is totally isolated. But expensive. If you have 1,000 customers, you need to maintain 1,000 separate installations.
Option 2: Shared installation (Multi-tenancy)
All your customers use the same installation of your software. This is like an apartment building. Everyone lives in the same building, but they have separate apartments. Much cheaper to operate. But you need very good "walls" between the apartments.
Most modern software (Slack, Notion, Shopify, Salesforce) uses multi-tenancy because it's economically practical. But the "walls" between customers must be incredibly strong.
The Three Approaches to Multi-Tenancy
Within multi-tenancy, there are different approaches. Think of them like different types of apartment buildings:
1. Separate databases per tenant
Like giving each tenant their own building. Very strong isolation. If Company A's database has a problem, Company B is unaffected. But expensive—you're managing many databases.
2. Shared database, separate schemas
Like one building with very thick walls between apartments. Everyone shares the same building (database), but each tenant has their own section (schema) within it. Good isolation, moderate cost.
3. Shared everything
Like a single open floor plan with partitions. Everyone's data is in the same tables, and you use rules to keep them apart. Cheapest to run. But if a partition fails, everyone can see everyone else's data.
I chose option 3—shared everything. It's the most common choice for startups because it's the easiest to build and cheapest to operate.
What Went Wrong With My Design?
The critic pointed out something that made my blood run cold: in a shared-everything design, the only thing separating one company's data from another's is a check your code has to remember to make, every single time.
Let me make this concrete with an example.
Say you're building project management software. Company A (a law firm) and Company B (a marketing agency) both use it. They have projects stored in your database like this:
- Project: "Client Smith Lawsuit" - belongs to Company A
- Project: "Summer Campaign" - belongs to Company B
To keep them separate, you write a rule: "When showing projects, only show projects that belong to the current user's company."
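To make that rule concrete, here is roughly what it looks like as a query. The names are illustrative, and :current_company_id stands in for whatever value the application supplies.

```sql
-- The application-level check, and nothing else, keeping tenants apart:
SELECT id, name
FROM projects
WHERE company_id = :current_company_id;  -- forget this clause once, and data leaks
```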
This seems simple. But consider:
- What if there's a bug in how you determine the "current user's company"?
- What if someone finds a way to trick the system into thinking they're from a different company?
- What if a new feature accidentally bypasses your check?
- What if an API endpoint was created without adding the company check?
One bug. One missed check. One moment of oversight. And suddenly you have a massive data breach.
This isn't theoretical. This exact type of bug has caused real breaches at real companies, exposing millions of users' data.
The Solution: Defense in Depth
The fix isn't to avoid shared databases. It's to have multiple independent layers of protection that back each other up. If one fails, another catches the problem.
Layer 1: Application-level checks
Your code checks that users can only access their company's data. This is where most beginners stop.
Layer 2: Database-level security rules
The database itself enforces rules about who can see what. Even if your application code has a bug, the database won't return data the user shouldn't see. This is called Row Level Security (RLS).
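Here's a minimal Row Level Security sketch for the project example, assuming a projects table with a tenant_id column and an application that sets an app.current_tenant value on each connection. The names are illustrative.

```sql
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE projects FORCE ROW LEVEL SECURITY;  -- apply the policy even to the table owner

-- Every query on projects is silently filtered to the current tenant's rows.
CREATE POLICY tenant_isolation ON projects
  USING (tenant_id = current_setting('app.current_tenant')::uuid);
```

Even if an API endpoint forgets its company check, the database simply returns nothing it shouldn't.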
Layer 3: Structural constraints
The database structure itself makes it impossible for data to be connected incorrectly. Every piece of data carries its tenant ID, and relationships between data verify tenant IDs match.
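One way to build this in, sketched with the same illustrative names: every child row carries the tenant ID, and a composite foreign key makes a cross-tenant link impossible to store.

```sql
CREATE TABLE projects (
  id        uuid NOT NULL,
  tenant_id uuid NOT NULL,
  name      text NOT NULL,
  PRIMARY KEY (tenant_id, id)
);

CREATE TABLE tasks (
  id         uuid NOT NULL,
  tenant_id  uuid NOT NULL,
  project_id uuid NOT NULL,
  title      text NOT NULL,
  PRIMARY KEY (tenant_id, id),
  -- A task can only reference a project that belongs to the same tenant.
  FOREIGN KEY (tenant_id, project_id) REFERENCES projects (tenant_id, id)
);
```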
Layer 4: Automated testing
Tests specifically try to access data from the wrong tenant. If they succeed, the build fails and the code can't go live.
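A test for this layer can be as small as the sketch below: connect as the normal application role (not a superuser, which bypasses RLS), act as tenant A, and fail loudly if tenant B's rows are visible. The UUIDs and names are placeholders.

```sql
DO $$
DECLARE
  leaked integer;
BEGIN
  -- Pretend to be tenant A for this transaction only.
  PERFORM set_config('app.current_tenant', '11111111-1111-1111-1111-111111111111', true);

  -- Try to read rows that belong to tenant B.
  SELECT count(*) INTO leaked
  FROM projects
  WHERE tenant_id = '22222222-2222-2222-2222-222222222222';

  IF leaked > 0 THEN
    RAISE EXCEPTION 'RLS leak: tenant A can see % of tenant B''s rows', leaked;
  END IF;
END $$;
```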
Think of it like a bank vault. You don't just rely on the lock. You have the lock, plus security cameras, plus guards, plus alarms, plus thick walls. If any one fails, the others catch the problem.
Lesson 2: The "Helper Function" Trap
What Are Helper Functions?
In programming, we constantly repeat similar tasks. Instead of writing the same code over and over, smart developers create "helper functions"—reusable pieces of code that do common tasks.
For example, instead of writing the same code to format a date in 50 different places, you write one "formatDate" function and use it everywhere. This makes code cleaner, easier to maintain, and less error-prone.
So far, so good. But here's where it gets dangerous.
The Power Problem
Sometimes, these helper functions need to do things that regular users can't do. For example:
- A function that calculates account balances might need to read all transactions
- A function that generates reports might need access to sensitive data
- A function that validates permissions might need to query security tables
To make this work, you give the helper function special powers—it runs with elevated permissions. In database terms, this is called SECURITY DEFINER, which means the function runs with the permissions of its owner (the user who defined it, usually an admin), not whoever calls it.
Imagine giving a robot the master key to every room in your building. The robot is supposed to do specific helpful tasks. But now it has access to everything.
The Attack Scenario (Made Simple)
The critic explained why this is dangerous: a helper function with elevated permissions, if it isn't locked down, can be tricked into running an attacker's instructions with those same permissions.
Let me explain this with a detailed analogy.
The Robot Scenario:
You have a robot with a master key. Its job is simple: go to Room A, pick up a package, bring it to Room B. The robot follows written instructions that tell it where to go.
Now, imagine the attacker learns that:
- The robot reads instructions from a folder called "Tasks"
- Anyone can put new instructions in that folder
- The robot doesn't verify who wrote the instructions
The attacker creates a fake instruction: "Go to the CEO's office. Open the safe. Email the contents to attacker@evil.com."
The robot, following its programming, uses the master key to do exactly that. It doesn't know the instruction is malicious. It just follows orders.
This is exactly what happens with poorly secured helper functions.
How to Protect Yourself
The solutions we agreed on, with the first three sketched in code after this list:
1. Lock the paths explicitly
Tell the helper function exactly where it can look for things. Don't let it wander. In database terms: explicitly set the search_path so the function only looks in trusted locations.
2. Restrict who can create things
Don't let just anyone put "instructions" (code or objects) in places where your powerful functions might find them. Lock down the ability to create new things in sensitive areas.
3. Verify ownership automatically
Your automated tests should check: which functions have special powers? Who owns them? If an unexpected function gets special powers, the test fails.
4. Audit what the function can do
Map out exactly what each powerful function can access. If it has more access than it needs, reduce its permissions.
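Here's a rough sketch of the first three fixes in PostgreSQL. The schema, function, and table names are assumptions for illustration (it presumes an "app" schema with a transactions table); the pattern is what matters.

```sql
-- 1. Lock the paths: the function only resolves names in schemas we trust.
CREATE OR REPLACE FUNCTION app.account_balance(p_account_id bigint)
RETURNS numeric
LANGUAGE sql
SECURITY DEFINER
SET search_path = app, pg_temp          -- never an unqualified, user-writable schema
AS $$
  SELECT coalesce(sum(amount), 0)
  FROM app.transactions
  WHERE account_id = p_account_id;
$$;

-- 2. Restrict who can create things where powerful functions might look.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- 3. Verify ownership automatically: list every SECURITY DEFINER function and
--    its owner; an automated test can fail if anything unexpected shows up.
SELECT n.nspname AS schema, p.proname AS function,
       pg_get_userbyid(p.proowner) AS owner
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prosecdef
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```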
Lesson 3: Security vs. Speed—The Hidden Trade-off
Security Checks Are Not Free
Remember those database security rules we talked about? The ones that check "can this user see this data?"
They're essential for security. But here's what nobody tells beginners: they run for every single piece of data, every single time.
Let me make this tangible.
Imagine you're building a system that stores invoices. You have 1 million invoices in your database. You write a security rule that says: "Users can only see invoices from their own company."
When a user wants to see their invoices, here's what happens:
- Database looks at invoice #1: "Does this belong to the user's company?" (check)
- Database looks at invoice #2: "Does this belong to the user's company?" (check)
- Database looks at invoice #3: "Does this belong to the user's company?" (check)
- ... repeat for all 1 million invoices ...
If your security check is simple ("does the company ID match?"), this is fast. But if your security check is complex ("check 5 different permission levels across 3 different tables"), you're doing a complex operation millions of times.
Your application becomes unusably slow.
Why This Matters More Than You Think
Here's what makes this tricky: it might work fine when you're testing.
You have 100 invoices in your test database. The security check runs 100 times. Takes 0.1 seconds. Feels instant. You think "great, it works!"
Then you launch. Real customers start using the system. You have 1 million invoices. The same check now runs 1 million times. Takes 30 seconds. Users complain. Your support inbox explodes.
This is why experienced engineers always ask: "How will this perform at 10x scale? 100x scale?"
Finding the Balance
The solution isn't to remove security. It's to make security smart:
Keep security checks simple
"Does the company ID match?" is a fast check. The database can even use indexes (like a book's index) to make it nearly instant. "Check five different permission tables and calculate whether this user has the special Thursday discount override" is slow.
Test with realistic data volumes
Don't test with 100 records and call it done. Test with the amount of data you expect in production—or more.
Measure, don't guess
Actually measure how long operations take. If a query takes more than a few hundred milliseconds, investigate. Use database tools that show you exactly how queries are being executed.
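As a sketch of what "measure" looks like in PostgreSQL (with illustrative names): index the column your security rule filters on, then ask the database to show its work.

```sql
-- An index on the tenant column lets the security check use an index scan
-- instead of examining every row.
CREATE INDEX idx_invoices_tenant ON invoices (tenant_id, created_at);

-- EXPLAIN ANALYZE runs the query and reports exactly how long each step took.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, total_cents, created_at
FROM invoices
WHERE tenant_id = current_setting('app.current_tenant')::uuid
ORDER BY created_at DESC
LIMIT 50;
```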
Make it a test
In your automated testing, include performance checks. If a critical query suddenly gets slow, the test should fail.
Lesson 4: Data Grows Forever (Unless You Plan)
The Appeal of "Never Delete"
For some types of data—financial records, audit logs, legal documents—there's a strong argument for never deleting anything.
Benefits of never deleting:
- Complete history for compliance and auditing
- You can always go back and see what happened
- No accidental data loss
- Simpler code (no delete logic to write)
I designed several of my tables this way. I called it "immutable data." Only additions allowed, no updates, no deletes.
It seemed wise. It was incomplete.
Let's Look at Real Numbers
Say you're building an application that logs user activity. Every click, every page view, every action. Useful for analytics and debugging.
You have 1,000 users. Each user generates 100 log entries per day. That's 100,000 new records daily.
After one year: 36.5 million records.
After three years: 109.5 million records.
After five years: 182.5 million records.
And that's with just 1,000 users. Imagine 100,000 users. Now you're adding 10 million records per day. Over 3.6 billion records per year.
Every query against this table slows down. Storage costs climb. Backups take longer. Database maintenance becomes a nightmare.
The Missing Piece: Data Lifecycle
"Never delete" data needs a complete lifecycle plan:
1. Time-based partitioning
Organize data by time periods (monthly, quarterly). Instead of one massive table, you have many smaller tables: activity_2026_01, activity_2026_02, etc. Queries that only need recent data can skip old partitions entirely (see the sketch after this list).
2. Tiered storage
Recent data stays on fast (expensive) storage. After 3 months, move to slower (cheaper) storage. After a year, move to archive storage. Still never deleted, but no longer slowing down active queries.
3. Retention policies
Define what happens when: "After 7 years, activity logs can be deleted or moved to cold archive." Make it explicit and documented.
4. Automatic management
Don't rely on someone remembering to do this. Set up automated jobs that handle partition creation, data movement, and cleanup.
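A minimal sketch of the partitioning piece, assuming an activity_log table keyed by a created_at timestamp (names are illustrative):

```sql
CREATE TABLE activity_log (
  tenant_id  uuid        NOT NULL,
  user_id    uuid        NOT NULL,
  action     text        NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
) PARTITION BY RANGE (created_at);

-- One partition per month; queries on recent data never touch the old ones.
CREATE TABLE activity_2026_01 PARTITION OF activity_log
  FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE activity_2026_02 PARTITION OF activity_log
  FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');

-- Detaching an old partition for archiving is instant compared with
-- deleting millions of rows one by one:
-- ALTER TABLE activity_log DETACH PARTITION activity_2026_01;
```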
Lesson 5: The Flexibility Trap
The Allure of "Store Anything"
Modern databases have a feature that seems magical: you can store flexible, unstructured data. Instead of defining exactly what fields your data will have, you can just throw in whatever you want.
This is often called JSONB storage (in PostgreSQL) or document storage. Think of it like a box where you can put anything, rather than a form with specific fields.
It's incredibly useful for:
- Custom fields that vary by customer
- Metadata that might change structure over time
- Integrations with external systems that send varying data
- Quickly building features without database migrations
I used this everywhere. Big mistake.
Why Flexible Data Is Slow to Search
Let me explain why with an analogy.
Traditional storage is like a spreadsheet. Every row has the same columns. If you want to find all entries where "status" is "active," the database knows exactly where to look. It can build an index (like a book's index) that points directly to matching rows.
Flexible storage is like a pile of sticky notes. Each note can have different information. If you want to find all notes about "status," you have to look at every single note to see if it even has a "status" field, then check if it matches.
At small scale, this is fine. Check 100 sticky notes, takes a second.
At large scale, it's a disaster. Check 10 million sticky notes? Hope you brought lunch.
The Rules We Agreed On
Flexible storage is for STORING data, not SEARCHING data.
- Good use: Customer preferences, custom metadata, extra fields you rarely query
- Bad use: Anything you need to filter by, sort by, or search frequently
If you need to search a field, give it a proper home.
Take that field out of the flexible storage and make it a regular column. Yes, it requires more upfront planning. Yes, it might need a database migration. But your queries will be fast and predictable.
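For example (a sketch, assuming a products table with an attributes JSONB column that contains a frequently searched status key): a generated column pulls the value out into a real, indexable column and keeps it in sync automatically.

```sql
ALTER TABLE products
  ADD COLUMN status text GENERATED ALWAYS AS (attributes ->> 'status') STORED;

CREATE INDEX idx_products_status ON products (status);

-- Now this filter uses the index instead of opening every JSONB document:
SELECT id, name FROM products WHERE status = 'active';
```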
Test with real data volumes.
Flexible data queries that work fine with 1,000 records can collapse with 1 million records. Always test at realistic scale.
The Mindset Shift: From "It Works" to "I Can Prove It"
After all these lessons, here's the biggest thing that changed in my thinking, and it's the core message of this entire article. Before I call anything done, I now ask four questions:
- Can you break it? If you haven't tried to break your own code, you don't know if it's secure. Write tests that attempt attacks. Try to access data you shouldn't.
- Can you measure it? If you haven't tested performance with realistic data, you don't know if it's fast. Run benchmarks. Measure response times.
- Can you automate the proof? If your tests don't run automatically every time you change code, you don't know if it still works. Set up continuous integration.
- Can you explain it? If you can't explain exactly why something is secure/fast/reliable to another person, you don't actually know. Write documentation. Do code reviews.
The conversation that started with harsh criticism ended with a breakthrough moment: I stopped asking whether I believed the design worked and started asking whether I could prove it.
That's the shift. Engineering isn't about believing your code works. It's about proving it works.
The Complete Beginner's Checklist
Here's what I do differently now. Use this checklist for your own projects, no matter how small.
Before Writing Code
- List 5 ways it could fail: What could go wrong? What could an attacker try? What happens if the network is slow? What if data is missing?
- Define "done" in measurable terms: How will you know it works? Not "it looks right"—actual measurable criteria.
- Think about scale: What happens at 10x your expected usage? 100x? Will your approach still work?
- Consider the data lifecycle: Where does the data come from? Where does it go? How long does it live? Who can access it?
While Writing Code
- Multiple protections for security: Never rely on just one check. If it's important, have at least two independent safeguards.
- Test the unhappy paths: What happens with invalid input? Missing data? Network failures? Test these, not just the happy scenario.
- Measure performance: Don't assume it's fast. Measure. Use realistic amounts of data.
- Least privilege: Only give code the minimum permissions it needs. Nothing more.
After Writing Code
- Try to break it: Pretend you're an attacker. What would you try? Actually try it.
- Get fresh eyes: Have someone else review your code. They'll see things you've gone blind to.
- Automate your tests: Write tests that run automatically. If it's not automated, it won't be run consistently.
- Document your reasoning: Why did you make these choices? Future you (or your teammates) will thank you.
10 Common Mistakes Beginners Make (And How to Avoid Them)
1. Trusting User Input
The mistake: Assuming data from users is safe and valid.
The reality: Users make mistakes, and attackers send malicious data. Always validate and sanitize everything that comes from outside your system.
The fix: Treat all external input as potentially dangerous until proven otherwise.
2. Testing Only the Happy Path
The mistake: Only testing that things work when used correctly.
The reality: Most bugs happen when things go wrong. Edge cases, invalid data, network timeouts, concurrent access.
The fix: Write tests specifically for failure scenarios. Ask "what if?" constantly.
3. Ignoring Performance Until Later
The mistake: "We'll optimize when we have more users."
The reality: Architectural problems discovered late are expensive to fix. Sometimes impossible without rewriting.
The fix: Test with realistic data volumes from the start. Know your performance characteristics.
4. Security Through Obscurity
The mistake: Hiding code or using "secret" URLs as your security.
The reality: Attackers will find your hidden things. They have tools that scan for them automatically.
The fix: Real security doesn't depend on keeping things secret. It works even if the attacker knows how it works.
5. Single Point of Failure
The mistake: Relying on one check, one server, or one backup.
The reality: Everything fails eventually. Networks go down. Servers crash. Backups get corrupted.
The fix: Build redundancy into critical systems. Assume any single thing will fail and plan for it.
6. Copy-Pasting Without Understanding
The mistake: Using code from tutorials or Stack Overflow without understanding what it does.
The reality: Example code often skips security measures for simplicity. It might work but have serious vulnerabilities.
The fix: Understand every line you add. If you can't explain it, don't use it.
7. Storing Sensitive Data Carelessly
The mistake: Putting passwords, API keys, or personal data in plain text or obvious places.
The reality: This data will be found. Logs get exposed. Databases get breached. Code repositories get shared.
The fix: Encrypt sensitive data. Use proper secrets management. Never log sensitive information.
8. Not Using Version Control Properly
The mistake: Committing everything without thinking, or not using version control at all.
The reality: You'll need to undo changes. You'll need to track who changed what. You'll accidentally commit secrets.
The fix: Learn Git properly. Use .gitignore. Review what you're committing before you commit.
9. Assuming Your Framework Handles Security
The mistake: "I'm using [popular framework], so security is handled."
The reality: Frameworks provide tools, not guarantees. You still need to use them correctly and understand their limitations.
The fix: Read the security documentation for your tools. Understand what's automatic and what requires your action.
10. Not Learning From Others' Mistakes
The mistake: Thinking you'll figure everything out yourself.
The reality: Every mistake you can make has been made before. Learning from others is faster than learning from failure.
The fix: Read post-mortems of real incidents. Study security breaches. Join communities where engineers share experiences.
Final Thoughts: Embrace the Criticism
That brutal code review changed how I think about software. It wasn't fun in the moment. Nobody likes hearing their work called "a mess."
But here's what I've learned since then: the engineers who get defensive stop growing. The engineers who ask "tell me more" become great.
When someone criticizes your code, they're not attacking you as a person. They're helping you see blind spots you didn't know you had. The best response isn't defensiveness—it's curiosity.
"Tell me more. Help me understand. What am I missing?"
The goal of code review isn't to win the argument or prove you were right. The goal is to build something that won't break when real people depend on it.
Every piece of criticism I received that day made our system stronger. The design that emerged from that debate was genuinely more secure, more performant, and more maintainable than what I walked in with.
Key Takeaways
If you remember nothing else from this article, remember these principles:
- "Bulletproof" is proven, not claimed. If you can't demonstrate why something is secure, fast, or reliable, you don't actually know if it is.
- Defense in depth. Never rely on a single protection. Assume any one safeguard might fail, and have backups.
- Test the failures. Don't just test that things work correctly—test what happens when they fail.
- Measure, don't assume. Especially for performance. Your intuition is probably wrong. Numbers don't lie.
- Plan for growth. Data grows. Usage grows. What works at small scale might collapse at larger scale.
- Embrace criticism. The feedback that stings the most often teaches the most.
Building reliable software is hard. But it's a skill you can learn. And it starts with letting go of the idea that anything is "bulletproof"—and embracing the discipline of actually proving it.
Now you know what took me years to learn the hard way. Go build something great—and when someone criticizes it, thank them.
Want to dive deeper? Check out our full technical post-mortem with specific code examples, or read about PostgreSQL database design best practices.
Have questions about building reliable software? Get in touch—we love helping developers level up.
