Every Tool Is the Right Tool (Somewhere)
PostgreSQL is a fantastic database. So is ClickHouse. So is DynamoDB.
They’re also designed for completely different jobs.
PostgreSQL excels at transactional workloads: creating users, updating orders, maintaining referential integrity. ClickHouse excels at analytics: aggregating millions of rows across a handful of columns in milliseconds. DynamoDB excels at massive-scale key-value access with predictable latency.
None of them is “better.” They’re each the right tool for a specific class of problems. The skill isn’t picking the best technology. It’s figuring out what job you actually need done.
The Screwdriver Problem
Here’s a framing I keep coming back to: when a tool isn’t working, it’s rarely because the tool is bad. It’s usually because you’re using a screwdriver to hammer nails.
The screwdriver is a perfectly good screwdriver. It’s just not a hammer.
PostgreSQL isn’t slow. It’s slow for analytics workloads because it was never optimized for scanning millions of rows to compute aggregates. That’s not a flaw; it’s a design choice. PostgreSQL optimized for other things (transactions, consistency, flexibility) that matter more for its intended use case.
When you find yourself fighting a tool, the first question isn’t “what’s wrong with this tool?” It’s “what job was this tool actually designed for, and is that the job I’m asking it to do?”
How to Know What You Need
When evaluating whether a tool fits your situation, I think about these questions:
What problem are you actually solving?
Not the problem you might have in two years. Not the problem your investor thinks you’ll have at scale. The problem you have right now, with your current users and your current data.
Be specific. “We need to handle more traffic” is too vague. “Our checkout endpoint is timing out because the inventory check takes 3 seconds under load” is something you can actually solve.
Once you understand the specific problem, you can ask: what tools are designed to solve this?
What are the access patterns?
Different tools optimize for different access patterns:
- Lots of small reads and writes to individual records? Traditional relational databases (PostgreSQL, MySQL) or key-value stores (Redis, DynamoDB) are built for this.
- Aggregations across large datasets? Columnar databases (ClickHouse, DuckDB) or data warehouses (Snowflake, BigQuery) are built for this.
- High-throughput event streaming with ordering guarantees? Message brokers like Kafka are built for this.
- Full-text search with relevance ranking? Search engines (Elasticsearch, Typesense) are built for this.
The pattern matters more than the scale. A million rows with transactional access patterns wants a different tool than a million rows with analytical access patterns.
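To make that concrete, here's a toy sketch in pure Python: the same million records laid out row-wise (one record per entry, roughly how a row store organizes data) and column-wise (one list per field, roughly how a columnar store does). It's an analogy, not a benchmark of real databases, and the field names are invented, but it shows why an aggregate over a single column favors the columnar layout: the query touches only the data it needs.

```python
import time

N = 1_000_000

# Row layout: one record per entry, like a row store (illustrative only).
rows = [{"id": i, "amount": i % 100, "region": i % 5} for i in range(N)]

# Columnar layout: one contiguous list per field, like a columnar store.
amounts = [i % 100 for i in range(N)]

# "Analytical query": total one column across every record.
t0 = time.perf_counter()
row_total = sum(r["amount"] for r in rows)   # walks every record, plucks one field
row_secs = time.perf_counter() - t0

t0 = time.perf_counter()
col_total = sum(amounts)                     # scans just the one column
col_secs = time.perf_counter() - t0

assert row_total == col_total
print(f"row layout: {row_secs:.3f}s, columnar layout: {col_secs:.3f}s")
```

Same data, same result, very different work per query. That gap is what columnar databases exploit, and it's why the access pattern, not the row count, should drive the choice.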
What are you trading away?
Every specialized tool makes tradeoffs. Understanding what you’re giving up helps you decide if the trade is worth it.
Some examples:
Using Kafka gives you incredible throughput and durability, but adds significant operational complexity and per-message latency.
DynamoDB gives you near-infinite scale with predictable performance, but limits your query flexibility and costs more for complex access patterns.
ClickHouse gives you blazing-fast analytics, but struggles with single-row updates and point queries.
These aren’t flaws. They’re the price of optimizing for something else. The question is whether the benefits are worth that price for your situation.
Can you solve this a simpler way?
Before reaching for a specialized tool, it’s worth asking: is there a simpler solution that’s good enough?
PostgreSQL’s full-text search isn’t as powerful as Elasticsearch. But it might be sufficient for your use case, and it’s one less system to operate.
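As a sketch of what "sufficient" built-in search can look like, here is SQLite's FTS5 standing in for PostgreSQL's full-text search (chosen because it runs anywhere Python does; it assumes your SQLite build includes FTS5, which most do, and the table and documents are invented). A few lines buy you indexed search with basic relevance ranking, with no extra system to operate.

```python
import sqlite3

# SQLite FTS5 as a stand-in for a database's built-in full-text search.
# Assumes FTS5 is compiled in (true for most standard builds).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("Postgres tuning", "tuning a transactional database for write-heavy workloads"),
        ("ClickHouse intro", "a columnar database built for analytics at scale"),
    ],
)

# MATCH uses the full-text index; ORDER BY rank sorts by BM25 relevance.
hits = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank", ("analytics",)
).fetchall()
print(hits)  # [('ClickHouse intro',)]
```

What you don't get from this are typo tolerance, faceting, and tunable relevance, which is exactly where a dedicated search engine earns its keep.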
A database-backed job queue isn’t as robust as a dedicated message broker. But it might handle your workload fine with tools you already have.
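A minimal version of such a queue might look like this, with SQLite standing in for the application database (table name and payloads are invented for illustration). Each claim runs inside one transaction so two workers can't take the same job; in PostgreSQL you'd typically add FOR UPDATE SKIP LOCKED so concurrent workers skip past claimed rows instead of blocking.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT,"
    " status TEXT NOT NULL DEFAULT 'pending')"
)
conn.executemany(
    "INSERT INTO jobs (payload) VALUES (?)", [("send-email",), ("resize-image",)]
)

def claim_job(conn):
    # Select and mark the oldest pending job in one transaction, so a job is
    # handed to at most one worker. (Postgres: add FOR UPDATE SKIP LOCKED.)
    with conn:
        row = conn.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
        return row

print(claim_job(conn))  # (1, 'send-email')
```

What this sketch lacks (retries, visibility timeouts, dead-letter handling, fan-out to many consumers) is precisely what a dedicated broker provides, so the question is whether your workload needs those yet.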
Sometimes “good enough” really is good enough. Other times it’s not. The point is to make that choice consciously rather than by default.
Real Examples
Analytics on transactional data. You’ve got PostgreSQL powering your application. Product wants dashboards showing trends across millions of events. You could try to optimize PostgreSQL (partial indexes, materialized views, read replicas). Or you could replicate the data to ClickHouse and run analytics there in milliseconds. The second approach often ends up simpler, because you’re using each tool for its intended purpose instead of fighting against the grain.
Search. PostgreSQL has full-text search, and it’s decent. But if search quality is core to your product (relevance ranking, typo tolerance, faceted filtering) Elasticsearch or Typesense will get you there faster and with better results. You’re not adding complexity for its own sake; you’re using a tool designed for exactly this job.
Message queues. If you need to decouple background jobs from web requests, a database-backed queue might be fine. If you need exactly-once delivery, replay capability, and multiple consumers processing the same stream, Kafka is probably worth the operational overhead. The complexity exists to solve real problems, problems you may or may not actually have.
Container orchestration. Deploying a single application? A VPS with systemd is probably fine. Deploying fifty services that need independent scaling, rolling deployments, and automatic failover? Kubernetes starts to make sense. The question is whether you have the problems it was designed to solve.
The Point
The goal isn’t to avoid complex tools or to chase them. It’s to match each tool to its intended job.
When you’re fighting a tool, when something feels harder than it should be, that’s often a signal that you’re asking it to do something it wasn’t designed for. Not because the tool is bad. Because it’s a screwdriver, and you’re trying to hammer.
Every tool is the right tool, somewhere. The skill is figuring out where.
Struggling with a technology decision, or suspect you’re fighting the wrong tool? Let’s talk about it.