Pick the right database.
Backed by your data, not your gut.

A 10-day benchmark engagement for founders and CTOs locking in a load-bearing database decision. Three systems, your workload, reproducible methodology — so you ship on numbers instead of vibes.

The Decision

Picking the Wrong Database Is Cheap. Now.

It gets expensive the day your access patterns change and the system you bet on stops scaling with them.

Analysis Paralysis

Postgres vs MySQL vs Mongo. A dozen blog posts, all contradicting each other. Three months in, still no decision.

Vendor Benchmarks Lie

Every cloud DB has a benchmark that proves it wins. They are marketing artifacts — cherry-picked configs on workloads that flatter the product.

Wrong at Scale Is Expensive

Discovering the wrong choice at 10x load means a migration, downtime, and rework. Six figures of pain — usually paid for in weekends.

What's Included

Exactly What You Get

Reproducible Benchmark Repository

Yours to keep, re-run, extend

  • Workload generators tuned to your access patterns
  • Harness with deterministic seeding + warm-up
  • Raw result data (latency histograms, throughput, p50/p95/p99)
  • Infrastructure-as-code for the benchmark hardware
  • README with how to re-run on your own infra
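The "deterministic seeding + warm-up" item can be sketched as a minimal harness loop. This is an illustrative sketch, not the engagement's actual harness: `run_workload` and `op` are hypothetical names, and `op` stands in for whatever database call the workload issues.

```python
import random
import time

def run_workload(seed, warmup_ops, measured_ops, op):
    """Issue a reproducible operation stream and time only the
    post-warm-up portion (hypothetical harness sketch)."""
    rng = random.Random(seed)            # same seed -> same op sequence on re-run
    latencies = []
    for i in range(warmup_ops + measured_ops):
        key = rng.randrange(1_000_000)   # reproducible key choice
        start = time.perf_counter()
        op(key)                          # the database call under test
        elapsed = time.perf_counter() - start
        if i >= warmup_ops:              # discard warm-up, keep measured ops
            latencies.append(elapsed)
    return latencies
```

Because the operation stream is a pure function of the seed, two runs against the same system are directly comparable, and anyone with the repo can replay the exact sequence.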

Written Report

Methodology + numbers

  • Workload definition + why it matches your product
  • Hardware class, configuration, tuning applied
  • Results across all systems under test
  • Failure-mode observations (where each DB hurts)
  • References to TPC / Jepsen / vendor docs used

Recommendation Memo

1 page, decision-ready

  • Headline recommendation with confidence level
  • Trade-offs you are accepting by choosing it
  • Conditions under which the recommendation flips
  • What to monitor in production to validate it

Review Session

60-min walkthrough

  • Live walkthrough of methodology + results
  • Q&A with engineering / leadership
  • Recorded for stakeholders who could not attend
  • Optional follow-up workshop priced separately

Methodology

How We Benchmark

Workloads are derived from your real access patterns — read/write mix, query shapes, concurrency, dataset size — not synthetic stand-ins. Each system is provisioned on identical hardware, tuned following its own vendor guidance (no cherry-picking), and warmed up before measurement.

We capture latency histograms (p50, p95, p99), throughput, and failure modes across multiple runs to surface variance. Results are sanity-checked against vendor-published numbers and the best-available open methodologies — including TPC workload definitions and Jepsen consistency findings where they apply.
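The percentile capture described above can be sketched with the nearest-rank definition. This is a simplified stand-in, assuming raw per-operation samples are already collected; a production harness would typically use an HDR histogram rather than sorting every sample.

```python
import math

def percentiles(samples, points=(50, 95, 99)):
    """Nearest-rank percentiles over raw latency samples (sketch)."""
    ordered = sorted(samples)
    n = len(ordered)
    out = {}
    for p in points:
        rank = max(1, math.ceil(p / 100 * n))  # nearest-rank: smallest value
        out[f"p{p}"] = ordered[rank - 1]       # with at least p% at or below it
    return out
```

Reporting p50/p95/p99 side by side is what surfaces tail-latency differences that a single average would hide.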

Everything we measure is shipped as the benchmark repo: workload generators, harness, infrastructure-as-code, raw data. You can re-run it on your own infrastructure, audit our numbers, and extend the suite as your workload evolves.

How the Engagement Runs

1. Kickoff (Day 0)

  • Map your product and its access patterns
  • Confirm the three systems under test
  • Agree on the workload class (OLTP by default)
  • Lock the success criteria for the recommendation

2. Scope Workloads (Day 1)

  • Translate access patterns into workload generators
  • Pick hardware class to match production
  • Define warm-up, run duration, and repetition count
  • Set tuning rules per system (no cherry-picking)
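The output of this scoping step can be pictured as a single workload definition. The sketch below is hypothetical: the field names and the `WorkloadSpec` / `oltp_default` identifiers are illustrative, not a published schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkloadSpec:
    """Illustrative per-engagement workload definition."""
    read_fraction: float   # e.g. 0.9 for a 90/10 read/write mix
    concurrency: int       # concurrent client connections
    dataset_rows: int      # dataset size loaded before measurement
    warmup_seconds: int    # discarded before measurement starts
    run_seconds: int       # measured duration per run
    repetitions: int       # re-runs to surface variance
    seed: int = 42         # deterministic seeding for reproducibility

# A plausible OLTP default; real values come out of the kickoff call.
oltp_default = WorkloadSpec(
    read_fraction=0.9, concurrency=64, dataset_rows=10_000_000,
    warmup_seconds=120, run_seconds=600, repetitions=5,
)
```

Freezing these parameters up front is what keeps the runs comparable across systems and makes later re-runs answer the same question.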

3. Run Benchmarks (Days 2-8)

  • Provision identical infra per DBMS
  • Run workloads, capture latency + throughput data
  • Re-run for variance; flag anomalies
  • Validate results against vendor-published numbers

4. Write Report (Day 9)

  • Compile methodology section + raw results
  • Draft the 1-page recommendation memo
  • Stress-test conclusions against edge cases
  • Internal sanity review before delivery

5. Review Session (Day 10)

  • 60-min walkthrough with your team
  • Q&A on methodology and trade-offs
  • Hand off the benchmark repo and report
  • Recorded for stakeholders not present
Pricing

One Fixed Price: €10,000

Three systems under test, one workload class, 10 working days, full repo and report delivered. Cheaper than discovering at scale that the wrong choice was made.

If the report doesn't change your decision, you don't pay.

Questions?

Common Questions

Don't TPC and the big published benchmarks already cover this?

Those are excellent — and generic. They tell you how a database behaves under a synthetic workload on reference hardware. Your workload, your data shape, and your read/write mix are not those. The point of this engagement is to measure your case, not the published case.

How is this different from vendor benchmarks?

Vendor benchmarks are marketing artifacts. They cherry-pick configurations and workloads that flatter the product. We publish full methodology and raw data — including the runs that did not flatter anyone — so you can replicate and audit every number.

Isn't €10k a lot for a database decision?

A wrong database choice at scale costs months of migration work, downtime, and rework — typically six figures by the time it surfaces. €10k buys you a decision backed by your data, plus a repo you keep and can re-run as your workload evolves.

What if the benchmark is inconclusive?

If the report doesn't change your decision, you don't pay. Inconclusive is itself a signal — usually that the systems are closer than the marketing claims — but we won't bill for a recommendation we can't stand behind.

Do we keep the benchmark code?

Yes. The benchmark repo is yours: workload generators, harness, infrastructure-as-code, and raw data. Re-run it when your workload shifts, when a new DB version drops, or when you want to validate a production hypothesis.

Which databases do you benchmark?

By default Postgres, MySQL, and MongoDB — the most common OLTP candidates. Scoped variants can swap in DuckDB, ClickHouse, SQLite, or Redis depending on your workload class. We cap at three systems per engagement so the timeline stays honest.

What workload classes do you cover?

One workload class per engagement. OLTP is the default. OLAP, vector search, and time-series are available as scoped variants — picked on the kickoff call based on what your product actually does.

Who is this engagement for?

Founders and CTOs who have just locked their MVP foundation and now face a database choice they cannot afford to redo at scale. Not a fit for teams already in production — that is a more expensive engagement (migration, not selection).

Book Your Call

Ready to decide on data, not vibes?

Book a 30-minute scoping call. We'll confirm fit, agree on the three systems and the workload class, and you decide whether to start the engagement.

Free scoping call. No credit card. No obligation.