I am wary of user reports claiming that product X was lousy for them, then they moved to product Y and everything was awesome. Sometimes this means product X really was lousy, either in general or for their use case. Other times it means the team did a lousy job deploying product X. It is hard for the reader to tell the difference. It can also be hard for some authors to tell, thanks to the Dunning-Kruger effect, so lousy reports will continue to be published. These reports are not my favorite form of marketing, and some of the bad ones linger for years. We deserve better, especially in the open-source database market, where remarkable progress is being made.
I have written about benchmarketing before; other posts that mention it are here.