As part of my changes to the Insert Benchmark to support longer-running benchmark steps without growing the size of the benchmark tables, I added a feature to optionally do deletes at the same rate as inserts. With this the table works like a queue -- inserts at one end, deletes from the other end.
The benchmark steps that use deletes must run a query when they start to determine the minimum value of the PK column (transactionid) currently in the table. In some cases that query is very slow for InnoDB, somewhat slow for MyRocks and not really slow for Postgres. The query is: select min(transactionid) from table.
tl;dr
- Postgres does great, MyRocks and InnoDB have problems
- I can tune away the problem for InnoDB via the innodb_max_purge_lag my.cnf options but that comes at a big cost -- less write throughput (see the sketch after this list)
- I don't know yet whether I can tune the problem away for MyRocks
- One response to this post is that implementing a queue in a SQL DBMS is an anti-pattern. One response to that response is that too-frequent use of the X is an anti-pattern argument leads to fixable problems that never get fixed.
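To show what that tuning looks like, here is a minimal sketch of the purge lag options, assuming they are set at runtime (they can also go in my.cnf). The values are placeholders for illustration, not the values used by the configs discussed below.

set global innodb_max_purge_lag = 500000;        -- start delaying DML once the history list length exceeds this
set global innodb_max_purge_lag_delay = 100000;  -- cap that delay at this many microseconds per statement

The tradeoff is that the delay throttles inserts, updates and deletes, which is the loss of write throughput mentioned above.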
Workload
This is the table for MySQL. For Postgres I use bigserial rather than auto_increment for transactionid.
CREATE TABLE pi1 (
transactionid bigint NOT NULL AUTO_INCREMENT,
dateandtime datetime DEFAULT NULL,
cashregisterid int NOT NULL,
customerid int NOT NULL,
productid int NOT NULL,
price float NOT NULL,
data varchar(4000) DEFAULT NULL,
PRIMARY KEY (transactionid),
KEY pi1_marketsegment (price, customerid),
KEY pi1_registersegment (cashregisterid, price, customerid),
KEY pi1_pdc (price, dateandtime, customerid)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
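For reference, a sketch of what the Postgres version of the table would look like, assuming the same table name (pi1) and a direct mapping of the column types; the exact DDL generated by the benchmark client may differ.

CREATE TABLE pi1 (
transactionid bigserial,
dateandtime timestamp DEFAULT NULL,
cashregisterid int NOT NULL,
customerid int NOT NULL,
productid int NOT NULL,
price real NOT NULL,
data varchar(4000) DEFAULT NULL,
PRIMARY KEY (transactionid)
);
-- the secondary indexes are created separately, by the l.x benchmark step below
CREATE INDEX pi1_marketsegment ON pi1 (price, customerid);
CREATE INDEX pi1_registersegment ON pi1 (cashregisterid, price, customerid);
CREATE INDEX pi1_pdc ON pi1 (price, dateandtime, customerid);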
The benchmark has multiple steps run in this order:
- l.i0
- insert X million rows across all tables without secondary indexes
- l.x
- create 3 secondary indexes. I usually ignore performance from this step.
- l.i1
- insert and delete another X million rows per table with secondary index maintenance. The number of rows/table at the end of the benchmark step matches the number at the start. The inserts are done to the table head and the deletes are done from the tail.
- q100
- do queries as fast as possible with 100 inserts/s/client and deletes done in the background at the same rate.
- q500
- do queries as fast as possible with 500 inserts/s/client and deletes done in the background at the same rate.
- q1000
- do queries as fast as possible with 1000 inserts/s/client and deletes done in the background at the same rate.
At the start of the l.i1, q100, q500 and q1000 steps the select min(transactionid) from table query is done (see here) to determine the smallest transactionid in the table. Deletes are then done via statements like delete from table where transactionid >= X and transactionid <= Y (see here). The deletes are done at the same rate as the inserts and the thread handling the deletes remembers where the next delete should start.
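A sketch of that pattern, assuming the table name pi1 from the DDL above, a batch of 1000 rows per delete statement and a min query that returns 1; the real batch sizes and bookkeeping are handled by the benchmark client.

-- run once at the start of the benchmark step to find the tail of the queue
select min(transactionid) from pi1;

-- the delete thread then issues range deletes and remembers the end of each range,
-- so the next delete starts where the previous one stopped
delete from pi1 where transactionid >= 1 and transactionid <= 1000;
delete from pi1 where transactionid >= 1001 and transactionid <= 2000;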
The problem
The problem is that this query can be slow when MVCC GC gets behind; otherwise it should take a few milliseconds. In my testing so far MVCC GC does get behind for InnoDB and MyRocks.
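One way to see whether MVCC GC is getting behind, at least for InnoDB, is to watch the history list length, which counts the undo log records that purge has not yet processed. This is a diagnostic sketch, not something the benchmark runs.

-- the history list length grows when purge falls behind the write rate
select name, count from information_schema.innodb_metrics where name = 'trx_rseg_history_len';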
These are from the c2-standard-30 servers I have been using for the Insert Benchmark with 8 clients. Full benchmark reports are here for MyRocks, for InnoDB and for Postgres. Results from a larger server are pending and, based on early results, they will be a lot worse.
The tables below show the query response time in seconds by benchmark step (l.i1, q100, q500, q1000) for each of the DBMS + configurations tested. The worst case is likely to occur at the start of the q100 benchmark step because that immediately follows the l.i1 benchmark step and l.i1 is the most write-heavy.
- InnoDB - cached by InnoDB, cached by OS, IO-bound
- MyRocks - cached by MyRocks, cached by OS, IO-bound
- Postgres
- v1 - cached by Postgres, cached by OS, IO-bound
- v2 - cached by Postgres, cached by OS, IO-bound
- InnoDB
- Cached by InnoDB
- For MySQL 8.0.33 the a13 config has the best response time (1 millisecond). It uses smaller values for the innodb_max_purge_lag options. The response time is ~4 seconds for the other configs except for the a1 config where it is ~100 seconds. The a1 config disables native AIO.
- For MySQL 5.7.40 the best response times are between 20 and 30 seconds.
- For MySQL 5.6.51 the best response times are ~30 seconds.
- Cached by OS
- For MySQL 8.0.33 the a13 config again has the best response time (545ms). Most of the configs have a response time of ~15 seconds. The a18 and a19 configs are bad outliers.
- For MySQL 5.6.51 and 5.7.40 the best response time is ~25 seconds.
- IO-bound
- For MySQL 8.0.33 the a13 config again has the best response time (171ms). The response time for many configs is ~10 seconds. The a18 and a19 configs are bad outliers. The results for ps8033 are from Percona Server and they are similar to upstream.
- For MySQL 5.6.51 and 5.7.40 the best response time is ~15 seconds.
- MyRocks
- Cached by MyRocks
- Response time is ~4 seconds for the c6 and c7 configs. I focus on these configs because they provided the best results for the benchmark.
- Cached by OS
- Response time is ~4 seconds for the c6 and c7 configs.
- IO-bound
- Response time is 6 to 7 seconds for the c6 and c7 configs.
- Postgres
- Cached by Postgres
- Response time is between 4ms and 8ms for the a21 and a22 configs.
- Cached by OS
- Response time is between 32ms and 76ms for the a21 and a22 configs.
- IO-bound
- Response time is between 139ms and 197ms for the a21 and a22 configs.
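For comparison, a sketch of how one might check whether Postgres vacuum is keeping up with the dead row versions created by the deletes, again assuming the table name pi1:

select relname, n_live_tup, n_dead_tup, last_autovacuum
from pg_stat_user_tables
where relname = 'pi1';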