Wednesday, February 26, 2025

Feedback from the Postgres community about the vector index benchmarks

This is a response to some of the feedback I received from the Postgres community about my recent benchmark results for vector indexes using MariaDB and Postgres (pgvector). The work here isn't sponsored and required ~2 weeks of server time and a few more hours of my time (yes, I contribute to the PG community).

tl;dr

  • index create is ~4X faster when using ~4 parallel workers. I hope that parallel DDL comes to more open source DBMS.
  • parallel query does not help pgvector
  • increasing work_mem does not help pgvector
  • pgvector gets less QPS than MariaDB because it uses more CPU to compute the distance metric
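The last point is about the per-query cost of the distance computation. A minimal pure-Python sketch of cosine distance (the real implementations are in C, often with SIMD) shows the work done per candidate vector during a search:

```python
import math

def cosine_distance(a, b):
    # pgvector and MariaDB must do this (in C) for every candidate
    # vector visited during an HNSW search: one multiply-add per
    # dimension for the dot product plus two more for the norms.
    dot = norm_a = norm_b = 0.0
    for x, y in zip(a, b):
        dot += x * y
        norm_a += x * x
        norm_b += y * y
    return 1.0 - dot / math.sqrt(norm_a * norm_b)

# dbpedia-openai vectors have 1536 dimensions, so each comparison
# costs a few thousand floating-point operations before SIMD.
v1 = [1.0] * 1536
v2 = [1.0] * 1536
print(cosine_distance(v1, v2))  # 0.0 for identical vectors
```

A DBMS that spends fewer cycles in this inner loop gets more QPS at the same recall, which is the difference I measured.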
The feedback

The feedback I received includes
  • benchmarks are useless, my production workload is the only relevant workload
    • I disagree, but don't expect consensus. This approach also means you are unlikely to do comparative benchmarks: it costs too much to port your workload to another DBMS just for the sake of a benchmark, only your team can run it, and you miss out on expertise from elsewhere.
  • this work was sponsored so I don't trust it
    • There isn't much I can do about that.
  • this is benchmarketing!
    • There is a marketing element to this work. Perhaps I was not the first to define benchmarketing, but by my definition a benchmark report is not benchmarketing when the results are explained. I have been transparent about how I ran the benchmark and shared some performance debugging results here. MariaDB gets more QPS than pgvector because pgvector uses more CPU to compute the distance metric. 
  • you should try another Postgres extension for vector indexes
    • I hope to do that eventually. But pgvector is a great choice here because it implements HNSW and MariaDB implements modified HNSW. Regardless, time is finite, the number of servers I have is finite, and my work here (both sponsored and volunteer) competes with my need to finish my taxes, play video games, sleep, etc.
  • this result is bogus because I didn't like some config setting you used
    • Claims like this are cheap to make and expensive to debunk. Sometimes the suggested changes make a big difference but that has been rare in my experience. Most of the time the changes at best make a small difference.
  • this result is bogus because you used Docker
    • I am not a Docker fan, but I only used Docker for Qdrant. I am not a fan because I suspect it is overused and I prefer benchmark setups that don't require it. While the ann-benchmarks page states that Docker is used for all algorithms, it is not used for MariaDB or Postgres in my fork. And it is trivial, but time consuming, to update most of ann-benchmarks to not use Docker.
Claims about the config that I used include:
  • huge pages were disabled
    • Yes they were. Enabling huge pages would have helped both MariaDB and Postgres. But I prefer to not use them because the feature is painful (unless you like OOM). Posts from me on this are here and here.
  • track_io_timing was enabled and is expensive
    • There wasn't IO when queries ran, as the database was cached, so this is mostly irrelevant. There was some write IO when the index was created. While I won't repeat tests with track_io_timing disabled, I am skeptical that the latency of two calls to gettimeofday() per IO is significant on modern Linux.
  • autovacuum_vacuum_cost_limit is too high
    • I set it to 4000 because I often run write-intensive benchmarks on servers with large IOPs capacity. This is the first time anyone suggested it was too high, and experts have reviewed my config files. I wish that Postgres vacuum didn't require so much tuning -- and all of that tuning means that someone can always claim your tuning is wrong. Regardless, the benchmark workload is load, create index, query and I only time the create index and query steps. There is little impact from vacuum on this benchmark. I have also used hundreds of server hours to search for good Postgres configs.
  • parallel operations were disabled
    • Parallel query isn't needed for the workloads I normally run but parallel index create is nice to have. I disable parallel index create for Postgres because my primary focus is efficiency -- how much CPU and IO is consumed per operation. But I haven't been clear on that in my blog posts. Regardless, below I show there is a big impact from parallel index create and no impact from parallel query.
  • work_mem is too low
    • I have been using the default which is fine for the workloads I normally run (sysbench, insert benchmark). Below I show there is no impact from increasing it to 8M.
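On the track_io_timing point above, the cost of a clock read is easy to bound. A quick Python sketch that overstates the cost, since the C-level gettimeofday()/clock_gettime() used by Postgres goes through the vDSO and is far cheaper than a Python call:

```python
import time

# Rough upper bound on timer-call cost: time N clock reads and report
# the per-call overhead. The per-call cost measured here includes
# Python interpreter overhead, so the real C call is cheaper still,
# typically tens of nanoseconds on modern Linux.
N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    time.time()
end = time.perf_counter()
per_call_ns = (end - start) / N * 1e9
print(f"~{per_call_ns:.0f} ns per clock read (Python overhead included)")
```

Even at Python speeds the overhead is small relative to the cost of an IO, which supports the skepticism about track_io_timing mattering here.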

Benchmark

This post has much more detail about my approach in general. I ran the benchmark for 1 session. I use ann-benchmarks via my fork of a fork of a fork at this commit. I used the dbpedia-openai dataset with 1M rows. It uses angular (cosine) for the distance metric. The ann-benchmarks config file is here for Postgres and in this case I only have results for pgvector with halfvec (float16).

I used a large server (Hetzner ax162-s) with 48 cores, 128G of RAM, Ubuntu 22.04 and HW RAID 10 using 2 NVMe devices. I tested three configurations for Postgres and all of the settings are here:
  • def
    • def stands for default and is the config I used in all of my previous blog posts. Thus, it is the config for which I received feedback.
  • wm8
    • wm8 stands for work_mem increased to 8MB. The default (used by def) is 4MB.
  • pq4
    • pq4 stands for Parallel Query with ~4 workers. Here I changed a few settings from def to support that.
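The exact settings are in the linked config files; as a hypothetical sketch, the GUCs that typically enable ~4 workers for DDL and queries look like this:

```
# hypothetical sketch -- see the linked config files for the actual pq4 values
max_parallel_maintenance_workers = 4   # workers for CREATE INDEX
max_parallel_workers_per_gather = 4    # workers per query node
max_parallel_workers = 8               # global cap on parallel workers
```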
Output from the benchmark is here.

The command lines to run the benchmark using my helper scripts are:
    bash rall.batch.sh v1 dbpedia-openai-1000k-angular c32r128

Results: QPS vs recall

These charts show the best QPS for a given recall. The graphs appear to be the same but the differences are harder to see as recall approaches 1.0, so the next section has a table with numbers.

But from these graphs, QPS doesn't improve with the wm8 or pq4 configs.


The chart for the def config, which is what I used in previous blog posts.
The chart for the wm8 config with work_mem=8M
The chart for the pq4 config that uses ~4 parallel workers

Results: best QPS for a given recall

Many benchmark results are marketed via peak performance (max throughput or min response time) but these are usually constrained optimization problems -- determine peak performance that satisfies some SLA. And the SLA might be response time or efficiency (cost).

With ann-benchmarks the constraint is recall. Below I share the best QPS for a given recall target along with the configuration parameters (M, ef_construction, ef_search) at which that occurs.
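That selection is easy to express. A sketch, assuming results as (recall, QPS, config) tuples (the real ann-benchmarks reporting code differs):

```python
def best_qps_at_recall(results, target):
    """Constrained optimization as used in these reports: among runs
    meeting the recall floor, take the one with the highest QPS.
    `results` is a list of (recall, qps, config) tuples (assumed shape)."""
    eligible = [r for r in results if r[0] >= target]
    return max(eligible, key=lambda r: r[1]) if eligible else None

# Hypothetical runs at different ef_search values
runs = [
    (0.999, 120.0, "ef_search=400"),
    (0.990, 320.0, "ef_search=40"),
    (0.951, 770.0, "ef_search=30"),
]
print(best_qps_at_recall(runs, 0.99))  # -> (0.99, 320.0, 'ef_search=40')
print(best_qps_at_recall(runs, 0.95))  # -> (0.951, 770.0, 'ef_search=30')
```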

Summary
  • pgvector does not get more QPS with parallel query
  • pgvector does not get more QPS with a larger value for work_mem
Legend:
  • recall, QPS - best QPS at that recall
  • isecs - time to create the index in seconds
  • m= - value for M when creating the index
  • ef_cons= - value for ef_construction when creating the index
  • ef_search= - value for ef_search when running queries
  • the rows in each group are in the order: def, wm8, pq4
Best QPS with recall >= 1.000
no algorithm achieved this for the (M, ef_construction) settings I used

Best QPS with recall >= 0.99
recall  QPS     isecs   config
0.990   317.9   7302    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)
0.990   320.4   7285    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)
0.990   316.9   1565    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)

Best QPS with recall >= 0.98
recall  QPS     isecs   config
0.983   412.0   4120    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)
0.984   415.6   4168    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)
0.984   411.4    903    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)

Best QPS with recall >= 0.97
recall  QPS     isecs   config
0.978   487.3   5070    PGVector_halfvec(m=32, ef_cons=256, ef_search=30)
0.970   508.3   2495    PGVector_halfvec(m=32, ef_cons=96, ef_search=30)
0.970   508.4   2495    PGVector_halfvec(m=32, ef_cons=96, ef_search=30)

Best QPS with recall >= 0.96
recall  QPS     isecs   config
0.961   621.1   4120    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)
0.962   632.3   4168    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)
0.962   622.0    903    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)

Best QPS with recall >= 0.95
recall  QPS     isecs   config
0.951   768.7   2436    PGVector_halfvec(m=16, ef_cons=192, ef_search=30)
0.952   770.2   2442    PGVector_halfvec(m=16, ef_cons=192, ef_search=30)
0.953   753.0    547    PGVector_halfvec(m=16, ef_cons=192, ef_search=30)

Results: create index

The database configs for Postgres are shared above and parallel index create is disabled by default because my focus has not been on DDL performance. Regardless, it works great for Postgres with pgvector. The summary is:
  • index create is ~4X faster when using ~4 parallel workers
  • index sizes are similar with and without parallel create index
Sizes: table is ~8G and index is ~4G

Legend
  • M - value for M when creating the index
  • cons - value for ef_construction when creating the index
  • secs - time in seconds to create the index
  • size(MB) - index size in MB
                def     wm8     pq4
M       cons    secs    secs    secs
 8       32      412     405     108
16       32      649     654     155
 8       64      624     627     154
16       64     1029    1029     237
32       64     1901    1895     412
 8       96      834     835     194
16       96     1387    1393     312
32       96     2497    2495     541
48       96     3731    3726     798
 8      192     1409    1410     316
16      192     2436    2442     547
32      192     4120    4168     903
48      192     6117    6119    1309
64      192     7838    7815    1662
 8      256     1767    1752     400
16      256     3146    3148     690
32      256     5070    5083    1102
48      256     7302    7285    1565
64      256     9959    9946    2117
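Using a few rows from the table above, the speedup from parallel index create ranges from ~3.8X to ~4.7X:

```python
# (M, ef_construction) -> (secs with def config, secs with pq4 config),
# values copied from the create index table above
create_times = {
    (8, 32):   (412, 108),
    (32, 96):  (2497, 541),
    (48, 192): (6117, 1309),
    (64, 256): (9959, 2117),
}
for (m, cons), (serial, parallel) in create_times.items():
    print(f"M={m} ef_cons={cons}: {serial / parallel:.1f}X faster")
```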

Thursday, February 20, 2025

How to find Lua scripts for sysbench using LUA_PATH

sysbench is a great tool for benchmarks and I appreciate all of the work the maintainer (Alexey Kopytov) put into it as that is often a thankless task. Today I struggled to figure out how to load Lua scripts from something other than the default location that was determined when sysbench was compiled. It turns out that LUA_PATH is the thing to set, but the syntax isn't what I expected.

My first attempt was this, because the PATH in LUA_PATH implies directory names. But that failed.
  LUA_PATH="/mnt/data/sysbench.lua/lua" sysbench ... oltp_insert run

It turns out that LUA_PATH uses special semantics and this worked:
  LUA_PATH="/mnt/data/sysbench.lua/lua/?.lua" sysbench ... oltp_insert run


The usage above replaces the existing search path. The usage below prepends the new path to the existing (compiled in) path:

  LUA_PATH="/mnt/data/sysbench.lua/lua/?.lua;;" sysbench ... oltp_insert run
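The ';;' expansion can be modeled in a few lines. In Lua, a ';;' in LUA_PATH is replaced by the compiled-in default path, and each ';'-separated template has '?' replaced by the module name. A sketch (the default path shown is hypothetical; sysbench's real default depends on the build):

```python
def lua_search_paths(lua_path, default_path, module):
    # Lua replaces ';;' in LUA_PATH with the compiled-in default path,
    # then tries each ';'-separated template with '?' -> module name.
    expanded = lua_path.replace(";;", ";" + default_path + ";")
    return [t.replace("?", module) for t in expanded.split(";") if t]

# hypothetical default path for illustration
print(lua_search_paths("/mnt/data/sysbench.lua/lua/?.lua;;",
                       "/usr/local/share/sysbench/?.lua",
                       "oltp_insert"))
# -> ['/mnt/data/sysbench.lua/lua/oltp_insert.lua',
#     '/usr/local/share/sysbench/oltp_insert.lua']
```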


Wednesday, February 19, 2025

My database communities

I have been working on databases since 1996. In some cases I just worked on the product (Oracle & Informix), in others I consider myself a member of the community (MySQL, Postgres & RocksDB). And for MongoDB I used to be in the community.

I worked on Informix XPS in 1996. I chose Informix because I could live in Portland OR and walk to work. I was fresh out of school, didn't know much about DBMS, but got a great starter project (star query optimization). The company wasn't in great shape so I left by 1997 for Oracle. I never used Informix in production and didn't consider myself as part of the Informix community.

I was at Oracle from 1997 to 2005. The first 3 years were in Portland implementing JMS for the app server team and the last 5 years at Oracle HQ working on query execution.  I fixed many bugs, added support for ieee754 types, rewrote sort and maintained the sort and bitmap index row sources. The people there were great and I learned a lot but I did not enjoy the code base and left for a startup. I never used Oracle in production and don't consider myself as part of the Oracle community.

I led the MySQL engineering teams at Google for 4 years and at Facebook/Meta for 10 years. I was very much immersed in production and have been active in the community since 2006. The MySQL teams got much done at both Google (GTID, semi-sync, crash-safe replication, rewrote the InnoDB rw lock) and Facebook/Meta (MyRocks and too many other things to mention). Over the years at FB/Meta my job duties got in the way of programming so I used performance testing as a way to remain current. I also filed many bugs and might still be in the top-10 for bug reports. While Oracle has been a great steward for the MySQL project, I have been critical about the performance regressions from older MySQL to newer MySQL. I hope that eventually stops because it will become a big problem.

I contributed some code to RocksDB, mostly for monitoring. I spent much more time doing performance QA for it, and filing a few bugs. I am definitely in the community.

I don't use Postgres in production but have spent much time doing performance QA for it over the past ~10 years. A small part of that was done while at Meta, where I had a business case and was able to use some of their HW and my time. But most of this has been a volunteer effort -- more than 100 hours of my time and 10,000+ hours of server time. Some of those server hours are in public clouds (Google, Hetzner) so I am also spending a bit on this. I found a few performance bugs. I have not found large performance regressions over time, which is impressive. I have met many of the contributors working on the bits I care about, and that has been a nice benefit.

I used to be a member of the MongoDB community. Like Postgres, I never supported it in production but I spent much time doing performance QA with it. I wrote mostly positive blog posts, filed more than a few bugs and even won the William Zola Community Award. But I am busy enough with MySQL, Postgres and RocksDB so I haven't tried to use it for years. Regardless, I continue to be impressed by how fast they pay down tech debt, with one exception (no cost-based optimizer).

Saturday, February 15, 2025

Vector indexes, large server, dbpedia-openai dataset: MariaDB, Qdrant and pgvector

My previous post has results for MariaDB and pgvector on the dbpedia-openai dataset. This post adds results from Qdrant. This uses ann-benchmarks to compare MariaDB, Qdrant and Postgres (pgvector) with a larger dataset, dbpedia-openai at 500k rows. The dataset has 1536 dimensions and uses angular (cosine) as the distance metric. This work was done by Small Datum LLC and sponsored by the MariaDB Corporation.

tl;dr

  • I am new to Qdrant so the chance that I made a mistake is larger than for MariaDB or Postgres
  • If you already run MariaDB or Postgres then I suggest you also use them for vector indexes
  • MariaDB usually gets ~2X more QPS than pgvector and ~1.5X more than Qdrant

Editorial

I have a bias. I am skeptical that you should deploy a new DBMS to support just one datatype (vectors) unless either you have no other DBMS in production or your production DBMS does not support vector indexing.
  • Production is expensive -- you have to worry about security, backups, operational support
  • A new DBMS is expensive -- you have to spend time to learn how to use it
My initial response to Qdrant is that the new developer experience isn't very good. This can be fixed, but right now the product is complicated (has many features), configuration is complicated (also true for the DBMS I know, but I already paid that price), and the cognitive load is large. Just one example of the cognitive load is the need to learn the names that Qdrant uses for things that already have well-known names in the SQL DBMS world.

Deploying Qdrant

The more DBMS you include in one benchmark, the more likely you are to make a mistake because you lack expertise in all of those DBMS. I will soon learn whether I made a mistake here but I made a good faith effort to get good results from Qdrant.

I first tried to compile from source, but that failed. The docs state that "The current list of required libraries can be found in the Dockerfile" and while I was able to figure that out, I prefer that they just list the dependencies. Alas, my attempts to compile from source failed with error messages about problems with (gRPC) protocol definitions.

So I decided to try the Docker container they provide. I ended up not changing the Qdrant configuration provided in the Docker container. I spent some time doing performance debugging and didn't see anything to indicate that a config change was needed. For example, I didn't see disk IO during queries. But the performance debugging was harder because that Docker container image doesn't come with my favorite debug tools installed. Some of the tools were easy to install, others (perf) were not.

Benchmark

This post has much more detail about my approach in general. I ran the benchmark for 1 session. I use ann-benchmarks via my fork of a fork of a fork at this commit.

The ann-benchmarks config files are here for MariaDB, Postgres and Qdrant. For Postgres I specify values for both M and ef_construction. But MariaDB doesn't support ef_construction so I only specify the M values. While pgvector requires ef_construction to be >= 2*M, I do not know whether Qdrant has a similar requirement. Regardless, I only test cases where that constraint is true.
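The ef_construction >= 2*M constraint prunes the test grid. A quick sketch, using the M and ef_construction values from these posts, shows which combinations survive:

```python
def valid_pairs(m_values, ef_cons_values):
    """Keep only (M, ef_construction) combinations satisfying
    pgvector's requirement that ef_construction >= 2*M."""
    return [(m, ef) for m in m_values for ef in ef_cons_values if ef >= 2 * m]

m_values = [8, 16, 32, 48, 64]
ef_cons_values = [32, 64, 96, 192, 256]
pairs = valid_pairs(m_values, ef_cons_values)
print(len(pairs), "of", len(m_values) * len(ef_cons_values),
      "combinations are valid")  # -> 19 of 25 combinations are valid
```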

Some quantization was used
  • MariaDB uses 16-bit integers rather than float32
  • pgvector uses float32, pgvector halfvec uses float16
  • For Qdrant I used none (float32) and scalar (int8)
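To make the quantization options concrete, here is a minimal sketch of scalar (int8) quantization. Real implementations, including Qdrant's, differ in details such as per-segment ranges and quantile clipping:

```python
def scalar_quantize(vec):
    """Minimal sketch of scalar (int8) quantization: map each float
    to an 8-bit integer over the vector's observed range. This trades
    a little recall for 4X less space than float32 and cheaper math."""
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    return [round((x - lo) / scale) - 128 for x in vec], lo, scale

def dequantize(quantized, lo, scale):
    # Approximate reconstruction of the original floats
    return [(q + 128) * scale + lo for q in quantized]

q, lo, scale = scalar_quantize([0.0, 0.5, 1.0])
print(q)  # -> [-128, 0, 127]
print(dequantize(q, lo, scale))
```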
I used a larger server (Hetzner ax162-s) with 48 cores, 128G of RAM, Ubuntu 22.04 and HW RAID 10 using 2 NVMe devices. The database configuration files are here for MariaDB and for Postgres. Output from the benchmark is here.

I had ps and vmstat running during the benchmark and confirmed there weren't storage reads as the table and index were cached by MariaDB and Postgres.

The command lines to run the benchmark using my helper scripts are:
    bash rall.batch.sh v1 dbpedia-openai-500k-angular c32r128

Results: QPS vs recall

These charts show the best QPS for a given recall. MariaDB gets more QPS than Qdrant and pgvector but that is harder to see as the recall approaches 1, so the next section has a table for best QPS per DBMS at a given recall.

Results: create index

The database configs for Postgres and MariaDB are shared above, and parallel index create is disabled by the config for Postgres and not supported yet by MariaDB. The summary is:
  • index sizes are similar between MariaDB and pgvector with halfvec
  • time to create the index varies a lot and it is better to consider this in the context of recall, which is done in the next section. But Qdrant creates indexes a lot faster than MariaDB or pgvector.
  • I did not find an accurate way to determine index size for Qdrant. There is a default method in ann-benchmarks that a DBMS can override. The default just compares process RSS before and after creating an index which isn't accurate for small indexes. The MariaDB and Postgres code override the default and query the data dictionary to get a more accurate estimate.
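The RSS-diff approach is roughly the following sketch (Linux-only; the actual ann-benchmarks default may read RSS differently):

```python
import os

def current_rss_kb():
    # Linux-only sketch: read the process resident set size from /proc.
    with open(f"/proc/{os.getpid()}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value in kB
    return 0

# The RSS-diff approach: measure before and after building the index.
# Allocator reuse, lazy page faults and background memory make this a
# noisy estimate, especially for small indexes -- which is why the
# MariaDB and Postgres adapters query the data dictionary instead.
before = current_rss_kb()
big = [0] * 10_000_000   # stand-in for "build the index" (~80 MB list)
after = current_rss_kb()
print(f"RSS delta: ~{(after - before) / 1024:.0f} MB")
```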
The max time to create an index for MariaDB and Postgres exceeds 10,000 seconds on this dataset when M and/or ef_construction are large. The max time for Qdrant was <= 400 seconds with no quantization and <= 300 seconds with scalar quantization. This is excellent. But I wonder whether the things Qdrant does (or doesn't do) to save time during create index contribute to making queries slower, because MariaDB has much better QPS.

More details on index size and index create time for MariaDB and Postgres are in my previous post.

Results: best QPS for a given recall

Many benchmark results are marketed via peak performance (max throughput or min response time) but these are usually constrained optimization problems -- determine peak performance that satisfies some SLA. And the SLA might be response time or efficiency (cost).

With ann-benchmarks the constraint is recall. Below I share the best QPS for a given recall target along with the configuration parameters (M, ef_construction, ef_search) at which that occurs for each of the algorithms -- MariaDB, pgvector with float32 and float16/halfvec, Qdrant with no and scalar quantization.

Summary
  • Qdrant with scalar quantization does not get a result for recall=1.0 for the values of M, ef_construction and ef_search I used
  • MariaDB usually gets ~2X more QPS than pgvector and ~1.5 more than Qdrant
  • Index create time was much less for Qdrant (described above)
Legend:
  • recall, QPS - best QPS at that recall
  • rel2ma - (QPS for this DBMS / QPS for MariaDB)
  • m= is the value for M when creating the index
  • ef_cons= is the value for ef_construction when creating the index
  • ef_search= is the value for ef_search when running queries
  • quant= is the quantization used by Qdrant
  • dbms
    • MariaDB - MariaDB, there is no option for quantization
    • PGVector - Postgres with pgvector and float32
    • PGVector_halfvec - Postgres with pgvector and halfvec (float16)
    • Qdrant(..., quant=none) - Qdrant with no quantization
    • Qdrant(..., quant=scalar) - Qdrant with scalar quantization
MariaDB gets more QPS than a DBMS when rel2ma is less than 1.0, and when rel2ma is 0.5 MariaDB gets 2X more QPS. Below, the rel2ma values are always much less than 1.0 except in the first group of results for recall = 1.0.

Best QPS with recall = 1.000
recall  QPS     rel2ma
1.000   18.3    1.00    MariaDB(m=32, ef_search=200)
1.000   49.4    2.70    PGVector(m=64, ef_construct=256, ef_search=400)
1.000   56.4    3.08    PGVector_halfvec(m=64, ef_construct=256, ef_search=400)
1.000  153.9    8.41    Qdrant(m=32, ef_construct=256, quant=none, hnsw_ef=400)

Best QPS with recall >= 0.99
recall  QPS     rel2ma
0.993   861     1.00    MariaDB(m=24, ef_search=10)
0.991   370     0.43    PGVector(m=16, ef_construct=256, ef_search=80)
0.990   422     0.49    PGVector_halfvec(m=16, ef_construct=192, ef_search=80)
0.990   572     0.66    Qdrant(m=32, ef_construct=256, quant=none, hnsw_ef=40)
0.990   764     0.89    Qdrant(m=48, ef_construct=192, quant=scalar, hnsw_ef=40)

Best QPS with recall >= 0.98
recall  QPS     rel2ma
0.983  1273     1.00    MariaDB(m=16, ef_search=10)
0.981   492     0.39    PGVector(m=32, ef_construct=192, ef_search=30)
0.982   545     0.43    PGVector_halfvec(m=32, ef_construct=192, ef_search=30)
0.981   713     0.56    Qdrant(m=16, ef_construct=192, quant=none, hnsw_ef=40)
0.980   895     0.70    Qdrant(m=16, ef_construct=256, quant=scalar, hnsw_ef=40)

Best QPS with recall >= 0.97
recall  QPS     rel2ma
0.983  1273     1.00    MariaDB(m=16, ef_search=10)
0.971   635     0.50    PGVector(m=32, ef_construct=192, ef_search=20)
0.971   724     0.57    PGVector_halfvec(m=32, ef_construct=192, ef_search=20)
0.972   782     0.61    Qdrant(m=16, ef_construct=192, quant=none, hnsw_ef=30)
0.970   982     0.77    Qdrant(m=16, ef_construct=192, quant=scalar, hnsw_ef=30)

Best QPS with recall >= 0.96
recall  QPS     rel2ma
0.969  1602     1.00    MariaDB(m=12, ef_search=10)
0.965   762     0.48    PGVector(m=16, ef_construct=192, ef_search=30)
0.964   835     0.52    PGVector_halfvec(m=16, ef_construct=192, ef_search=30)
0.963   811     0.51    Qdrant(m=16, ef_construct=96, quant=none, hnsw_ef=30)
0.961   996     0.62    Qdrant(m=16, ef_construct=96, quant=scalar, hnsw_ef=30)

Best QPS with recall >= 0.95
recall  QPS     rel2ma
0.969  1602     1.00    MariaDB(m=12, ef_search=10)
0.954   802     0.50    PGVector(m=16, ef_construct=96, ef_search=30)
0.955   880     0.55    PGVector_halfvec(m=16, ef_construct=96, ef_search=30)
0.954   869     0.54    Qdrant(m=8, ef_construct=256, quant=none, hnsw_ef=40)
0.950  1060     0.66    Qdrant(m=16, ef_construct=192, quant=scalar, hnsw_ef=20)

Monday, February 10, 2025

Vector indexes, MariaDB & pgvector, large server, dbpedia-openai dataset

This post has results from ann-benchmarks to compare MariaDB and Postgres with a larger dataset, dbpedia-openai at 100k, 500k and 1M rows. It has 1536 dimensions and uses angular (cosine) as the distance metric. By larger I mean by the standards of what is in ann-benchmarks. This work was done by Small Datum LLC and sponsored by the MariaDB Corporation.

tl;dr

  • Index create time was much less for MariaDB in all cases except the result for recall >= 0.95
  • For a given recall, MariaDB gets between 2.1X and 2.7X more QPS than Postgres
Benchmark

This post has much more detail about my approach in general. I ran the benchmark for 1 session. I use ann-benchmarks via my fork of a fork of a fork at this commit.  The ann-benchmarks config files are here for MariaDB and for Postgres.

I used a larger server (Hetzner ax162-s) with 48 cores, 128G of RAM, Ubuntu 22.04 and HW RAID 10 using 2 NVMe devices. The database configuration files are here for MariaDB and for Postgres. Output from the benchmark is here.

I had ps and vmstat running during the benchmark and confirmed there weren't storage reads as the table and index were cached by MariaDB and Postgres.

The command lines to run the benchmark using my helper scripts are:
    bash rall.batch.sh v1 dbpedia-openai-100k-angular c32r128
    bash rall.batch.sh v1 dbpedia-openai-500k-angular c32r128
    bash rall.batch.sh v1 dbpedia-openai-1000k-angular c32r128

Results: QPS vs recall

These charts show the best QPS for a given recall. MariaDB gets about 2X more QPS than Postgres at a given recall level.

With 100k rows

With 500k rows

With 1M rows

Results: create index

The database configs for Postgres and MariaDB are shared above, and parallel index create is disabled by the config for Postgres and not supported yet by MariaDB. The summary is:
  • index sizes are similar between MariaDB and pgvector with halfvec
  • time to create the index varies a lot and it is better to consider this in the context of recall, which is done in the next section
Legend
  • M - value for M when creating the index
  • cons - value for ef_construction when creating the index
  • secs - time in seconds to create the index
  • size(MB) - index size in MB
Table sizes:
* Postgres is 7734M
* MariaDB is 7856M

                -- pgvector --          -- pgvector --
                -- float32  --          -- halfvec  --
M       cons    secs    size(MB)        secs    size(MB)
 8       32       458   7734             402    3867
16       32       720   7734             655    3867
 8       64       699   7734             627    3867
16       64      1144   7734            1029    3867
32       64      2033   7734            1880    3867
 8       96       934   7734             843    3867
16       96      1537   7734            1382    3867
32       96      2730   7734            2482    3867
48       96      4039   7734            3725    3867
 8      192      1606   7734            1409    3867
16      192      2778   7734            2435    3867
32      192      4683   7734            4154    3867
48      192      6830   7734            6106    3867
64      192      8601   7734            7831    3958
 8      256      2028   7734            1764    3867
16      256      3609   7734            3151    3867
32      256      5838   7734            5056    3867
48      256      8224   7734            7283    3867
64      256     11031   7734            9931    3957

mariadb
M       secs    size(MB)
 4        318   3976
 5        372   3976
 6        465   3976
 8        717   3976
12       1550   3976
16       2887   3976
24       7248   3976
32      14120   3976
48      36697   3980

Results: best QPS for a given recall

Many benchmark results are marketed via peak performance (max throughput or min response time) but these are usually constrained optimization problems -- determine peak performance that satisfies some SLA. And the SLA might be response time or efficiency (cost).

With ann-benchmarks the constraint is recall. Below I share the best QPS for a given recall target along with the configuration parameters (M, ef_construction, ef_search) at which that occurs for each of the algorithms (MariaDB, pgvector with float32, pgvector with float16/halfvec).

Summary
  • Postgres does not get recall=1.0 for the values of M, ef_construction and ef_search I used
  • Index create time was much less for MariaDB in all cases except the result for recall >= 0.95
  • For a given recall target, MariaDB gets between 2.1X and 2.7X more QPS than Postgres
Legend:
  • recall, QPS - best QPS at that recall
  • isecs - time to create the index in seconds
  • m= - value for M when creating the index
  • ef_cons= - value for ef_construction when creating the index
  • ef_search= - value for ef_search when running queries
Best QPS with recall >= 1.000, pgvector did not reach the recall target
recall  QPS     isecs
1.000    20     36697   MariaDB(m=48, ef_search=40)

Best QPS with recall >= 0.99, MariaDB gets >= 2.2X more QPS than Postgres
recall  QPS     isecs
0.990   287     8224    PGVector(m=48, ef_cons=256, ef_search=40)
0.990   321     7283    PGVector_halfvec(m=48, ef_cons=256, ef_search=40)
0.992   731     7248    MariaDB(m=24, ef_search=10)

Best QPS with recall >= 0.98, MariaDB gets >= 2.7X more QPS than Postgres
recall  QPS     isecs
0.984    375    4683    PGVector(m=32, ef_cons=192, ef_search=40)
0.984    418    4154    PGVector_halfvec(m=32, ef_cons=192, ef_search=40)
0.981   1130    2887    MariaDB(m=16, ef_search=10)

Best QPS with recall >= 0.97, MariaDB gets >= 2.3X more QPS than Postgres
recall  QPS     isecs
0.974    440    6830    PGVector(m=48, ef_cons=192, ef_search=20)
0.973    483    6106    PGVector_halfvec(m=48, ef_cons=192, ef_search=20)
0.981   1130    2887    MariaDB(m=16, ef_search=10)

Best QPS with recall >= 0.96, MariaDB gets >= 2.2X more QPS than Postgres
recall  QPS     isecs
0.962    568    4683    PGVector(m=32, ef_cons=192, ef_search=20)
0.961    635    4154    PGVector_halfvec(m=32, ef_cons=192, ef_search=20)
0.965   1433    1550    MariaDB(m=12, ef_search=10)

Best QPS with recall >= 0.95, MariaDB gets >= 2.1X more QPS
recall  QPS     isecs
0.953    588    2730    PGVector(m=32, ef_cons=96, ef_search=20)
0.957    662    1382    PGVector_halfvec(m=16, ef_cons=96, ef_search=40)
0.965   1433    1550    MariaDB(m=12, ef_search=10)

Tuesday, January 28, 2025

Vector indexes, MariaDB & pgvector, large server, large dataset: part 1

This post has results from ann-benchmarks to compare MariaDB and Postgres with a larger dataset, gist-960-euclidean.  Previous posts (here and here) used fashion-mnist-784-euclidean which is a small dataset. By larger I mean by the standards of what is in ann-benchmarks. This dataset has 1M rows and 960 dimensions. The fashion-mnist-784-euclidean dataset has 60,000 rows and 784 dimensions. Both use Euclidean distance. This work was done by Small Datum LLC and sponsored by the MariaDB Corporation.

tl;dr

  • MariaDB gets between 2.5X and 3.9X more QPS than Postgres for recall >= 0.95

Benchmark

This post has much more detail about my approach in general. I ran the benchmark for 1 session. I use ann-benchmarks via my fork of a fork of a fork at this commit.  The ann-benchmarks config files are here for MariaDB and for Postgres.

I used a larger server (Hetzner ax162-s) with 48 cores, 128G of RAM, Ubuntu 22.04 and HW RAID 10 using 2 NVMe devices. The database configuration files are here for MariaDB and for Postgres.

I had ps and vmstat running during the benchmark and confirmed there weren't storage reads as the table and index were cached by MariaDB and Postgres.

The command line to run the benchmark using my helper scripts is:
    bash rall.batch.sh v1 gist-960-euclidean c32r128

Results: QPS vs recall

This chart shows the best QPS for a given recall. MariaDB gets ~1.5X more QPS than pgvector at low recall and between 2X and 4X more QPS at high recall.


Results: create index

The database configs for Postgres and MariaDB are shared above, and parallel index create is disabled by the config for Postgres and not supported yet by MariaDB. The summary is:
  • index sizes are similar between MariaDB and pgvector with halfvec
  • time to create the index varies a lot and it is better to consider this in the context of recall, which is done in the next section
Table size is 3906 MB for Postgres and 5292 MB for MariaDB.

Legend
  • M - value for M when creating the index
  • cons - value for ef_construction when creating the index
  • secs - time in seconds to create the index
  • size(MB) - index size in MB
                -- pgvector --          -- pgvector --
                -- float32  --          -- halfvec  --
M       cons    secs    size(MB)        secs    size(MB)
 8       32      323    3870             292    2568
16       32      610    7811             551    2603
 8       64      512    3865             466    2565
16       64      964    7684             869    2561
32       64     1958    7812            1773    2604
 8       96      717    3863             646    2564
16       96     1330    7681            1187    2560
32       96     2640    7679            2368    2559
48       96     3990    7812            3606    2606
 8      192     1265    3861            1142    2562
16      192     2295    7679            2036    2559
32      192     4361    7678            3880    2559
48      192     6281    7678            5581    2562
64      192     8589    7678            7612    3839
 8      256     1607    3861            1448    2562
16      256     2882    7678            2560    2559
32      256     5260    7678            4611    2559
48      256     7678    7678            6713    2561
64      256     9962    7678            8851    3839

mariadb
M       secs    size(MB)
 4        243   2316
 5        313   2320
 6        439   2316
 8        775   2316
12       1878   2316
16       3547   2348
24       8690   2696
32      16172   2696
48      38732   2756

Results: best QPS for a given recall

Many benchmark results are marketed via peak performance (max throughput or min response time) but these are usually constrained optimization problems -- determine peak performance that satisfies some SLA. And the SLA might be response time or efficiency (cost).

With ann-benchmarks the constraint is recall. Below I share the best QPS for a given recall target along with the configuration parameters (M, ef_construction, ef_search) at which that occurs for each of the algorithms (MariaDB, pgvector with float32, pgvector with float16/halfvec).
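The selection logic is simple and can be sketched in Python: filter by the recall constraint, then take the config with the max QPS. The tuples below are the recall >= 0.95 results from this post.

```python
# Pick the best QPS among configurations that satisfy a recall target.
# The (name, recall, qps) tuples are measured results from this post.
results = [
    ("PGVector(m=32, ef_cons=256, ef_search=80)", 0.951, 190.8),
    ("PGVector_halfvec(m=32, ef_cons=256, ef_search=80)", 0.951, 208.7),
    ("MariaDB(m=16, ef_search=10)", 0.960, 610.1),
]

def best_qps(results, recall_target):
    # Constrained optimization: max QPS subject to recall >= target.
    ok = [r for r in results if r[1] >= recall_target]
    return max(ok, key=lambda r: r[2]) if ok else None

print(best_qps(results, 0.95))  # the MariaDB config wins on QPS
```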

Summary
  • Postgres does not get recall=1.0 for the values of M, ef_construction and ef_search I used
  • Index create time was less for MariaDB in all cases except the result for recall >= 0.96. However, if you care more about index size than peak QPS then it might be better to look at more results per recall level, as in the best 3 results per DBMS rather than the best as I do here.
  • For a given recall target, MariaDB gets between 2.5X and 3.9X more QPS than Postgres
Legend:
  • recall, QPS - best QPS at that recall
  • isecs - time to create the index in seconds
  • m= - value for M when creating the index
  • ef_cons= - value for ef_construction when creating the index
  • ef_search= - value for ef_search when running queries
Best QPS with recall >= 1.000, Postgres did not reach the recall target
recall  QPS     isecs
1.000    87.1  38732   MariaDB(m=48, ef_search=10)

Best QPS with recall >= 0.99, MariaDB gets 3.7X more QPS than Postgres
recall  QPS     isecs
0.990    81.1   9962    PGVector(m=64, ef_cons=256, ef_search=120)
0.990    85.4   8851    PGVector_halfvec(m=64, ef_cons=256, ef_search=120)
0.991   311.8   8690    MariaDB(m=24, ef_search=10)

Best QPS with recall >= 0.98, MariaDB gets 3.5X more QPS than Postgres
recall  QPS     isecs
0.984   101.1   6281    PGVector(m=48, ef_cons=192, ef_search=120)
0.984   109.7   5581    PGVector_halfvec(m=48, ef_cons=192, ef_search=120)
0.985   384.9   3547    MariaDB(m=16, ef_search=20)

Best QPS with recall >= 0.97, MariaDB gets 2.5X more QPS than Postgres
recall  QPS     isecs
0.973   138.0   5260    PGVector(m=32, ef_cons=256, ef_search=120)
0.971   152.7   3880    PGVector_halfvec(m=32, ef_cons=192, ef_search=120)
0.985   384.9   3547    MariaDB(m=16, ef_search=20)

Best QPS with recall >= 0.96, MariaDB gets 3.9X more QPS than Postgres
recall  QPS     isecs
0.966   139.8   6281    PGVector(m=48, ef_cons=192, ef_search=80)
0.964   155.4   2368    PGVector_halfvec(m=32, ef_cons=96, ef_search=120)
0.960   610.1   3547    MariaDB(m=16, ef_search=10)

Best QPS with recall >= 0.95, MariaDB gets 2.9X more QPS than Postgres
recall  QPS     isecs
0.951   190.8   5260    PGVector(m=32, ef_cons=256, ef_search=80)
0.951   208.7   4611    PGVector_halfvec(m=32, ef_cons=256, ef_search=80)
0.960   610.1   3547    MariaDB(m=16, ef_search=10)

Sunday, January 26, 2025

Vector indexes, MariaDB & pgvector, large server, small dataset: part 2

This post has results for vector index support in MariaDB and Postgres. This work was done by Small Datum LLC and sponsored by the MariaDB Corporation. This is part 2 in a series that compares QPS and recall for the fashion-mnist-784-euclidean dataset using from 1 to 48 concurrent sessions on a large server. Part 1 is here.

The purpose of this post is to explain the results I shared in part 1, where MariaDB does much better than pgvector at low and high concurrency but the performance gap isn't as large at medium concurrency. Here low means <= 4 concurrent sessions, medium means 8 to 20, and high means >= 24.

tl;dr

  • QPS is ~1.4X larger for MariaDB than for pgvector at 2 and 48 concurrent sessions
  • pgvector uses more CPU/query than MariaDB
  • MariaDB does more context switches /query than pgvector
  • MariaDB appears to use less CPU to compute euclidean distance

Benchmark

This post has much more detail about my approach in general and part 1 has more detail on this particular setup. I repeated the benchmark for 2 to 48 concurrent sessions because my test server has 48 cores. I use ann-benchmarks via my fork of a fork of a fork at this commit.

For more on euclidean distance (L2) see here.
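As a refresher, a minimal pure-Python sketch of Euclidean (L2) distance, the computation that dominates the CPU profiles later in this post. The production code paths (FVector::dot_product in MariaDB, HalfvecL2SquaredDistanceF16c in pgvector) use SIMD rather than anything this naive.

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance: sqrt of the sum of squared differences.
    assert len(a) == len(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(l2_distance([0.0, 3.0], [4.0, 0.0]))  # 5.0
```

For kNN ordering, implementations can skip the sqrt and compare squared distances, since sqrt is monotonic.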

For MariaDB:
  • queries use ORDER by vec_distance_euclidean
  • create index uses DISTANCE=euclidean
For Postgres
  • queries use ORDER BY embedding::halfvec(...) <-> $name::halfvec(...)
  • create index uses USING hnsw ((embedding::halfvec(...)) halfvec_l2_ops

Results: QPS by concurrency

The charts in this section show QPS by concurrency level as the benchmark was repeated for 1 to 48 concurrent sessions (X concurrent sessions means X concurrent queries).

The charts come from this spreadsheet. All of the data from the benchmark is here and the data I scraped to make these charts is here. I used configurations that provide a recall of ~0.96.

  • MariaDB - ef_search=10, M=6
  • Postgres - ef_search=10, M=16, ef_construction=32

This chart has absolute QPS for each of the systems tested.

This chart has relative QPS which is: (QPS for Postgres / QPS for MariaDB).

  • The MariaDB advantage is larger at low and high concurrency.
  • The MariaDB advantage isn't as large between 8 and 20 concurrent sessions

And this table also has relative QPS.

  • The MariaDB advantage is larger at low and high concurrency.
  • The MariaDB advantage isn't as large between 8 and 20 concurrent sessions
    sessions    pgvector.halfvec    pgvector
     1          0.60                0.57
     2          0.71                0.69
     4          0.78                0.72
     8          0.89                0.83
    12          0.82                0.79
    16          0.98                0.86
    20          0.96                0.85
    24          0.82                0.81
    28          0.85                0.79
    32          0.90                0.86
    36          0.71                0.76
    40          0.84                0.75
    44          0.70                0.65
    48          0.73                0.64

    This table has (QPS / concurrency). For all systems tested, the QPS per session decreases as the concurrency increases. I suspect the benchmark client is part of the problem, but I am just speculating.

    • multiprocessing.pool is used by the benchmark clients for both MariaDB and pgvector, which is good (less GIL contention). See here for MariaDB and for pgvector.
    • the benchmark client includes all of the time to process queries, including
      • creating & starting a multiprocessing.pool - perhaps the pool can be cached & reused across runs
      • creating a database connection
      • gathering results from the concurrent sessions - some of this is done in the main thread
    • AFAIK, the total number of queries per run is fixed, so the number of queries per session is less when there are more concurrent sessions, and setup overhead (create database connection, create multiprocessing.pool, process results) becomes more significant as the concurrency level increases.
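A toy model of that amortization effect: with a fixed total number of queries split across S sessions, per-session setup cost is spread over fewer queries as S grows. The setup and per-query costs below are made-up numbers for illustration, not measurements.

```python
# Toy model: fixed per-session setup overhead amortized over N/S queries.
# setup_secs and query_secs are illustrative constants, not measured values.
def overhead_fraction(total_queries, sessions, setup_secs=0.5, query_secs=0.001):
    per_session = total_queries / sessions
    return setup_secs / (setup_secs + per_session * query_secs)

for s in (1, 8, 48):
    print(s, round(overhead_fraction(10_000, s), 3))  # 0.048, 0.286, 0.706
```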

    sessions    MariaDB     pgvector.halfvec    pgvector
     1          4639.6      2803.5              2661.3
     2          3376.4      2405.6              2327.3
     4          2869.2      2226.8              2076.0
     8          2149.6      1922.2              1784.0
    12          1858.1      1522.2              1471.8
    16          1527.5      1498.3              1310.5
    20          1353.1      1303.1              1156.3
    24          1261.8      1039.8              1023.5
    28          1062.7       906.6               839.5
    32           909.0       814.8               784.9
    36           918.5       649.0               702.5
    40           783.2       659.8               589.9
    44           745.8       518.9               482.5
    48           680.1       495.9               435.3

    Performance debugging

    The benchmark client does a lot of work (like checking results for recall) which means there is a brief burst of CPU overhead when queries run followed by longer periods where the benchmark client is processing the results. So I modified the benchmark client to only run queries in a loop and avoid other overheads like checking the results for recall. This makes it easier to collect performance data like CPU profiles (perf), PMP stacks and vmstat samples.
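A minimal sketch of that kind of query-only loop using multiprocessing.pool. Here run_query is a hypothetical stand-in for the database call; the real client lives in the ann-benchmarks fork linked above.

```python
import time
from multiprocessing.pool import Pool

def run_query(i):
    # Hypothetical stand-in for executing one kNN query against the DBMS.
    return i * i

def query_loop(n_queries):
    # Query-only loop: no recall checking, no result post-processing,
    # so perf/vmstat samples reflect query execution rather than client overhead.
    return [run_query(i) for i in range(n_queries)]

if __name__ == "__main__":
    sessions, per_session = 2, 1000
    start = time.time()
    with Pool(sessions) as pool:
        pool.map(query_loop, [per_session] * sessions)
    elapsed = time.time() - start
    print(f"~{sessions * per_session / max(elapsed, 1e-9):.0f} QPS")
```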

    Performance debugging: MariaDB

    From a test with 2 concurrent sessions the perf profile shows that much CPU is used to compute the dot product which is used to determine the distance between vectors:

    # Overhead  Command          Shared Object        Symbol
        16.89%  one_connection   mariadbd.orig        [.] FVector::dot_product
         4.71%  one_connection   mariadbd.orig        [.] escape_string_for_mysql
         3.42%  one_connection   mariadbd.orig        [.] search_layer
         2.99%  one_connection   mariadbd.orig        [.] buf_page_get_gen
         2.31%  one_connection   mariadbd.orig        [.] my_charlen_utf8mb4
         2.16%  one_connection   mariadbd.orig        [.] MYSQLparse
         2.03%  one_connection   libc.so.6            [.] __memmove_avx512_unaligned_erms
         1.74%  one_connection   mariadbd.orig        [.] PatternedSimdBloomFilter<FVectorNode>::Query
         1.58%  one_connection   libm.so.6            [.] __roundf
         1.49%  one_connection   mariadbd.orig        [.] mtr_memo_slot_t::release
         1.40%  one_connection   mariadbd.orig        [.] mhnsw_read_first
         1.32%  one_connection   mariadbd.orig        [.] page_cur_search_with_match
         1.09%  one_connection   libc.so.6            [.] __memcmp_evex_movbe
         1.06%  one_connection   mariadbd.orig        [.] FVectorNode::distance_to
         1.03%  one_connection   mariadbd.orig        [.] row_search_mvcc
         0.98%  one_connection   mariadbd.orig        [.] rec_get_offsets_func
         0.93%  one_connection   mariadbd.orig        [.] cmp_data
         0.93%  one_connection   mariadbd.orig        [.] alloc_root
         0.75%  one_connection   mariadbd.orig        [.] Visited::cmp

    And then the result with 48 concurrent sessions

    • the percentage of time in dot_product was ~17% above, but only ~11.5% here
    • more time is spent in InnoDB functions like buf_page_get_gen, mtr_memo_slot_t::release, page_cur_search_with_match, btr_cur_t::search_leaf, ssux_lock::psi_rd_lock, rec_get_offsets_func and buf_page_make_young_if_needed. Some of that might be expected but it can also be a sign of too much mutex contention in InnoDB.
    • I don't see signs of mutex contention in PMP output

    # Overhead  Command          Shared Object        Symbol
        11.49%  one_connection   mariadbd.orig        [.] FVector::dot_product
         7.17%  one_connection   mariadbd.orig        [.] buf_page_get_gen
         4.44%  one_connection   mariadbd.orig        [.] search_layer
         4.00%  one_connection   mariadbd.orig        [.] escape_string_for_mysql
         2.49%  one_connection   mariadbd.orig        [.] mtr_memo_slot_t::release
         2.23%  one_connection   mariadbd.orig        [.] page_cur_search_with_match
         1.86%  one_connection   mariadbd.orig        [.] MYSQLparse
         1.85%  one_connection   mariadbd.orig        [.] my_charlen_utf8mb4
         1.75%  one_connection   libc.so.6            [.] __memmove_avx512_unaligned_erms
         1.60%  one_connection   mariadbd.orig        [.] btr_cur_t::search_leaf
         1.38%  one_connection   mariadbd.orig        [.] ssux_lock::psi_rd_lock
         1.37%  one_connection   mariadbd.orig        [.] mhnsw_read_first
         1.32%  one_connection   mariadbd.orig        [.] FVectorNode::distance_to
         1.23%  one_connection   mariadbd.orig        [.] rec_get_offsets_func
         1.19%  one_connection   mariadbd.orig        [.] PatternedSimdBloomFilter<FVectorNode>::Query
         1.02%  one_connection   mariadbd.orig        [.] FVectorNode::load
         0.97%  one_connection   libm.so.6            [.] __roundf
         0.88%  one_connection   mariadbd.orig        [.] buf_page_make_young_if_needed
         0.83%  one_connection   mariadbd.orig        [.] cmp_dtuple_rec_with_match_low

    From vmstat with 2 concurrent sessions

    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     2  0  96784 89703408 1982284 34588692    0    0     0    70 1401 56743  3  1 95  0  0
     2  0  96784 89703408 1982284 34588696    0    0     0     9 1356 56588  4  1 96  0  0
     2  0  96784 89703408 1982284 34588696    0    0     0     0 1434 56755  3  1 95  0  0
     2  0  96784 89703408 1982284 34588696    0    0     0     0 1340 56629  4  1 96  0  0
     2  0  96784 89702672 1982288 34588700    0    0     0    86 1810 56874  4  1 95  0  0

    From vmstat with 48 concurrent sessions

    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
    58  0  96784 89428544 1988284 34733232    0    0     0   290 27883 637320 80 20  0  0  0
    56  0  96784 89430560 1988284 34733236    0    0     0     0 28393 639018 81 19  0  0  0
    52  0  96784 89430560 1988284 34733236    0    0     0     0 28240 638914 80 20  0  0  0
    59  0  96784 89430560 1988284 34733236    0    0     0     0 33141 642055 80 20  1  0  0
    56  0  96784 89430560 1988284 34733236    0    0     0     0 29520 639747 81 19  0  0  0

    Comparing the vmstat results for 2 vs 48 sessions

    • CPU/query is (vmstat.us + vmstat.sy) / QPS
      • For 2 sessions it is ((4+5+4+5+5) / 5) / 6752.7 = .000681
      • For 48 sessions it is ((100+100+100+100+100)/5) / 32645.5 = .003063
      • CPU/query is ~4.5X larger at 48 sessions
    • Context switches /query is vmstat.cs / QPS
      • For 2 sessions it is 56743 / 6752.7 = 8.40
      • For 48 sessions it is 637320 / 32645.5 = 19.52
      • Context switches /query is ~2.3X larger at 48 sessions
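The arithmetic above can be reproduced directly from the vmstat samples (values copied from the MariaDB samples for 2 and 48 sessions):

```python
# CPU/query and context switches/query from the MariaDB vmstat samples above.
us_sy_2  = [3 + 1, 4 + 1, 3 + 1, 4 + 1, 4 + 1]  # us + sy per sample, 2 sessions
us_sy_48 = [100] * 5                             # us + sy is saturated at 48 sessions
qps_2, qps_48 = 6752.7, 32645.5
cs_2, cs_48 = 56743, 637320

cpu_per_query_2  = (sum(us_sy_2) / len(us_sy_2)) / qps_2
cpu_per_query_48 = (sum(us_sy_48) / len(us_sy_48)) / qps_48
print(round(cpu_per_query_2, 6), round(cpu_per_query_48, 6))  # 0.000681 0.003063
print(round(cpu_per_query_48 / cpu_per_query_2, 1))           # 4.5
print(round(cs_2 / qps_2, 2), round(cs_48 / qps_48, 2))       # 8.4 19.52
```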

    Performance debugging: pgvector with halfvec

    From a test with 2 concurrent sessions the perf profile shows

    • computing L2 distance accounts for the most time: 25.98% here vs 16.89% for MariaDB at 2 concurrent sessions above. Perhaps MariaDB is faster at computing L2 distance, or perhaps MariaDB has more overhead elsewhere that reduces the fraction of time spent on L2 distance. I suspect the former.
    • Postgres here appears to have more CPU overhead than MariaDB in accessing the data (PinBuffer, LWLock, etc)

    # Overhead  Command   Shared Object      Symbol
        25.98%  postgres  vector.so          [.] HalfvecL2SquaredDistanceF16c
         9.68%  postgres  postgres           [.] PinBuffer
         6.46%  postgres  postgres           [.] hash_search_with_hash_value
         5.39%  postgres  postgres           [.] LWLockRelease
         4.25%  postgres  postgres           [.] pg_detoast_datum
         3.95%  postgres  vector.so          [.] vector_to_halfvec
         3.06%  postgres  postgres           [.] LWLockAttemptLock
         2.97%  postgres  postgres           [.] LWLockAcquire
         2.89%  postgres  vector.so          [.] HnswLoadUnvisitedFromDisk
         2.44%  postgres  postgres           [.] StartReadBuffer
         1.82%  postgres  postgres           [.] GetPrivateRefCountEntry
         1.70%  postgres  vector.so          [.] tidhash_insert_hash_internal
         1.65%  postgres  postgres           [.] LockBuffer
         1.59%  postgres  vector.so          [.] HnswLoadElementImpl
         0.88%  postgres  libc.so.6          [.] __memcmp_evex_movbe
         0.80%  postgres  postgres           [.] ItemPointerEquals
         0.79%  postgres  postgres           [.] ResourceOwnerForget
         0.71%  postgres  vector.so          [.] HnswSearchLayer
         0.64%  postgres  postgres           [.] pq_getmsgfloat4

    And then from a test with 48 concurrent sessions

    • The fraction of time computing L2 distance here is less than it is for 2 sessions above. This is similar to the results for MariaDB.
    • From PMP I don't see signs of mutex contention, but I only took 3 samples

    # Overhead  Command   Shared Object      Symbol
        19.86%  postgres  vector.so          [.] HalfvecL2SquaredDistanceF16c
        10.89%  postgres  postgres           [.] PinBuffer
         6.78%  postgres  postgres           [.] hash_search_with_hash_value
         5.44%  postgres  postgres           [.] LWLockRelease
         5.25%  postgres  postgres           [.] LWLockAttemptLock
         4.97%  postgres  postgres           [.] pg_detoast_datum
         3.30%  postgres  vector.so          [.] vector_to_halfvec
         2.62%  postgres  vector.so          [.] HnswLoadUnvisitedFromDisk
         2.61%  postgres  postgres           [.] StartReadBuffer
         2.03%  postgres  postgres           [.] GetPrivateRefCountEntry
         1.70%  postgres  postgres           [.] LockBuffer
         1.69%  postgres  vector.so          [.] HnswLoadElementImpl
         1.49%  postgres  postgres           [.] LWLockAcquire
         1.28%  postgres  vector.so          [.] tidhash_insert_hash_internal
         0.79%  postgres  postgres           [.] ReadBufferExtended
         0.78%  postgres  postgres           [.] AllocSetAlloc
         0.76%  postgres  libc.so.6          [.] __memcmp_evex_movbe
         0.65%  postgres  postgres           [.] ResourceOwnerForget
         0.63%  postgres  vector.so          [.] HnswSearchLayer

    From vmstat with 2 concurrent sessions

    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     2  0  96784 90724048 1997176 36391152    0    0     0    80 4188 24830  4  0 96  0  0
     2  0  96784 90724304 1997176 36391152    0    0     0     9 4079 24931  4  0 96  0  0
     2  0  96784 90724304 1997176 36391152    0    0     0   210 4117 24857  4  0 96  0  0
     2  0  96784 90724048 1997176 36391152    0    0     0    42 4160 24946  4  0 96  0  0
     2  0  96784 90723576 1997176 36391152    0    0     0    51 4511 24971  4  1 96  0  0

    From vmstat with 48 concurrent sessions

    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
    88  0  96784 90323440 2001256 36414792    0    0     0     0 13129 224133 92  8  0  0  0
    86  0  96784 90323440 2001256 36414796    0    0     0   197 13011 223926 92  8  0  0  0
    85  0  96784 90322936 2001256 36414796    0    0     0    48 13118 224239 92  8  0  0  0
    83  0  96784 90322688 2001264 36414800    0    0     0    56 13084 223987 92  8  0  0  0
    86  0  96784 90324952 2001264 36414800    0    0     0    17 13361 224189 92  8  0  0  0

    Comparing the vmstat results for 2 vs 48 sessions

    • QPS is ~1.4X larger for MariaDB at 2 and 48 sessions
    • CPU/query is (vmstat.us + vmstat.sy) / QPS
      • For 2 sessions it is ((4+4+4+4+5)/5) / 4811.2 = .000872
      • For 48 sessions it is ((100+100+100+100+100)/5) / 23802.6 = .004201
      • CPU/query is ~4.8X larger at 48 sessions than at 2
      • CPU/query for Postgres is ~1.3X larger than MariaDB at 2 sessions and ~1.4X larger at 48
    • Context switches /query is vmstat.cs / QPS
      • For 2 sessions it is 24830 / 4811.2 = 5.16
      • For 48 sessions it is 224133 / 23802.6 = 9.41
      • Context switches /query is ~1.8X larger at 48 sessions
      • Context switches /query for MariaDB is ~1.6X larger than Postgres at 2 sessions and ~2.1X larger at 48

