Optimizing Top K in Postgres

Posted by philippemnoel 2 days ago


Comments

Comment by bob1029 19 hours ago

Lucene really does feel like magic sometimes. It was designed expressly to solve the top K problem at hyper scale. It's incredibly mature technology. You can go from zero to a billion documents without thinking too much about anything other than the amount of mass storage you have available.

Every time I've used Lucene I have combined it with a SQL provider. It's not necessarily about one or the other. The FTS facilities within the various SQL providers are convenient, but not as capable by comparison. I don't think mixing these into the same thing makes sense. They are two very different animals that are better joined by way of the document ids.

Comment by philippemnoel 10 hours ago

Under the hood, ParadeDB is built by integrating Tantivy (a Lucene-inspired Rust search library) inside Postgres. This is to say: I agree with you -- we're not trying to claim that Postgres itself is an alternative to Lucene, but rather that something like Lucene can be integrated inside Postgres so that you get the power of both in a single system (or in a cluster of Postgres instances).

Comment by monadicmonad 2 hours ago

why do I want them in a single system?

Comment by jmgimeno 19 hours ago

Maybe I'm wrong, but for this query:

SELECT * FROM benchmark_logs WHERE severity < 3 ORDER BY timestamp DESC LIMIT 10;

this index

CREATE INDEX ON benchmark_logs (severity, timestamp);

cannot be used as proposed: "Postgres can jump directly to the portion of the tree matching severity < 3 and then walk the timestamps in descending order to get the top K rows."

With this index, Postgres can jump to the part of the tree where severity < 3, but the timestamps are sorted only within each severity value, not across them.
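To make this concrete, here's a sketch of how the composite index lays out its keys (schema assumed from the query quoted above):

```sql
-- Assumed schema, following the article's example query.
CREATE TABLE benchmark_logs (
    severity  int,
    timestamp timestamptz
);
CREATE INDEX ON benchmark_logs (severity, timestamp);

-- Index entries are ordered by (severity, timestamp), e.g.:
--   (1, 2024-01-03), (1, 2024-01-09), (2, 2024-01-01), (2, 2024-01-07)
-- A range scan over severity < 3 therefore yields timestamps that are
-- sorted only within each severity value, not globally, so
-- ORDER BY timestamp DESC LIMIT 10 still needs a top-K sort.
```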

Comment by Cervisia 18 hours ago

The SQLite documentation explains how (and how well) this works: https://www.sqlite.org/optoverview.html#the_skip_scan_optimi...

Comment by igorw 17 hours ago

While Postgres did introduce skip scan in 18, it only works for equality matching: https://www.crunchydata.com/blog/get-excited-about-postgres-...

Comment by dragon96 19 hours ago

If severity is a low-cardinality enum, it still seems acceptable.

Comment by mattashii 17 hours ago

The order returned by the index scan is not the ordering the user requested, so there would still have to be a full (or top-K) sort over the rows the index scan returns, which could negate the gains of using an index scan in the first place. PostgreSQL itself does not produce merge plans that combine a spread of index scans to get suffix-ordered data out of an index.
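One way to emulate that merge by hand (a sketch, assuming severity only takes the values 0, 1, and 2) is to fetch the top K per severity and re-sort the small candidate set; each branch can walk the (severity, timestamp) index in order:

```sql
SELECT * FROM (
    (SELECT * FROM benchmark_logs WHERE severity = 0
     ORDER BY timestamp DESC LIMIT 10)
    UNION ALL
    (SELECT * FROM benchmark_logs WHERE severity = 1
     ORDER BY timestamp DESC LIMIT 10)
    UNION ALL
    (SELECT * FROM benchmark_logs WHERE severity = 2
     ORDER BY timestamp DESC LIMIT 10)
) AS candidates
ORDER BY timestamp DESC
LIMIT 10;
```

The final sort then runs over at most 30 candidate rows rather than the whole filtered set.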

Comment by Tostino 13 hours ago

Do a partial index on just timestamp where severity < 3 instead.
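A sketch of that, reusing the names from the query above:

```sql
-- Only rows with severity < 3 are indexed, already in timestamp order.
CREATE INDEX ON benchmark_logs (timestamp DESC) WHERE severity < 3;

-- The query's WHERE clause must match the index predicate for the
-- planner to consider the partial index:
SELECT * FROM benchmark_logs
WHERE severity < 3
ORDER BY timestamp DESC
LIMIT 10;
```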

Comment by davidelettieri 20 hours ago

The "But Wait, We Need Filters Too" paragraph mentions "US" filter which is introduced only later on.

Comment by GrayShade 20 hours ago

And footnote 3 is unreferenced.

Comment by philippemnoel 9 hours ago

Good catch, thank you both -- fixed!

Comment by h1fra 17 hours ago

Postgres is really good at a lot of things, but it's very unfortunate that it's really bad at simple analytics. I wish there were a plugin instead of having to run N databases.

Comment by esafak 11 hours ago

ParadeDB is a pair of Postgres plugins.

Comment by philippemnoel 10 hours ago

(ParadeDB maintainer here) Yes! We've already built some faster analytics in Postgres, and have a lot more coming. Here's some relevant context in case you're curious: https://www.paradedb.com/blog/faceting

Comment by ajstars 12 hours ago

Curious whether you benchmarked against a partial index on the sort column. For fixed-category top-K queries the planner sometimes picks it over a full index scan, though I've seen it regress on high-write tables due to index bloat. Did write volume factor into your test setup?

Comment by Vadim_samokhin 18 hours ago

Just in case, there is a btree_gin extension which can be used in queries combining gin-indexable column and btree-indexable column. It doesn’t solve top-K ordering problem though.
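For reference, a sketch of how that might look (table and column names here are made up):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- A single GIN index covering a tsvector column and a plain
-- btree-indexable column such as an integer:
CREATE INDEX ON articles USING gin (search_vector, category_id);

-- Useful for combined filters like:
--   SELECT * FROM articles
--   WHERE search_vector @@ to_tsquery('postgres')
--     AND category_id = 7;
```

As noted, this helps with combined filtering but not with producing top-K ordered output.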


Comment by tacone 19 hours ago

The issue here is the row-based format. You simply can't filter on arbitrary columns with that. Either use an external warehouse or a columnar plug-in like Timescale.

Comment by hrmtst93837 16 hours ago

Columnar storage solves some query patterns, but treating the row format as a dealbreaker for top-K is an overreach. For small-to-mid-sized datasets with the right index, Postgres handles top-K on composite keys well, especially if reads aren't scanning millions of rows or the hot columns fit in memory.

If latency really matters and you're working with large datasets, columnar extensions help, but they come with operational overhead and can limit transactional features, so it's usually better to stick with row-based storage unless you have a clear need.