Multi column order by kills postgresql performance, any way to fix?
I have a really simple query I'm running on a large table (500k rows) to page results.
Originally I was using this query, which is really fast:
select * from deck
order by
deck.sas_rating desc
limit 10
Its explain analyze shows a 0.2ms execution time. Cool.
But the sas_rating column has duplicate integer values, and I realized when paging through the results (using offset for other pages) that I was getting duplicate results. No problem, add the primary key as a secondary order by. But the performance is terrible.
select * from deck
order by
deck.sas_rating desc,
deck.id asc
limit 10
That takes 685ms with an explain analyze of:
Limit (cost=164593.15..164593.17 rows=10 width=1496) (actual time=685.138..685.139 rows=10 loops=1)
-> Sort (cost=164593.15..165866.51 rows=509343 width=1496) (actual time=685.137..685.137 rows=10 loops=1)
Sort Key: sas_rating DESC, id
Sort Method: top-N heapsort Memory: 59kB
-> Seq Scan on deck (cost=0.00..153586.43 rows=509343 width=1496) (actual time=0.009..593.444 rows=509355 loops=1)
Planning time: 0.143 ms
Execution time: 685.171 ms
It's even worse on my weaker production server. My search went from 125ms total to 35 seconds!
I tried adding a multi-column index, but that didn't improve performance. Is there any way to prevent duplicate results when using limit + offset without destroying the performance of the query?
sql postgresql
If you just want to get rid of duplicates on this column, it seems there is a distinct on in Postgres: dba.stackexchange.com/a/24328 (near the end).
– dyukha
Jan 19 at 6:50
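For reference, this is roughly what the distinct on suggestion would look like against the deck table from the question. It is a sketch of the syntax only; as the follow-up comment points out, it would drop whole rows that share a rating rather than fix the paging problem:

```sql
-- Hypothetical sketch: keep only one deck row per sas_rating value.
-- Postgres requires the DISTINCT ON expression to lead the ORDER BY.
select distinct on (sas_rating) *
from deck
order by sas_rating desc, id asc
limit 10;
```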
@dyukha There aren't duplicate rows. It's that when I select with limit 10, and then do another select with limit 10 offset 10, I can retrieve some of the same results because sas_rating contains non-unique values. See this SO question for the problem, but no solution that works for me.
– CorayThan
Jan 19 at 6:52
I see. What if you try to order by ROW_NUMBER() OVER(ORDER BY id)? Maybe something like here: zaiste.net/row_number_in_postgresql , but with order by instead of where. Sorry, I'm not sure it even works and have nowhere to test it.
– dyukha
Jan 19 at 7:01
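One reading of that suggestion, as a sketch: number the rows over the desired sort and page on the number. Note the commenter has not tested this, and since the window function still has to sort the whole table it is unlikely to be faster than the original query:

```sql
-- Hypothetical sketch of the ROW_NUMBER() idea: assign a stable row
-- number over the full sort order, then filter on it for "page 2".
select *
from (
  select deck.*,
         row_number() over (order by sas_rating desc, id asc) as rn
  from deck
) numbered
where rn between 11 and 20
order by rn;
```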
Another option for pagination is described here: use-the-index-luke.com/no-offset
– a_horse_with_no_name
Jan 19 at 9:24
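The no-offset approach linked above is keyset (seek) pagination: instead of an offset, the next page starts after the last (sas_rating, id) pair the client has already seen. A sketch, assuming the last row of the previous page had sas_rating = 90 and id = 123 (both values hypothetical). Because the sort mixes desc and asc, the condition has to be spelled out rather than written as a single row-value comparison:

```sql
-- Keyset pagination: resume strictly after the previous page's last row.
select *
from deck
where sas_rating < 90
   or (sas_rating = 90 and id > 123)
order by sas_rating desc, deck.id asc
limit 10;
```

With a matching index this never visits skipped rows, so deep pages stay as fast as page one, which is the point of the linked article.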
edited Jan 19 at 9:22 by a_horse_with_no_name
asked Jan 19 at 6:32 by CorayThan
1 Answer
I believe you probably added your multi-column index wrong - for example you used sas_rating, id instead of sas_rating desc, id, and only the latter can be used for your order by.
The index on sas_rating alone was usable for sas_rating desc sorting because the database can scan a single-column index backwards. That trick does not help a mixed-direction multi-column sort: scanning (sas_rating, id) backwards yields sas_rating desc, id desc, not sas_rating desc, id asc. You have to preserve the ascending/descending pattern of the sort columns in the index (or its exact mirror image).
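A sketch of the index this answer describes; the table and column names come from the question, and the index name is made up:

```sql
-- Column directions match the ORDER BY exactly, so Postgres can read
-- the first 10 rows straight off the index instead of sorting 500k rows.
create index deck_sas_rating_desc_id_asc_idx
    on deck (sas_rating desc, id asc);

-- The paged query can now stop after 10 index entries:
select * from deck
order by deck.sas_rating desc, deck.id asc
limit 10;
```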
You're right. I just needed my multi-column index to use the proper desc order for sas_rating. I didn't even realize indexes had desc vs asc!
– CorayThan
Jan 19 at 9:33
@Tometzky . . . Wow! Great observation. I think Postgres should be smart enough to use (sas_rating, id) for this purpose. But that would require implementing skip scans.
– Gordon Linoff
Jan 19 at 11:20
answered Jan 19 at 9:16 by Tometzky