
Parallel IO Read Timings #305

Open
lrweck opened this issue Nov 3, 2020 · 7 comments · May be fixed by #677
Labels
question (Further information is requested)

Comments

@lrweck

lrweck commented Nov 3, 2020

When using EXPLAIN (ANALYZE, TIMING), I/O timings can be confusing for parallel nodes.


Example: https://explain.dalibo.com/plan/avr0

What do you think of showing the maximum time across all workers on this tab, and the individual timings for each worker on the Misc tab?
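For reference, the "I/O Timings" lines only appear in EXPLAIN output when track_io_timing is enabled and the BUFFERS option is requested. A minimal sketch to produce such a plan (the table name is hypothetical):

```sql
-- Changing track_io_timing requires superuser (or equivalent) privileges.
SET track_io_timing = on;

-- The "I/O Timings" lines show up in the Buffers section of the output.
EXPLAIN (ANALYZE, BUFFERS, TIMING)
SELECT * FROM some_table;  -- hypothetical table
```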

@pgiraud
Member

pgiraud commented Nov 6, 2020

@lrweck it looks like you removed this plan. Can you share it again?

@pgiraud
Member

pgiraud commented Dec 31, 2020

@lrweck are you able to provide a plan so that I can better understand the issue?

@pgiraud added the question (Further information is requested) label on Feb 16, 2021
@pgiraud
Member

pgiraud commented Apr 20, 2021

@lrweck I'm still looking for more info. Thanks.

@pgiraud
Member

pgiraud commented Jun 4, 2021

Here's a plan that should help reproduce the bug.

https://explain.dalibo.com/plan/6Z4#plan/node/3

                                                                QUERY PLAN                                                                 
-------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1000.00..2345159.78 rows=1000000 width=45) (actual time=37.662..878.566 rows=1000000 loops=1)
   Buffers: shared hit=5 read=22612
   I/O Timings: read=4103.590
   ->  Gather  (cost=1000.00..6712463.06 rows=2863057 width=45) (actual time=0.268..706.579 rows=1000000 loops=1)
         Workers Planned: 8
         Workers Launched: 6
         Buffers: shared hit=5 read=22612
         I/O Timings: read=4103.590
         ->  Parallel Seq Scan on pwned_vm  (cost=0.00..6425157.36 rows=357882 width=45) (actual time=83.388..750.808 rows=142907 loops=7)
               Filter: (mod(n, 2) = 0)
               Rows Removed by Filter: 202716
               Buffers: shared hit=5 read=22612
               I/O Timings: read=4103.590
 Planning Time: 0.163 ms
 JIT:
   Functions: 15
   Options: Inlining true, Optimization true, Expressions true, Deforming true
   Timing: Generation 3.554 ms, Inlining 409.811 ms, Optimization 119.960 ms, Emission 54.414 ms, Total 587.740 ms
 Execution Time: 962.483 ms
(19 rows)
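For completeness, here is a hypothetical reproduction sketch inferred from the plan above; the table definition, row count, and settings are assumptions, not the original setup:

```sql
-- Assumed table shape: the plan only tells us there is an integer
-- column n and rows of width ~45 bytes; everything else is a guess.
CREATE TABLE pwned_vm AS
SELECT g AS n, md5(g::text) AS payload
FROM generate_series(1, 5000000) AS g;

SET track_io_timing = on;
SET max_parallel_workers_per_gather = 8;  -- plan shows "Workers Planned: 8"

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM pwned_vm
WHERE mod(n, 2) = 0
LIMIT 1000000;
```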

@Krysztophe

I can reproduce at will:

without parallelization:
https://explain.dalibo.com/plan/4J
with:
https://explain.dalibo.com/plan/5gW

178 MB are read in both cases (from the OS cache), but it takes 500 ms in one case and 5 s in the other.

I suspect that the track_io_timing value is multiplied by the number of loops.
That's not a problem in pev2 itself.
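A quick sanity check on the plan from the previous comment is consistent with that hypothesis, assuming the reported figure is summed across all loops:

```sql
-- The pwned_vm plan reports "I/O Timings: read=4103.590" with loops=7
-- (6 launched workers plus the leader). If that figure is a sum, the
-- average per loop fits comfortably inside the 962.483 ms execution time:
SELECT round(4103.590 / 7, 1) AS avg_io_read_ms_per_loop;  -- ≈ 586.2
```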

@Krysztophe

According to @ioguix, summing the workers' I/O timings may be normal behavior.
If so, I think pev2 should add a hint: perhaps the average per worker (beware of parallel_leader_participation), or simply the word "cumulative" before "I/O Timings"; a sketch of that arithmetic follows below.
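A minimal sketch of the computation such a hint could show, assuming loops = workers launched + 1 when parallel_leader_participation is on (the default, in which case the leader also runs the plan):

```sql
-- Hypothetical per-worker average for the pwned_vm plan above
-- (6 workers launched, cumulative I/O read time 4103.590 ms):
SELECT round(4103.590 / (6 + 1), 1) AS avg_ms_leader_participates,  -- default
       round(4103.590 / 6, 1)       AS avg_ms_workers_only;
```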

Here is another example with pgbench at scale 100, with all caches dropped between each run (PG 13):

@pgiraud
Member

pgiraud commented Oct 10, 2024

We have several locations where the IO timing is displayed.

[Screenshot from 2024-10-10 showing the locations where the I/O timing is displayed]

pgiraud added a commit that referenced this issue Oct 10, 2024
The timings may be confusing because they can be greater than the global timing. In the case of parallel-aware nodes, we now show a message in the node details, plan stats, and diagram tooltip.

Fixes #305
@pgiraud linked a pull request on Oct 10, 2024 that will close this issue