
Replace BackgroundProcessingPool with SchedulePool task and ThreadPool. #15983


Merged: 52 commits merged into master from no_background_pool_no_more on Nov 5, 2020

Conversation


@alesapin (Member) commented on Oct 14, 2020

I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Simplify the implementation of background task processing for the MergeTree table engine family. There should be no visible changes for users.
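
For illustration only, here is a minimal, self-contained C++ sketch of the general pattern this PR moves towards: a single lightweight rescheduling loop (standing in for a schedule-pool task) decides when background work should run and hands execution to a shared pool of worker threads, instead of a fixed BackgroundProcessingPool where each thread polls storages itself. `SimpleThreadPool`, the job contents, and the timings are hypothetical stand-ins, not the actual ClickHouse `ThreadPool`/`BackgroundSchedulePool` APIs.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

/// Hypothetical stand-in for a shared worker pool (not ClickHouse's ThreadPool).
class SimpleThreadPool
{
public:
    explicit SimpleThreadPool(size_t num_threads)
    {
        for (size_t i = 0; i < num_threads; ++i)
            workers.emplace_back([this] { workerLoop(); });
    }

    ~SimpleThreadPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            shutdown = true;
        }
        cv.notify_all();
        for (auto & worker : workers)
            worker.join();
    }

    /// Enqueue a job; one of the worker threads will pick it up.
    void schedule(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            jobs.push(std::move(job));
        }
        cv.notify_one();
    }

private:
    void workerLoop()
    {
        while (true)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex);
                cv.wait(lock, [this] { return shutdown || !jobs.empty(); });
                if (shutdown && jobs.empty())
                    return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();
        }
    }

    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex mutex;
    std::condition_variable cv;
    bool shutdown = false;
};

int main()
{
    SimpleThreadPool pool(4);
    std::atomic<bool> stop{false};

    /// Stands in for a schedule-pool task: one loop decides when background work
    /// (e.g. a merge) should run and submits it to the shared pool, instead of
    /// dedicating a polling thread per job as the old BackgroundProcessingPool did.
    std::thread scheduler([&]
    {
        while (!stop)
        {
            pool.schedule([] { std::cout << "executing a background job\n"; });
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    });

    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    stop = true;
    scheduler.join();
    return 0;
}
```

The only point of the sketch is the split of responsibilities: deciding when work should run (the scheduler) is separated from executing it (the pool), so the pool can be shared across tables and sized independently.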

@robot-clickhouse added the pr-improvement label (Pull request with some product improvements) on Oct 14, 2020
@alesapin changed the title from "[WIP] Trying to replace BackgroundProcessingPool with ThreadPool" to "[WIP] Trying to replace BackgroundProcessingPool with ThreadPool [try 2]" on Oct 14, 2020
@alesapin (Member, Author) commented:

Stress test:

2020.10.21 02:51:14.812613 [ 420272 ] {} <Error> Application: Caught exception while loading metadata: Code: 57, e.displayText() = DB::Exception: Mapping for table with UUID=00000738-1000-4000-8000-000000000001 already exists: Cannot attach table `test_16`.`mv` from metadata file /var/lib/clickhouse/store/b87/b87c0582-b3b2-4e92-b7fb-0b49c866ebd8/mv.sql from query ATTACH MATERIALIZED VIEW test_16.mv UUID '00000738-1000-4000-8000-000000000001' (`a` Int32) ENGINE = Log AS SELECT a FROM test_16.tab_00738: while loading database `test_16` from path /var/lib/clickhouse/metadata/test_16, Stack trace (when copying this message, always include the lines below):

@alesapin (Member, Author) commented:

No performance degradation on "merge tests":
insert_sequential:

display-name    0       INSERT INTO t SELECT * FROM numbers(20000)
prewarm 0       insert_sequential.query0.prewarm0       0       2.492506265640259
prewarm 0       insert_sequential.query0.prewarm0       1       2.634366273880005
query   0       insert_sequential.query0.run0   0       2.502760410308838
query   0       insert_sequential.query0.run0   1       2.6985974311828613
query   0       insert_sequential.query0.run1   0       2.622211456298828
query   0       insert_sequential.query0.run1   1       2.68269419670105
query   0       insert_sequential.query0.run2   0       2.594139337539673
query   0       insert_sequential.query0.run2   1       2.7583072185516357
query   0       insert_sequential.query0.run3   0       2.702134847640991
query   0       insert_sequential.query0.run3   1       2.678210735321045
query   0       insert_sequential.query0.run4   0       2.6729700565338135
query   0       insert_sequential.query0.run4   1       2.9415955543518066
query   0       insert_sequential.query0.run5   0       2.589402198791504
query   0       insert_sequential.query0.run5   1       2.9460606575012207
query   0       insert_sequential.query0.run6   0       2.6683897972106934
query   0       insert_sequential.query0.run6   1       3.1266672611236572
query   0       insert_sequential.query0.run7   0       2.779920816421509
query   0       insert_sequential.query0.run7   1       3.388561964035034
query   0       insert_sequential.query0.run8   0       3.215115785598755
query   0       insert_sequential.query0.run8   1       3.2069432735443115
query   0       insert_sequential.query0.run9   0       3.0471608638763428
query   0       insert_sequential.query0.run9   1       3.2801365852355957
query   0       insert_sequential.query0.run10  0       2.925520658493042
query   0       insert_sequential.query0.run10  1       3.3361318111419678
query   0       insert_sequential.query0.run11  0       3.204563856124878
query   0       insert_sequential.query0.run11  1       3.8117313385009766
query   0       insert_sequential.query0.run12  0       2.956451892852783
query   0       insert_sequential.query0.run12  1       3.3952388763427734

read_in_order_many_parts:

create  0       23.193586349487305      INSERT INTO mt_100_parts SELECT number, rand() % 10000, rand() FROM numbers_mt(100000000)
create  1       23.504029035568237      INSERT INTO mt_100_parts SELECT number, rand() % 10000, rand() FROM numbers_mt(100000000)
create  1       41.7405641078949        INSERT INTO mt_1000_parts SELECT number, rand() % 10000, rand() FROM numbers_mt(100000000)
create  0       43.55165767669678       INSERT INTO mt_1000_parts SELECT number, rand() % 10000, rand() FROM numbers_mt(100000000)
create  1       3.8340775966644287      OPTIMIZE TABLE mt_100_parts FINAL
create  0       3.8088648319244385      OPTIMIZE TABLE mt_100_parts FINAL
create  0       156.07441067695618      OPTIMIZE TABLE mt_1000_parts FINAL
create  1       166.2251718044281       OPTIMIZE TABLE mt_1000_parts FINAL

@alesapin (Member, Author) commented:

Related issue: #10987

@alesapin (Member, Author) commented:

Performance looks OK.

@alexey-milovidov merged commit ee3e289 into master on Nov 5, 2020
@alexey-milovidov deleted the no_background_pool_no_more branch on Nov 5, 2020 at 18:38