[FLINK-19121][hive] Avoid accessing HDFS frequently in HiveBulkWriterFactory #13301

Merged
merged 1 commit into apache:master on Sep 4, 2020

Conversation

JingsongLi
Contributor

What is the purpose of the change

In HadoopPathBasedBulkWriter, getSize invokes FileSystem.exists and FileSystem.getFileStatus, and it is called once per record.
This results in a large number of HDFS accesses and may put too much pressure on HDFS.
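
To make the cost concrete, here is a hedged illustration of the per-record pattern described above. The class and field names (PerRecordSizeCheck, inProgressFile) are hypothetical and this is not the actual Flink code; it only sketches how a getSize() backed by exists/getFileStatus turns every record into NameNode round trips:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch (not Flink source): a size probe that consults HDFS on every call.
// If the writer calls getSize() once per record, each record costs one or two NameNode RPCs.
class PerRecordSizeCheck {
    private final FileSystem fs;
    private final Path inProgressFile;

    PerRecordSizeCheck(FileSystem fs, Path inProgressFile) {
        this.fs = fs;
        this.inProgressFile = inProgressFile;
    }

    long getSize() throws IOException {
        // exists() and getFileStatus() are both NameNode round trips;
        // doing this per record is what puts pressure on HDFS.
        return fs.exists(inProgressFile) ? fs.getFileStatus(inProgressFile).getLen() : 0L;
    }
}
```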

Brief change log

Creating a new HiveRollingPolicy:

  • Getting the size of the file is too expensive (see HiveBulkWriterFactory#create). We can't check it for every element, which would put great pressure on the DFS. Therefore, this implementation checks the file size only in shouldRollOnProcessingTime, which effectively avoids DFS pressure (see the sketch after this list).
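
The sketch below shows the general shape of such a rolling policy, assuming Flink's RollingPolicy and PartFileInfo interfaces. The class name SizeOnTimerRollingPolicy and the threshold fields are illustrative; this is not the actual HiveRollingPolicy implementation, only a minimal sketch of the idea of deferring size checks to the processing-time timer:

```java
import java.io.IOException;

import org.apache.flink.streaming.api.functions.sink.filesystem.PartFileInfo;
import org.apache.flink.streaming.api.functions.sink.filesystem.RollingPolicy;

// Sketch: never query the file size per record; only check it when the
// processing-time timer fires, which happens far less often.
public class SizeOnTimerRollingPolicy<IN, BucketID> implements RollingPolicy<IN, BucketID> {

    private final long rollingFileSize;      // size threshold in bytes (illustrative)
    private final long rollingTimeInterval;  // max file age in ms (illustrative)

    public SizeOnTimerRollingPolicy(long rollingFileSize, long rollingTimeInterval) {
        this.rollingFileSize = rollingFileSize;
        this.rollingTimeInterval = rollingTimeInterval;
    }

    @Override
    public boolean shouldRollOnCheckpoint(PartFileInfo<BucketID> partFileState) {
        // Roll on checkpoint so in-progress files can be committed.
        return true;
    }

    @Override
    public boolean shouldRollOnEvent(PartFileInfo<BucketID> partFileState, IN element) {
        // No per-record check: calling partFileState.getSize() here would hit the DFS
        // for every element.
        return false;
    }

    @Override
    public boolean shouldRollOnProcessingTime(PartFileInfo<BucketID> partFileState, long currentTime)
            throws IOException {
        // The (expensive) size lookup happens only here, on the processing-time check.
        return currentTime - partFileState.getCreationTime() >= rollingTimeInterval
                || partFileState.getSize() >= rollingFileSize;
    }
}
```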

Verifying this change

Tested manually.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no

@flinkbot
Collaborator

flinkbot commented Sep 2, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit a5a665a (Fri Feb 19 07:28:14 UTC 2021)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Sep 2, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

Contributor

@lirui-apache lirui-apache left a comment

LGTM

@JingsongLi JingsongLi merged commit 41c3a19 into apache:master Sep 4, 2020
JingsongLi added a commit that referenced this pull request Sep 18, 2020
@cyofeiyue

Nice job! Because of this change, Flink SQL writes into Hive tables perform much better.

@JingsongLi JingsongLi deleted the accessHDFS branch November 5, 2020 09:39