[FLINK-19121][hive] Avoid accessing HDFS frequently in HiveBulkWriterFactory #13301
What is the purpose of the change
In HadoopPathBasedBulkWriter, getSize invokes FileSystem.exists and FileSystem.getFileStatus, and it is called once per record. This produces a large number of requests to HDFS and may put too much pressure on it.
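To illustrate the problem, here is a minimal sketch (not the actual Flink code) of a writer whose getSize() queries the file system on every call; FakeFileSystem and its call counter are hypothetical stand-ins for an HDFS client:

```java
import java.util.concurrent.atomic.AtomicLong;

public class PerRecordGetSizeSketch {
    // Hypothetical stand-in for an HDFS client; counts metadata calls.
    static class FakeFileSystem {
        final AtomicLong statusCalls = new AtomicLong();
        long getFileStatusLen() {
            statusCalls.incrementAndGet();
            return 123L; // pretend file length
        }
    }

    static class PathBasedWriter {
        final FakeFileSystem fs;
        PathBasedWriter(FakeFileSystem fs) { this.fs = fs; }
        void addElement(String record) { /* buffer/write the record */ }
        // Problematic pattern: every size query hits the file system.
        long getSize() { return fs.getFileStatusLen(); }
    }

    public static void main(String[] args) {
        FakeFileSystem fs = new FakeFileSystem();
        PathBasedWriter writer = new PathBasedWriter(fs);
        for (int i = 0; i < 1000; i++) {
            writer.addElement("record-" + i);
            writer.getSize(); // the sink checks the size per record
        }
        // One metadata round trip to "HDFS" per record.
        System.out.println(fs.statusCalls.get()); // prints 1000
    }
}
```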
Brief change log
Creates a new HiveRollingPolicy in HiveBulkWriterFactory#create. We cannot check the file size for every element, since that would put great pressure on DFS. In this implementation the size is checked only in shouldRollOnProcessingTime, which effectively avoids DFS pressure.
Verifying this change
Manually tested.
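The idea described in the change log can be sketched as follows; the interface and class names here are simplified stand-ins, not Flink's actual API, and the checkpoint behavior is an assumption of this sketch:

```java
public class HiveRollingPolicySketch {
    // Abstracts "how big is the in-progress file"; in practice this would
    // be backed by a file-system metadata call such as getFileStatus.
    interface SizeProvider { long currentSize(); }

    static class HiveLikeRollingPolicy {
        final long rollingFileSize;
        HiveLikeRollingPolicy(long rollingFileSize) { this.rollingFileSize = rollingFileSize; }

        // Called per record: never touch the file system here.
        boolean shouldRollOnEvent(SizeProvider part) { return false; }

        // Called on checkpoints: roll unconditionally (assumed here).
        boolean shouldRollOnCheckpoint(SizeProvider part) { return true; }

        // Called periodically by a timer: the only place the size is
        // checked, so HDFS sees a bounded number of metadata calls.
        boolean shouldRollOnProcessingTime(SizeProvider part) {
            return part.currentSize() >= rollingFileSize;
        }
    }

    public static void main(String[] args) {
        HiveLikeRollingPolicy policy = new HiveLikeRollingPolicy(128L * 1024 * 1024);
        SizeProvider small = () -> 1024L;
        SizeProvider big = () -> 256L * 1024 * 1024;
        System.out.println(policy.shouldRollOnEvent(big));            // false: no per-record check
        System.out.println(policy.shouldRollOnProcessingTime(small)); // false
        System.out.println(policy.shouldRollOnProcessingTime(big));   // true
    }
}
```

Because the per-record path always returns false, the cost of a size check is amortized over the processing-time interval instead of being paid on every element.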
Does this pull request potentially affect one of the following parts:
@Public(Evolving): no
Documentation