Is there a file size limitation? #315

Closed
Phantom-Studio opened this issue Dec 7, 2015 · 8 comments

Comments

@Phantom-Studio

Hi all,

As the title suggests, I wanted to know whether there is a file size limitation.
On the s3fs-fuse GitHub page I can read "Maximum file size=64GB (limited by s3fs, not Amazon)," but I suspect s3fs fails when trying to upload files larger than 5 GB; every time I get a write error.

Thanks for your help.
Regards.

@gaul
Member

gaul commented Dec 7, 2015

Can you provide the specific error via the debug flags:

s3fs $BUCKET $MOUNTPOINT -d -d -f -o f2 -o curldbg

as well as share the S3 implementation you use, e.g., Amazon, Ceph? Errors at the 5 GB boundary imply some misconfiguration around multi-part uploads.

@Phantom-Studio
Author

Hi Andrew,

Thanks for your help!

Could you please explain the steps to retrieve the log in a bit more detail?

  • So correct me if I'm wrong, but I need to unmount my bucket first, then mount it with:

s3fs $BUCKET $MOUNTPOINT -d -d -f -o f2 -o curldbg (I also need to add -o use_cache and -o allow_other)

Regards.
Jonathan

@gaul
Member

gaul commented Dec 7, 2015

Yes, please invoke s3fs with those options, then reproduce the symptoms with your application. When you encounter the error, please attach the relevant output here or in a gist.
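
For reference, a minimal sketch of what that could look like end to end (the unmount command and the cache path here are assumptions; substitute your own bucket, mount point, and options):

fusermount -u $MOUNTPOINT    # or: sudo umount $MOUNTPOINT
s3fs $BUCKET $MOUNTPOINT -d -d -f -o f2 -o curldbg -o use_cache=/path/to/cache -o allow_other

With -f, s3fs stays in the foreground and writes the debug output to the terminal, so it can be captured with a redirect such as 2>&1 | tee s3fs.log.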

@ggtakec
Member

ggtakec commented Dec 20, 2015

@Phantom-Studio
s3fs has a file size limit that depends on the "multipart_size" option.
S3 limits a multipart upload to 10,000 parts, and s3fs sets the size of each part with this option.
So the file size limit is (10000 * multipart_size).
(Low disk space can impose another restriction, but normally the limit is the size above.)

https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L1353

Please try setting the multipart_size option.
Thanks in advance for your help.
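
As an illustration (not taken from this thread; the bucket and mount point below are placeholders): multipart_size is given in MB, so mounting with a larger part size raises the per-file limit accordingly.

# 10,000 parts x 128 MB per part = roughly 1.28 TB maximum object size
s3fs $BUCKET $MOUNTPOINT -o multipart_size=128

Larger parts raise the per-file limit at the cost of larger individual upload requests.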

@ggtakec
Member

ggtakec commented Mar 30, 2019

This issue has been open for a long time.
I will close it, but if the problem persists, please reopen it or post a new issue.

ggtakec closed this as completed Mar 30, 2019
@alphainets

Hi ggtakec,
I have a problem uploading a 230 GB file to a Ceph S3 server.
I saw it being cached locally, but when the copy was done the file size in S3 became 0, and I got the error below:
fail to close: operation not supported

Below is how I mount my Ceph S3:
s3fs testing /mnt/testing -o passwd_file=/testing/passwd-s3fs -o url=http://192.168.0.100:7480 -o use_path_request_style -o dbglevel=dbg -f -o use_cache="/nova/tmp" -o curldbg

May I know if there are any configuration mistakes?

I have no problem when the file size is small, say 20 GB.

@ezman

ezman commented Nov 26, 2020

@alphainets I am seeing the same issue with 100 GB and 432 GB files. Did you resolve this issue?
I am using 1.87 on an Ubuntu 18.04 server.

@gaul
Member

gaul commented Nov 27, 2020

s3fs 1.87 and earlier require temporary space equal to the object size. Please test with the latest master which includes a large file optimization that reduces temporary space usage. If this symptom persists, please run with -f -d and open a new issue.
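
A rough sketch of trying master, assuming the standard autotools build described in the s3fs-fuse README and that the build dependencies are already installed:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install
# remount with debug output in the foreground, plus your usual options
s3fs $BUCKET $MOUNTPOINT -f -d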
