s3cmd 2.0.2 signature_v2=False NOT WORK #1017


Closed
dovefi opened this issue Nov 23, 2018 · 10 comments


dovefi commented Nov 23, 2018

Hi, I use s3cmd 2.0.2 to operate on a Ceph RGW bucket. When I list a bucket, I get the error S3 error: 403 (SignatureDoesNotMatch):

DEBUG: s3cmd version 2.0.2
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: access_key->LM...17_chars...R
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: ca_certs_file->
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: check_ssl_certificate->True
DEBUG: ConfigParser: check_ssl_hostname->True
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: content_disposition->
DEBUG: ConfigParser: content_type->
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->127.0.0.1:7480
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.amazonaws.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: kms_key->
DEBUG: ConfigParser: limit->-1
DEBUG: ConfigParser: limitrate->0
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: long_listing->False
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: multipart_max_chunks->10000
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->65536
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: requester_pays->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: restore_priority->Standard
DEBUG: ConfigParser: secret_key->Am...37_chars...k
DEBUG: ConfigParser: send_chunk->65536
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: signature_v2->False
DEBUG: ConfigParser: signurl_use_https->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: stats->False
DEBUG: ConfigParser: stop_on_error->False
DEBUG: ConfigParser: storage_class->
DEBUG: ConfigParser: throttle_max->100
DEBUG: ConfigParser: upload_id->
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_http_expect->False
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://tenant_test_1:test-20181008' using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://tenant_test_1:test-20181008':
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'GET\n\n\n\nx-amz-date:Fri, 23 Nov 2018 10:01:05 +0000\n/tenant_test_1%3Atest-20181008/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(tenant_test_1:test-20181008): 127.0.0.1:7480
DEBUG: ConnMan.get(): creating new connection: http://127.0.0.1:7480
DEBUG: non-proxied HTTPConnection(127.0.0.1, 7480)
DEBUG: format_uri(): /tenant_test_1:test-20181008/?delimiter=%2F
DEBUG: Sending request method_string='GET', uri=u'/tenant_test_1:test-20181008/?delimiter=%2F', headers={'Authorization': u'AWS LMYKYEL95584F6ARSZ2R:aFJdIlMU2TT0uQF7Kv8pEARJTLM=', 'x-amz-date': 'Fri, 23 Nov 2018 10:01:05 +0000'}, body=(0 bytes)
DEBUG: ConnMan.put(): connection put back to pool (http://127.0.0.1:7480#1)
DEBUG: Response:
{'data': '<?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx000000000000001fd3dbe-005bf7cfe1-119d18-tupu-zone1</RequestId><HostId>119d18-tupu-zone1-tupu-zonegroup</HostId></Error>',
 'headers': {'accept-ranges': 'bytes',
             'content-length': '211',
             'content-type': 'application/xml',
             'date': 'Fri, 23 Nov 2018 10:01:05 GMT',
             'x-amz-request-id': 'tx000000000000001fd3dbe-005bf7cfe1-119d18-tupu-zone1'},
 'reason': 'Forbidden',
 'status': 403}
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: date: Fri, 23 Nov 2018 10:01:05 GMT
DEBUG: HttpHeader: content-length: 211
DEBUG: HttpHeader: x-amz-request-id: tx000000000000001fd3dbe-005bf7cfe1-119d18-tupu-zone1
DEBUG: HttpHeader: content-type: application/xml
DEBUG: HttpHeader: accept-ranges: bytes
DEBUG: ErrorXML: Code: 'SignatureDoesNotMatch'
DEBUG: ErrorXML: RequestId: 'tx000000000000001fd3dbe-005bf7cfe1-119d18-tupu-zone1'
DEBUG: ErrorXML: HostId: '119d18-tupu-zone1-tupu-zonegroup'
ERROR: S3 error: 403 (SignatureDoesNotMatch)


I note that the config shows "DEBUG: ConfigParser: signature_v2->False", but the request still logs "DEBUG: Using signature v2". Why? Is this a bug?


dovefi commented Nov 23, 2018

I want to use signature v4. How do I configure it? Thanks!


fviard commented Nov 23, 2018

Hi,
First, your configuration does not look right:
DEBUG: ConfigParser: host_base->127.0.0.1:7480
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.amazonaws.com
host_bucket should point at your own server, not at amazonaws.com.

Regarding the signature: signature v4 needs the "region", so when you don't set a specific location (US is just the generic default), we still have to make a signature v2 request first to discover your bucket's region before signature v4 can be used.

You should try to set "bucket_location" to the right location for your server.
It might be something like us-east-1.
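For reference, the relevant .s3cfg line for that last suggestion might look like the following (us-east-1 is only a plausible default for an RGW deployment, not a verified value):

```ini
# ~/.s3cfg (excerpt): set an explicit region so signature v4 can be used
# directly, instead of the generic "US" placeholder
bucket_location = us-east-1
```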

@dovefi
Copy link
Author

dovefi commented Nov 23, 2018

Thanks for your reply, but there is still a problem: when I list different buckets, one can be listed while the other cannot.

Example 1: s3cmd ls s3://ceph_c27_bucket1/world/ -d

DEBUG: s3cmd version 2.0.2
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: access_key->5C...17_chars...5
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->us-east-1
DEBUG: ConfigParser: ca_certs_file->
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: check_ssl_certificate->True
DEBUG: ConfigParser: check_ssl_hostname->True
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: content_disposition->
DEBUG: ConfigParser: content_type->
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->12...3_chars...6
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->172.26.2.52:7480
DEBUG: ConfigParser: host_bucket->%(bucket).172.26.2.52:7480
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: kms_key->
DEBUG: ConfigParser: limit->-1
DEBUG: ConfigParser: limitrate->0
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: long_listing->False
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: multipart_max_chunks->10000
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->65536
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: requester_pays->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: restore_priority->Standard
DEBUG: ConfigParser: secret_key->Ic...37_chars...d
DEBUG: ConfigParser: send_chunk->65536
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: signature_v2->False
DEBUG: ConfigParser: signurl_use_https->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: stats->False
DEBUG: ConfigParser: stop_on_error->False
DEBUG: ConfigParser: storage_class->
DEBUG: ConfigParser: throttle_max->100
DEBUG: ConfigParser: upload_id->
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_http_expect->False
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://ceph_c27_bucket1/world/' using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://ceph_c27_bucket1':
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'GET\n\n\n\nx-amz-date:Fri, 23 Nov 2018 12:24:35 +0000\n/ceph_c27_bucket1/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(ceph_c27_bucket1): 172.26.2.52:7480
DEBUG: ConnMan.get(): creating new connection: http://172.26.2.52:7480
DEBUG: non-proxied HTTPConnection(172.26.2.52, 7480)
DEBUG: format_uri(): /ceph_c27_bucket1/?delimiter=%2F&prefix=world%2F
DEBUG: Sending request method_string='GET', uri=u'/ceph_c27_bucket1/?delimiter=%2F&prefix=world%2F', headers={'Authorization': u'AWS 5CUI8393AQR63HAW23Z5:IW4tHPxB57LTJValNraQ/x9YIxU=', 'x-amz-date': 'Fri, 23 Nov 2018 12:24:35 +0000'}, body=(0 bytes)
DEBUG: ConnMan.put(): connection put back to pool (http://172.26.2.52:7480#1)
DEBUG: Response:
{'data': '<?xml version="1.0" encoding="UTF-8"?><ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>ceph_c27_bucket1</Name><Prefix>world/</Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated><CommonPrefixes><Prefix>world/data-c27/</Prefix></CommonPrefixes></ListBucketResult>',
 'headers': {'content-length': '336',
             'content-type': 'application/xml',
             'date': 'Fri, 23 Nov 2018 12:24:36 GMT',
             'x-amz-request-id': 'tx0000000000000020f762a-005bf7f183-119d18-tupu-zone1'},
 'reason': 'OK',
 'status': 200}
                       DIR   s3://ceph_c27_bucket1/world/data-c27/

I have checked the Ceph RGW log, and the signatures are the same:

2018-11-23 20:27:57.591797 7fc7df2c2700 15 server signature=Ub5aRue/MjQ/ntpF5bPxiet5Bo8=
2018-11-23 20:27:57.591798 7fc7df2c2700 15 client signature=Ub5aRue/MjQ/ntpF5bPxiet5Bo8=
2018-11-23 20:27:57.591800 7fc7df2c2700 15 compare=0
2018-11-23 20:27:57.591805 7fc7df2c2700 20 rgw::auth::s3::LocalEngine granted access
2018-11-23 20:27:57.591807 7fc7df2c2700 20 rgw::auth::s3::AWSAuthStrategy granted access
2018-11-23 20:27:57.591812 7fc7df2c2700  2 req 34577922:0.000204:s3:GET /ceph_c27_bucket1/:list_bucket:normalizing buckets and tenants
2018-11-23 20:27:57.591816 7fc7df2c2700 10 s->object=<NULL> s->bucket=ceph_c27_bucket1

As you can see, the server signature and the client signature are the same.

Example 2: s3cmd ls s3://tenant_test_1:test-20181008 -d

In tenant_test_1:test-20181008, tenant_test_1 is a tenant in RGW and test-20181008 is the bucket name.

DEBUG: s3cmd version 2.0.2
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: access_key->5C...17_chars...5
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->us-east-1
DEBUG: ConfigParser: ca_certs_file->
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: check_ssl_certificate->True
DEBUG: ConfigParser: check_ssl_hostname->True
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: content_disposition->
DEBUG: ConfigParser: content_type->
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->12...3_chars...6
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->172.26.2.52:7480
DEBUG: ConfigParser: host_bucket->%(bucket).172.26.2.52:7480
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: kms_key->
DEBUG: ConfigParser: limit->-1
DEBUG: ConfigParser: limitrate->0
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: long_listing->False
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: multipart_max_chunks->10000
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->65536
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: requester_pays->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: restore_priority->Standard
DEBUG: ConfigParser: secret_key->Ic...37_chars...d
DEBUG: ConfigParser: send_chunk->65536
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: signature_v2->False
DEBUG: ConfigParser: signurl_use_https->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: stats->False
DEBUG: ConfigParser: stop_on_error->False
DEBUG: ConfigParser: storage_class->
DEBUG: ConfigParser: throttle_max->100
DEBUG: ConfigParser: upload_id->
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_http_expect->False
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://tenant_test_1:test-20181008' using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://tenant_test_1:test-20181008':
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'GET\n\n\n\nx-amz-date:Fri, 23 Nov 2018 12:32:25 +0000\n/tenant_test_1%3Atest-20181008/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(tenant_test_1:test-20181008): 172.26.2.52:7480
DEBUG: ConnMan.get(): creating new connection: http://172.26.2.52:7480
DEBUG: non-proxied HTTPConnection(172.26.2.52, 7480)
DEBUG: format_uri(): /tenant_test_1:test-20181008/?delimiter=%2F
DEBUG: Sending request method_string='GET', uri=u'/tenant_test_1:test-20181008/?delimiter=%2F', headers={'Authorization': u'AWS 5CUI8393AQR63HAW23Z5:5FRiV+vjaO2r3A4UyPpCv9JuRXU=', 'x-amz-date': 'Fri, 23 Nov 2018 12:32:25 +0000'}, body=(0 bytes)
DEBUG: ConnMan.put(): connection put back to pool (http://172.26.2.52:7480#1)
DEBUG: Response:
{'data': '<?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx0000000000000020fe34c-005bf7f359-119d18-tupu-zone1</RequestId><HostId>119d18-tupu-zone1-tupu-zonegroup</HostId></Error>',
 'headers': {'accept-ranges': 'bytes',
             'content-length': '211',
             'content-type': 'application/xml',
             'date': 'Fri, 23 Nov 2018 12:32:25 GMT',
             'x-amz-request-id': 'tx0000000000000020fe34c-005bf7f359-119d18-tupu-zone1'},
 'reason': 'Forbidden',
 'status': 403}
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: date: Fri, 23 Nov 2018 12:32:25 GMT
DEBUG: HttpHeader: content-length: 211
DEBUG: HttpHeader: x-amz-request-id: tx0000000000000020fe34c-005bf7f359-119d18-tupu-zone1
DEBUG: HttpHeader: content-type: application/xml
DEBUG: HttpHeader: accept-ranges: bytes
DEBUG: ErrorXML: Code: 'SignatureDoesNotMatch'
DEBUG: ErrorXML: RequestId: 'tx0000000000000020fe34c-005bf7f359-119d18-tupu-zone1'
DEBUG: ErrorXML: HostId: '119d18-tupu-zone1-tupu-zonegroup'
ERROR: S3 error: 403 (SignatureDoesNotMatch)

And the RGW log is below:

/tenant_test_1:test-20181008/
2018-11-23 20:35:32.089318 7fc7bb27a700 15 server signature=wqkzBfrNYr5wGP2V2mIHN397KFQ=
2018-11-23 20:35:32.089320 7fc7bb27a700 15 client signature=WwHb5faoNSxcmS61NrFLMXbUII0=
2018-11-23 20:35:32.089322 7fc7bb27a700 15 compare=-32
2018-11-23 20:35:32.089325 7fc7bb27a700 20 rgw::auth::s3::LocalEngine denied with reason=-2027
2018-11-23 20:35:32.089328 7fc7bb27a700 20 rgw::auth::s3::AWSAuthStrategy denied with reason=-2027
2018-11-23 20:35:32.089331 7fc7bb27a700  5 Failed the auth strategy, reason=-2027
2018-11-23 20:35:32.089333 7fc7bb27a700 10 failed to authorize request
2018-11-23 20:35:32.089335 7fc7bb27a700 20 handler->ERRORHANDLER: err_no=-2027 new_err_no=-2027
2018-11-23 20:35:32.089438 7fc7bb27a700  2 req 34609329:0.000537:s3:GET /tenant_test_1:test-20181008/:list_bucket:op status=0
2018-11-23 20:35:32.089450 7fc7bb27a700  2 req 34609329:0.000549:s3:GET /tenant_test_1:test-20181008/:list_bucket:http status=403

The client and server signatures are different. Which one is wrong, the server or the client?
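Note that in the failing case the client signed the percent-encoded resource (`/tenant_test_1%3Atest-20181008/`, per the SignHeaders line) while the request URI carries the raw `:`. A minimal sketch of v2 signing (base64 of HMAC-SHA1 over the StringToSign, with a made-up key, not the real credentials) shows how those two forms necessarily produce different signatures:

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, string_to_sign):
    """AWS signature v2: base64(HMAC-SHA1(secret, StringToSign))."""
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

SECRET = "dummy-secret-key"  # placeholder, not the real key
date = "Fri, 23 Nov 2018 12:32:25 +0000"

# Client signs the percent-encoded resource path...
signed_by_client = sign_v2(
    SECRET, f"GET\n\n\n\nx-amz-date:{date}\n/tenant_test_1%3Atest-20181008/")
# ...while the server canonicalizes the raw ':' form of the same path.
expected_by_server = sign_v2(
    SECRET, f"GET\n\n\n\nx-amz-date:{date}\n/tenant_test_1:test-20181008/")

print(signed_by_client == expected_by_server)  # False
```

Whichever side canonicalizes the resource path differently will compute a different HMAC, which is exactly the SignatureDoesNotMatch seen above.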


dovefi commented Nov 23, 2018

It seems this is not about the signature version.


fviard commented Nov 23, 2018

Hmm, a few things are not right.
First:
DEBUG: ConfigParser: host_bucket->%(bucket).172.26.2.52:7480
This means that %(bucket) will be replaced by the bucket name you use. If you are using an IP address, you can't have the bucket name concatenated onto it.

It also looks weird to me to have ":" in the bucket name, because it is clearly an invalid bucket name: it is not a valid domain-name-like string. I guess you should look for Ceph configuration info for S3 clients using this syntax?

My best guess is that it could work as an "in-path" bucket name.
In that case, you should try to put the same thing in host_base and host_bucket, with no "%(bucket)s", which indicates that the bucket name will not be put in the domain-name part of the request.
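As a sketch of that suggestion (endpoint copied from the logs above; whether RGW accepts this for tenant-qualified buckets is not verified here):

```ini
# ~/.s3cfg (excerpt): identical host_base and host_bucket, with no
# %(bucket)s placeholder, keeps the bucket name in the request path
# ("path-style") instead of in the Host header
host_base = 172.26.2.52:7480
host_bucket = 172.26.2.52:7480
```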


dovefi commented Nov 24, 2018

Thank you very much. Maybe Ceph RGW needs a "tenant" parameter in the HTTP request; I will try to find another way.


jamshid commented Dec 22, 2018

So, to be clear, there is no way to configure ~/.s3cfg to make s3cmd ls use a v4 signature? An s3cmd "list of buckets" request always uses v2?

AWS doesn't _require_ v2 signatures here, right? FWIW, rclone -vv --dump headers lsd myaws: seems to use v4 signatures for this "list of buckets" request.


kkadak commented Mar 10, 2020

@dovefi have you come up with a workaround? We are using Ceph RGW and are able to list the implicit tenant's buckets/objects, but we are unable to list buckets/objects in a different tenant namespace that we should have READ access to (i.e. s3cmd ls s3://other_tenant:bucket/object fails with ERROR: S3 error: 403 (SignatureDoesNotMatch)). We are able to write some Python scripts with boto and boto3, but we would also like to use the s3cmd tools.

@fviard fviard self-assigned this Mar 24, 2020

fviard commented Mar 24, 2020

Hi,
Thank you all for your detailed reports and info/questions.
It appears that you were all right, about different bugs or limitations:

  1. Ceph RGW behaves unconventionally by putting the special character ":" inside the bucket name when using the "tenant" feature, and there was an issue with the bucket name not being URI-encoded in that case.
    (If someone involved with Ceph radosgw comes around here: I think choosing a special character like that was a very bad idea. In the AWS S3 model, a bucket is a domain name and so should respect domain-name character limitations. The situation may get even worse in the coming years, as AWS has decided to deprecate "path-style" requests and only support DNS "virtual hosted-style" requests.)

  2. Another bug: if a bucket name did not respect "virtual hosted-style" characters (e.g. uppercase, underscore), "signature v2" usage was forced. This is just a legacy behavior, inherited from the first implementation of signature v4 when server support was still limited.

  3. A limitation: as a legacy behavior similar to 2), for a request without a bucket defined, like a simple bucket "ls", "signature v2" usage was also forced.

Good news: I have a fix for all three things.

The fix for 3) is the only one that might have a side effect on some S3-compatible servers. The problem is that signature v4 needs the user's region, and so, for things to work "automagically" without the user setting it, any S3-compatible server has to behave like AWS S3: either provide the "region to use" in the reported error, or just ignore the region value.
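The "virtual hosted-style" naming constraint behind points 1) and 2) can be illustrated with a rough check (a simplified sketch, not s3cmd's actual validation code):

```python
import re

# Simplified DNS-compatible bucket-name rule: lowercase letters, digits,
# dots and hyphens only, 3-63 characters, starting and ending alphanumeric.
DNS_BUCKET_RE = re.compile(r"^[a-z0-9](?:[a-z0-9.-]{1,61}[a-z0-9])?$")

def is_dns_compatible(bucket: str) -> bool:
    """Return True if the name could be used in a DNS hostname."""
    return bool(DNS_BUCKET_RE.match(bucket)) and 3 <= len(bucket) <= 63

print(is_dns_compatible("my-bucket"))                    # True
print(is_dns_compatible("ceph_c27_bucket1"))             # False: underscore
print(is_dns_compatible("tenant_test_1:test-20181008"))  # False: ':' and '_'
```

Names that fail such a check were the ones forcing the legacy signature v2 path before the fix.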


fviard commented Mar 26, 2020

Fixed in MASTER.

@fviard fviard closed this as completed Mar 26, 2020