
NC | NSFS | Restrict path and new_bucket_path Values #8177

Open
shirady opened this issue Jul 2, 2024 · 4 comments
shirady commented Jul 2, 2024

Environment info

  • NooBaa Version: master (current 5.17)
  • Platform: NSFS NC (running in local machine with MacOS)

Actual behavior

(This issue was originally titled "Check Bucket Boundaries Fails Upload an Object".)

  1. When executing an S3 operation on a bucket (for example, uploading an object), the operation fails with an error.
    Note: the origin of the error, as described below, is the bucket boundaries check; it is therefore relevant to more operations as well (listing objects in a bucket, etc.).

Expected behavior

  1. The upload (and the other affected operations) should succeed without this error.

Steps to reproduce

  1. Create an account using the CLI: sudo node src/cmd/manage_nsfs account add --name <account-name> --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>.
    Note: Before creating the account, grant permissions on the new_buckets_path directory: chmod 777 /tmp/nsfs_root1.
  2. Start the NSFS server with: sudo node src/cmd/nsfs --debug 5
  3. Use the account's details for the alias: alias s3-nc-user-1='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'.
  4. Create the bucket: s3-nc-user-1 s3 mb s3://shira-1001-bucket-1.
  5. Create and upload an object: touch hello_world.txt and then s3-nc-user-1 s3 cp hello_world.txt s3://shira-1001-bucket-1, and see the error:

upload failed: ./hello_world.txt to s3://shira-1001-bucket-1/hello_world.txt An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Note: after setting config.NSFS_CHECK_BUCKET_BOUNDARIES = false; in the config and restarting the server (Ctrl+C and rerun sudo node src/cmd/nsfs --debug 5) we do not have an error:
s3-nc-user-1 s3 ls s3://shira-1001-bucket-1/
2024-07-02 13:32:07 0 hello_world.txt

More information - Screenshots / Logs / Other output

Logs from the server:

Jul-2 13:13:50.988 [nsfs/17208]    [L1] core.sdk.namespace_fs:: check_bucket_boundaries: fs_context { uid: 1001, gid: 1001, new_buckets_path: '/tmp/nsfs_root1', warn_threshold_ms: 100, backend: '', report_fs_stats: [Function (anonymous)] } file_path /tmp/nsfs_root1/shira-1001-bucket-1 this.bucket_path /tmp/nsfs_root1/shira-1001-bucket-1
2024-07-02 13:13:50.988168 [PID-17208/TID-259] [L1] FS::FSWorker::Begin: RealPath _path=/tmp/nsfs_root1/shira-1001-bucket-1
2024-07-02 13:13:50.988192 [PID-17208/TID-8707] [L1] FS::FSWorker::Execute: RealPath _path=/tmp/nsfs_root1/shira-1001-bucket-1 _uid=1001 _gid=1001 _backend=
2024-07-02 13:13:50.988212 [PID-17208/TID-8707] [L1] FS::FSWorker::Execute: RealPath _path=/tmp/nsfs_root1/shira-1001-bucket-1 _uid=1001 _gid=1001 geteuid()=1001 getegid()=1001 getuid()=1001 getgid()=1001
2024-07-02 13:13:50.988263 [PID-17208/TID-8707] [L1] FS::FSWorker::Execute: RealPath _path=/tmp/nsfs_root1/shira-1001-bucket-1  took: 0.018041 ms
2024-07-02 13:13:50.988284 [PID-17208/TID-259] [L1] FS::RealPath::OnOK: _path=/tmp/nsfs_root1/shira-1001-bucket-1 _full_path=/private/tmp/nsfs_root1/shira-1001-bucket-1
Jul-2 13:13:50.988 [nsfs/17208]    [L0] core.sdk.namespace_fs:: check_bucket_boundaries: the path /tmp/nsfs_root1/shira-1001-bucket-1 is not in the bucket /tmp/nsfs_root1/shira-1001-bucket-1 boundaries
Jul-2 13:13:50.990 [nsfs/17208]  [WARN] core.sdk.namespace_fs:: NamespaceFS: upload_object buffer pool cleanup error Error: Entry /tmp/nsfs_root1/shira-1001-bucket-1/hello_world.txt is not in bucket boundaries
    at Object.new_error_code (/Users/shiradymnik/SourceCode/noobaa-core/src/util/error_utils.js:16:26)
    at NamespaceFS._check_path_in_bucket_boundaries (/Users/shiradymnik/SourceCode/noobaa-core/src/sdk/namespace_fs.js:2541:31)
    at async NamespaceFS.upload_object (/Users/shiradymnik/SourceCode/noobaa-core/src/sdk/namespace_fs.js:1126:13)
    at async NsfsObjectSDK._call_op_and_update_stats (/Users/shiradymnik/SourceCode/noobaa-core/src/sdk/object_sdk.js:543:27)
    at async Object.put_object [as handler] (/Users/shiradymnik/SourceCode/noobaa-core/src/endpoint/s3/ops/s3_put_object.js:39:19)
    at async handle_request (/Users/shiradymnik/SourceCode/noobaa-core/src/endpoint/s3/s3_rest.js:150:19)
    at async Object.s3_rest [as handler] (/Users/shiradymnik/SourceCode/noobaa-core/src/endpoint/s3/s3_rest.js:65:9) {
  code: 'EACCES'
}
Jul-2 13:13:50.991 [nsfs/17208] [ERROR] core.endpoint.s3.s3_rest:: S3 ERROR <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><Resource>/shira-1001-bucket-1/hello_world.txt</Resource><RequestId>ly494dmt-6yxcl2-f9x</RequestId></Error> PUT /shira-1001-bucket-1/hello_world.txt {"host":"localhost:6443","accept-encoding":"identity","content-type":"text/plain","user-agent":"aws-cli/2.15.36 Python/3.11.9 Darwin/23.4.0 source/arm64 prompt/off command/s3.cp","content-md5":"1B2M2Y8AsgTpgAmY7PhCfg==","expect":"100-continue","x-amz-date":"20240702T101350Z","x-amz-content-sha256":"UNSIGNED-PAYLOAD","authorization":"AWS4-HMAC-SHA256 Credential=Dwertyuiopasdfg11001/20240702/us-east-1/s3/aws4_request, SignedHeaders=content-md5;content-type;host;x-amz-content-sha256;x-amz-date, Signature=60bd00111e1602ed42773d4b464c5e580886ce1e3f92af472368a88b85d7886b","content-length":"0"} Error: Entry /tmp/nsfs_root1/shira-1001-bucket-1/hello_world.txt is not in bucket boundaries
    at Object.new_error_code (/Users/shiradymnik/SourceCode/noobaa-core/src/util/error_utils.js:16:26)
    at NamespaceFS._check_path_in_bucket_boundaries (/Users/shiradymnik/SourceCode/noobaa-core/src/sdk/namespace_fs.js:2541:31)
    at async NamespaceFS.upload_object (/Users/shiradymnik/SourceCode/noobaa-core/src/sdk/namespace_fs.js:1126:13)
    at async NsfsObjectSDK._call_op_and_update_stats (/Users/shiradymnik/SourceCode/noobaa-core/src/sdk/object_sdk.js:543:27)
    at async Object.put_object [as handler] (/Users/shiradymnik/SourceCode/noobaa-core/src/endpoint/s3/ops/s3_put_object.js:39:19)
    at async handle_request (/Users/shiradymnik/SourceCode/noobaa-core/src/endpoint/s3/s3_rest.js:150:19)
    at async Object.s3_rest [as handler] (/Users/shiradymnik/SourceCode/noobaa-core/src/endpoint/s3/s3_rest.js:65:9)
@romayalon

@shirady This only happens because you use /tmp/ and not /private/tmp/ as the bucket path on your Mac.
On macOS, /tmp/ is a symlink to /private/tmp/, and when checking the boundaries we check whether the real path of the object, /private/tmp/shira-1001-bucket-1/hello_world.txt, is within the boundaries of the bucket path, which is /tmp/shira-1001-bucket-1. If we do want to support a bucket path that is a symlink, it's a very small fix. @guymguym, do you see a reason not to allow the bucket path to be a symlink?


guymguym commented Jul 2, 2024

@romayalon @shirady Same reason we protect against symlinks inside a bucket - to avoid exposing sensitive data from the system. For example, what if someone managed to set the bucket path with ln -s /etc /fs/bucketpath and then tried to download passwd, or worse - upload it... This config option is meant to prevent that by default. If you want to allow it for your dev env, override the config.


shirady commented Jul 2, 2024

@guymguym, I don't understand what the exact difference is between the path being a symlink and being an absolute path, in terms of what we are protecting against.

Currently, as I understand it, what blocks a new_buckets_path is the is_dir_rw_accessible check in the CLI, and such a directory will not be accessible to a user whose uid and gid are not 0 (root). This means that your concern about exposing sensitive data can still happen if someone passes to the CLI, for example: sudo node src/cmd/manage_nsfs account add --name shira-path --new_buckets_path /etc/noobaa.conf.d --uid 0 --gid 0 (I took our config path as an example of a directory in /etc).

output (values of access_key, secret_key and master_key_id omitted):

{
  "response": {
    "code": "AccountCreated",
    "reply": {
      "_id": "6683f6f16a973327fcf08dfc",
      "name": "shira-path",
      "email": "shira-path",
      "creation_date": "2024-07-02T12:47:45.608Z",
      "access_keys": [
        {
          "access_key": "",
          "secret_key": ""
        }
      ],
      "nsfs_account_config": {
        "uid": 0,
        "gid": 0,
        "new_buckets_path": "/etc/noobaa.conf.d"
      },
      "allow_bucket_creation": true,
      "master_key_id": ""
    }
  }
}

@shirady shirady changed the title Check Bucket Boundaries Fails Upload an Object NC | NSFS | Restrict path and new_bucket_path Values Jul 8, 2024

shirady commented Jul 8, 2024

I changed the title of the issue to match what I wrote in my comment above, so that we can define the restriction and solve it.
