
Error message namespace_fs._upload_stream: finish occured on stream ChunkFS #8588

Closed
rkomandu opened this issue Dec 10, 2024 · 7 comments

rkomandu (Collaborator)
Environment info

  • NooBaa Version: noobaa-core-5.18.0-20241207.el8.x86_64 (d/s rpm)
  • Platform: not specified

Actual behavior

  1. While running a put object operation, the following ERROR messages appear in noobaa.log:
Dec 10 06:22:22 rk522ga-22 node[3270239]: [Upgrade/3270239]   [LOG] CONSOLE:: init_rand_seed: seeding with 32 bytes
Dec 10 06:22:25 rk522ga-22 node[3270651]: [/3270651]   [LOG] CONSOLE:: detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 0
Dec 10 06:22:25 rk522ga-22 node[3270651]: [Upgrade/3270651]   [LOG] CONSOLE:: read_rand_seed: reading 32 bytes from /dev/urandom ...
Dec 10 06:22:25 rk522ga-22 node[3270651]: [Upgrade/3270651]   [LOG] CONSOLE:: read_rand_seed: got 32 bytes from /dev/urandom, total 32 ...
Dec 10 06:22:25 rk522ga-22 node[3270651]: [Upgrade/3270651]   [LOG] CONSOLE:: read_rand_seed: closing fd ...
Dec 10 06:22:25 rk522ga-22 node[3270651]: [Upgrade/3270651]   [LOG] CONSOLE:: init_rand_seed: seeding with 32 bytes
Dec 10 06:22:25 rk522ga-22 [3201271]: [Upgrade/3201271]    [L0] core.sdk.endpoint_stats_collector:: bucket stats - bucket-61417 text/plain : { read_count: 1 }
Dec 10 06:22:25 rk522ga-22 [3201271]: [Upgrade/3201271]    [L0] core.sdk.endpoint_stats_collector:: namespace stats - undefined : { read_count: 1, read_bytes: 105062400 }
Dec 10 06:22:25 rk522ga-22 node[3270651]: [Upgrade/3270651]    [L0] core.manage_nsfs.nc_master_key_manager:: init_from_exec: get master keys response status=OK, version=1
Dec 10 06:22:27 rk522ga-22 [3201271]: [Upgrade/3201271]    [L0] core.sdk.bucketspace_fs:: BucketSpaceFS.read_bucket_sdk_info: bucket name bucket-86128
Dec 10 06:22:27 rk522ga-22 [3201271]: [Upgrade/3201271]    [L0] core.sdk.namespace_fs:: NamespaceFS: buffers_pool  [ BufferPool.get_buffer: sem value: 2097152 waiting_value: 0 buffers length: 0, BufferPool.get_buffer: sem value: 33554432 waiting_value: 0 buffers length: 0, BufferPool.get_buffer: sem value: 70254592 waiting_value: 0 buffers length: 0, BufferPool.get_buffer: sem value: 629145600 waiting_value: 0 buffers length: 11 ]
Dec 10 06:22:27 rk522ga-22 [3201271]: [Upgrade/3201271]    [L0] core.endpoint.s3.ops.s3_put_object:: PUT OBJECT bucket-86128 test_file.txt
Dec 10 06:22:28 rk522ga-22 [3201271]: [Upgrade/3201271] [ERROR] core.sdk.namespace_fs:: namespace_fs._upload_stream: finish occured on stream ChunkFS:  undefined
Dec 10 06:22:28 rk522ga-22 [3201271]: [Upgrade/3201271] [ERROR] core.sdk.namespace_fs:: namespace_fs._upload_stream: close occured on stream ChunkFS:  undefined
Dec 10 06:22:28 rk522ga-22 [3201271]: 2024-12-10 06:22:28.412506 [PID-3201271/TID-3201686] [L0] FS::FSWorker::Execute: WARNING FileFsync _wrap->_path=/mnt/gpfs0/account_86128/bucket-86128  took too long: 249.048 ms
Dec 10 06:22:32 rk522ga-22 node[3271454]: [/3271454]   [LOG] CONSOLE:: detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 0
Dec 10 06:22:32 rk522ga-22 node[3271454]: [Upgrade/3271454]   [LOG] CONSOLE:: read_rand_seed: reading 32 bytes from /dev/urandom ...
Dec 10 06:22:32 rk522ga-22 node[3271454]: [Upgrade/3271454]   [LOG] CONSOLE:: read_rand_seed: got 32 bytes from /dev/urandom, total 32 ...
Dec 10 06:22:32 rk522ga-22 node[3271454]: [Upgrade/3271454]   [LOG] CONSOLE:: read_rand_seed: closing fd ...
Dec 10 06:22:32 rk522ga-22 node[3271454]: [Upgrade/3271454]   [LOG] CONSOLE:: init_rand_seed: seeding with 32 bytes

Expected behavior

Should these ERROR messages be cleaned up?

Steps to reproduce

  1. Install the rpm.
  2. Run an s3 put object operation (an example sketch follows below).
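
For illustration only, a minimal sketch of such a put using the AWS SDK for JavaScript v3 from TypeScript. The endpoint URL and credentials are placeholders, not values taken from this issue; the bucket and object names mirror the ones seen in the noobaa.log above.

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Placeholder endpoint and credentials for a NooBaa NSFS S3 endpoint; adjust for your env.
const s3 = new S3Client({
    endpoint: "https://<noobaa-endpoint>:6443",
    region: "us-east-1",
    forcePathStyle: true,
    credentials: { accessKeyId: "<access-key>", secretAccessKey: "<secret-key>" },
});

// Put a small object; bucket/key names match those in the log above.
async function put_test_object() {
    await s3.send(new PutObjectCommand({
        Bucket: "bucket-86128",
        Key: "test_file.txt",
        Body: "hello world",
    }));
}

put_test_object().catch(console.error);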


@rkomandu rkomandu added the NS-FS label Dec 10, 2024
rkomandu (Collaborator, Author) commented Dec 10, 2024

@romayalon, if you look at the messages above, they are tagged as "Upgrade" instead of "nsfs" (i.e. [Upgrade/3271454]). Is that appropriate?

romayalon (Contributor) commented Dec 10, 2024

Hi @rkomandu,
They were indeed incorrect; I fixed both in the following merged PRs:

  1. Converted the error to a log1 message on stream close - NSFS | Replace ChunkFS with FileWriter #8577
  2. Upgrade process name fix - NC | Online Upgrade | Health CLI update config directory and upgrade checks #8532

Please try the newest upstream RPM and let me know if we can close this bug.
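
For context, a minimal sketch (assuming a hypothetical dbg logger and helper name, not the actual change from PR #8577) of what "converting the error to a lower-level log on stream finish/close" could look like for a Node.js writable stream, in TypeScript:

import { Writable } from "stream";

// Hypothetical logger standing in for NooBaa's debug utility; log1 is a low-verbosity level.
const dbg = {
    log1: (...args: unknown[]) => console.log("[L1]", ...args),
    error: (...args: unknown[]) => console.error("[ERROR]", ...args),
};

// 'finish' and 'close' are part of the normal writable-stream lifecycle,
// so they are logged at a low level; only 'error' events are reported as errors.
function watch_upload_stream(stream: Writable, name: string) {
    stream.on("finish", () => dbg.log1(`_upload_stream: finish occurred on stream ${name}`));
    stream.on("close", () => dbg.log1(`_upload_stream: close occurred on stream ${name}`));
    stream.on("error", err => dbg.error(`_upload_stream: error occurred on stream ${name}`, err));
}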

rkomandu (Collaborator, Author) commented
We will pick up the next 4.18 d/s rpm and check it out.

romayalon (Contributor) commented
@rkomandu any news? can we close it?

romayalon (Contributor) commented
closing, no response

rkomandu (Collaborator, Author) commented
@romayalon
Verified on the 4.18 20250111 d/s build:

Jan 16 09:47:52 rk523-12 node[310359]: [/310359]   [LOG] CONSOLE:: init_rand_seed: seeding with 32 bytes
Jan 16 09:47:58 rk523-12 [68695]: [nsfs/68695]    [L0] core.sdk.bucketspace_fs:: BucketSpaceFS.read_bucket_sdk_info: bucket name newbucket-15k1
Jan 16 09:47:58 rk523-12 [68695]: [nsfs/68695]    [L0] core.sdk.namespace_fs:: NamespaceFS: buffers_pool  [ BufferPool.get_buffer: sem value: 2097152 waiting_value: 0 buffers length: 0, BufferPool.get_buffer: sem value: 33554432 waiting_value: 0 buffers length: 0, BufferPool.get_buffer: sem value: 70254592 waiting_value: 0 buffers length: 0, BufferPool.get_buffer: sem value: 629145600 waiting_value: 0 buffers length: 0 ]
Jan 16 09:47:58 rk523-12 [68695]: [nsfs/68695]    [L0] core.endpoint.s3.ops.s3_put_object:: PUT OBJECT newbucket-15k1 file-1
Jan 16 09:47:58 rk523-12 [68695]: 2025-01-16 09:47:58.841663 [PID-68695/TID-68989] [L0] FS::FSWorker::Execute: WARNING FileFsync _wrap->_path=/mnt/gpfs0/s3user15001-dir/newbucket-15k1  took too long: 128.799 ms

Note: we wanted to try this with the correct Scale packages, hence it took time to get automatic deployment enabled in our env.

rkomandu (Collaborator, Author) commented
An upgrade was also performed on the cluster from the 20250101 to the 20250111 d/s build, and the [Upgrade] message is no longer observed in the noobaa logs.
