[Enhancement] Skip tablet schema in rowset meta during ingestion. #50873 (Merged)

Conversation
sevev changed the title from "[WIP][Enhancement] Skip tablet schema in rowset meta during ingestion." to "[Enhancement] Skip tablet schema in rowset meta during ingestion." on Sep 19, 2024
sevev force-pushed the load_schema_remove branch from 5e36a40 to 000a707 on September 24, 2024 02:13
decster previously approved these changes on Sep 24, 2024
@mergify rebase
Signed-off-by: sevev <[email protected]>
✅ Branch has been successfully rebased
sevev force-pushed the load_schema_remove branch from 000a707 to 16f0b0d on September 24, 2024 08:41
Signed-off-by: zhangqiang <[email protected]>
Signed-off-by: sevev <[email protected]>
wyb reviewed on Sep 27, 2024
wyb previously approved these changes on Oct 3, 2024
luohaha previously approved these changes on Oct 3, 2024
sevev force-pushed the load_schema_remove branch from 1620da2 to 8f45131 on October 12, 2024 01:39
luohaha previously approved these changes on Oct 12, 2024
sevev force-pushed the load_schema_remove branch from 8f45131 to 644a450 on October 12, 2024 02:32
Signed-off-by: sevev <[email protected]>
sevev force-pushed the load_schema_remove branch from 644a450 to 4af09ac on October 12, 2024 02:35
[Java-Extensions Incremental Coverage Report] ✅ pass: 0 / 0 (0%)
[FE Incremental Coverage Report] ✅ pass: 0 / 0 (0%)
[BE Incremental Coverage Report] ✅ pass: 166 / 187 (88.77%)
wyb approved these changes on Oct 12, 2024
decster approved these changes on Oct 12, 2024
kevincai approved these changes on Oct 12, 2024
@Mergifyio backport branch-3.3
@Mergifyio backport branch-3.2
✅ Backports have been created
mergify bot pushed a commit that referenced this pull request on Oct 12, 2024
…0873)

## Why I'm doing:
Since version 3.2, Rowset has been designed to save its own schema, and the complete tablet schema is stored in the metadata. This can lead to the following issues:
1. **When the import frequency is very high** and a large number of Rowsets are generated, the metadata in RocksDB grows, particularly in non-primary key (non-PK) table scenarios. This is because, in non-PK tables, each update to the tablet metadata rewrites the historical Rowset metadata, leading to a large amount of obsolete data in RocksDB.
2. **When the tablet has a very large number of columns (e.g., 10,000 columns)**, the time taken to persist the Rowset metadata increases, especially when the imported data volume is small.

These two issues can eventually reduce the efficiency of real-time imports.

## What I'm doing:
This PR attempts to solve the issue of reduced import efficiency caused by metadata bloat.

One feasible solution is to store the tablet schema only once for all Rowsets that share the same schema: instead of saving the complete schema in each Rowset's metadata, a reference or marker would be saved in each Rowset's metadata to point to the corresponding tablet schema. However, the issue with this solution is compatibility. In previous versions, each Rowset generated its corresponding schema based on its own metadata. If the system is upgraded and then rolled back to an older version, the older version would not be able to locate the schema using a reference or marker. This would lead to the generation of incorrect schemas, as previous versions expect the full schema to be included in each Rowset's metadata.

So I chose a more conservative solution; the main changes are as follows:
1. **Skip the schema in Rowset meta during import if the Rowset's schema is identical to the latest tablet schema.**
2. **Update the RowsetMeta that does not store schema when updating the tablet schema.**

If the BE exits at any given time and restarts, the Rowsets that have not saved their own schema are initialized using the tablet's current schema. Since the Rowset meta without a schema is updated each time the tablet schema is modified, every Rowset can find its corresponding schema after the BE restarts. Moreover, this logic is backward compatible with older versions, so even after an upgrade and subsequent downgrade, the BE will still be able to retrieve the correct schema.

Compared to imports, DDL operations can be considered low-frequency tasks. As a result, in most cases the Rowset meta generated during imports will not carry the schema, which helps alleviate metadata bloat. However, there can still be some bad cases. For example, in non-PK tables, during the period between an alter operation and the deletion of outdated Rowset meta, if the number of outdated Rowsets is particularly large, the system will still rewrite all outdated Rowsets each time the tablet meta is saved. This can still lead to a decline in import performance. To solve this, we need to resolve the problem of storing multiple copies of the same schema. I think we can first support downgrading and then resolve this issue, allowing for an iteration based on this PR.

Below is a test based on this PR: a table with 200 columns, one bucket, writing one row of data at a time, with 10 concurrent threads, executed 1,000 times.

| Branch | Table type | Total cost time |
|--------|------------|-----------------|
| main-8f128b | Duplicate | 1043.77 s |
| this pr | Duplicate | 178.49 s |
| main-8f128b | Primary | 188.46 s |
| this pr | Primary | 186.68 s |

Signed-off-by: sevev <[email protected]>
Signed-off-by: zhangqiang <[email protected]>
(cherry picked from commit 3005729)

# Conflicts:
#	be/src/common/config.h
#	be/src/storage/tablet.h
#	be/src/storage/tablet_meta.h
mergify bot pushed a commit that referenced this pull request on Oct 12, 2024
…0873) (commit message identical to the PR description) Signed-off-by: sevev <[email protected]> Signed-off-by: zhangqiang <[email protected]> (cherry picked from commit 3005729) # Conflicts: # be/src/common/config.h # be/src/storage/compaction_task.h # be/src/storage/tablet.cpp # be/src/storage/tablet.h # be/src/storage/tablet_meta.cpp # be/src/storage/tablet_meta.h # be/src/storage/txn_manager.cpp
This was referenced on Oct 12, 2024
ignore backport check: 3.2.12
wanpengfei-git pushed a commit that referenced this pull request on Oct 16, 2024
…ckport #50873) (#51842) Signed-off-by: sevev <[email protected]> Co-authored-by: zhangqiang <[email protected]>
wanpengfei-git pushed a commit that referenced this pull request on Oct 24, 2024
…ckport #50873) (#51843) Signed-off-by: sevev <[email protected]> Co-authored-by: zhangqiang <[email protected]>
ZiheLiu pushed a commit to ZiheLiu/starrocks that referenced this pull request on Oct 31, 2024
…arRocks#50873) (commit message identical to the PR description) Signed-off-by: sevev <[email protected]> Signed-off-by: zhangqiang <[email protected]>
renzhimin7 pushed a commit to renzhimin7/starrocks that referenced this pull request on Nov 7, 2024
…arRocks#50873) (commit message identical to the PR description) Signed-off-by: sevev <[email protected]> Signed-off-by: zhangqiang <[email protected]> Signed-off-by: zhiminr.ren <[email protected]>
Why I'm doing:
Since version 3.2, Rowset has been designed to save its own schema, and the complete tablet schema is stored in the metadata. This can lead to the following issues:
1. **When the import frequency is very high** and a large number of Rowsets are generated, the metadata in RocksDB grows, particularly in non-primary key (non-PK) table scenarios. This is because, in non-PK tables, each update to the tablet metadata rewrites the historical Rowset metadata, leading to a large amount of obsolete data in RocksDB.
2. **When the tablet has a very large number of columns (e.g., 10,000 columns)**, the time taken to persist the Rowset metadata increases, especially when the imported data volume is small.

These two issues can eventually reduce the efficiency of real-time imports.
What I'm doing:
This PR attempts to solve the issue of reduced import efficiency caused by metadata bloat.
One feasible solution is to store the tablet schema only once for all Rowsets that share the same schema. Instead of saving the complete schema in each Rowset's metadata, a reference or marker would be saved in each Rowset’s metadata to point to the corresponding tablet schema.
However, the issue with this solution is compatibility. In previous versions, each Rowset generated its corresponding schema based on its own metadata. If the system is upgraded and then rolled back to an older version, the older version would not be able to locate the schema using a reference or marker. This would lead to the generation of incorrect schemas, as previous versions expect the full schema to be included in each Rowset's metadata.
So I chose a more conservative solution; the main changes are as follows:
1. **Skip the schema in Rowset meta during import if the Rowset's schema is identical to the latest tablet schema.**
2. **Update the RowsetMeta that does not store schema when updating the tablet schema.**
If the BE exits at any given time and restarts, those Rowsets that have not saved their own schema will be initialized using the tablet's current schema. Since the Rowset meta without schemas is updated each time the tablet schema is modified, it ensures that after the BE restarts, every Rowset can find its corresponding schema.
Moreover, this logic is backward compatible with older versions, so even after an upgrade and subsequent downgrade, the BE will still be able to retrieve the correct schema.
Compared to imports, DDL operations can be considered low-frequency tasks. As a result, in most cases, the Rowset meta generated during imports will not carry the schema, which helps alleviate metadata bloat.
However, there can still be some bad cases. For example, in non-PK tables, during the period between an alter operation and the deletion of outdated Rowset meta, if the number of outdated Rowsets is particularly large, the system will still rewrite all outdated Rowsets each time the tablet meta is saved. This can still lead to a decline in import performance.
To solve this issue, we need to resolve the problem of storing multiple copies of the same schema. I think we can first support downgrading and then resolve this issue, allowing for an iteration based on this PR.
Below is a test based on this PR: a table with 200 columns, one bucket, writing one row of data at a time, with 10 concurrent threads, executed 1,000 times.

| Branch | Table type | Total cost time |
|--------|------------|-----------------|
| main-8f128b | Duplicate | 1043.77 s |
| this pr | Duplicate | 178.49 s |
| main-8f128b | Primary | 188.46 s |
| this pr | Primary | 186.68 s |