
[Enhancement] Skip tablet schema in rowset meta during ingestion. #50873

Merged · 12 commits into StarRocks:main · Oct 12, 2024

Conversation

@sevev sevev (Contributor) commented Sep 9, 2024

Why I'm doing:

Since version 3.2, each Rowset has been designed to save its own schema, so the complete tablet schema is stored in every Rowset's metadata. This can lead to the following issues:

  1. When the import frequency is very high and a large number of Rowsets are generated, the metadata in RocksDB grows, particularly for non-primary-key (non-PK) tables. This is because, for non-PK tables, each update to the tablet metadata rewrites the historical Rowset metadata, leaving a large amount of obsolete data in RocksDB.
  2. When the tablet has a very large number of columns (e.g., 10,000), the time taken to persist the Rowset metadata increases, especially when the imported data volume is small.

These two issues can eventually reduce the efficiency of real-time imports.

What I'm doing:

This PR attempts to solve the issue of reduced import efficiency caused by metadata bloat.

One feasible solution is to store the tablet schema only once for all Rowsets that share the same schema. Instead of saving the complete schema in each Rowset's metadata, a reference or marker would be saved in each Rowset’s metadata to point to the corresponding tablet schema.
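For concreteness, here is a minimal sketch of what that shared-schema design might look like; all type and field names are illustrative assumptions, not actual StarRocks code:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <string>

// Hypothetical shape of the "store once, reference everywhere" design
// that this PR deliberately does NOT adopt.
struct TabletSchemaPB {
    std::string columns_blob;  // serialized column definitions
};

struct SchemaStore {
    // One copy per distinct schema, keyed by an id (e.g. the schema version).
    std::map<int64_t, std::shared_ptr<TabletSchemaPB>> schemas;
};

struct RowsetMetaRef {
    int64_t schema_id;  // marker into SchemaStore instead of a full schema copy
};
```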

However, the issue with this solution is compatibility. In previous versions, each Rowset built its schema from its own metadata. If the system were upgraded and then rolled back to an older version, the older version would not be able to locate the schema through a reference or marker and would construct incorrect schemas, because previous versions expect the full schema to be included in each Rowset's metadata.

So I chose a more conservative solution; the main changes are as follows (both are sketched below):

  1. During import, skip the schema in the Rowset meta if the Rowset's schema is identical to the latest tablet schema.
  2. When the tablet schema is updated, rewrite every RowsetMeta that does not store its own schema.
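A minimal C++ sketch of both changes, using simplified stand-in types (`RowsetMeta`, `TabletSchema`, and the `skip_tablet_schema` flag below are assumptions for illustration, not the actual StarRocks definitions):

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Illustrative stand-ins for the real StarRocks types.
struct TabletSchema {
    int64_t schema_version = 0;
    std::string fingerprint;  // e.g. a hash over the column list

    bool equals(const TabletSchema& other) const {
        return schema_version == other.schema_version &&
               fingerprint == other.fingerprint;
    }
};

struct RowsetMeta {
    std::shared_ptr<TabletSchema> schema;  // embedded copy; may be dropped
    bool skip_tablet_schema = false;       // persisted without a schema if true
};

// Change 1: during import, drop the embedded schema when it is identical
// to the latest tablet schema.
void maybe_skip_schema(RowsetMeta& meta, const TabletSchema& latest) {
    if (meta.schema && meta.schema->equals(latest)) {
        meta.skip_tablet_schema = true;
        meta.schema.reset();  // this meta is now persisted without a schema
    }
}

// Change 2: when the tablet schema changes (e.g. on ALTER), rewrite every
// meta that carries no schema so it re-embeds the schema it relied on.
void on_tablet_schema_update(std::vector<RowsetMeta>& metas,
                             const std::shared_ptr<TabletSchema>& old_schema) {
    for (auto& meta : metas) {
        if (meta.skip_tablet_schema) {
            meta.schema = old_schema;
            meta.skip_tablet_schema = false;  // persisted with the schema again
        }
    }
}
```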

If the BE exits at any time and restarts, the Rowsets that did not save their own schema are initialized with the tablet's current schema. Since Rowset metas without schemas are rewritten each time the tablet schema is modified, every Rowset can find its correct schema after a BE restart.
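Continuing the sketch above with the same illustrative types, recovery on restart could look like this:

```cpp
// On BE restart, re-attach the tablet's current schema to every rowset meta
// that was persisted without one. This is safe because metas without schemas
// are rewritten on every tablet schema change, so the current tablet schema
// is always the one they were skipped against.
void restore_schemas_on_restart(std::vector<RowsetMeta>& metas,
                                const std::shared_ptr<TabletSchema>& tablet_schema) {
    for (auto& meta : metas) {
        if (!meta.schema) {
            meta.schema = tablet_schema;  // share one schema object across rowsets
        }
    }
}
```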

Moreover, this logic is backward compatible with older versions, so even after an upgrade and subsequent downgrade, the BE will still be able to retrieve the correct schema.

Compared to imports, DDL operations can be considered low-frequency tasks. As a result, in most cases, the Rowset meta generated during imports will not carry the schema, which helps alleviate metadata bloat.

However, there can still be some bad cases. For example, in non-PK tables, during the period between an alter operation and the deletion of outdated Rowset meta, if the number of outdated Rowsets is particularly large, the system will still rewrite all outdated Rowsets each time the tablet meta is saved. This can still lead to a decline in import performance.

To solve this issue completely, we need to avoid storing multiple copies of the same schema. I think we can land downgrade support first and then address that problem in a follow-up iteration based on this PR.

Below is a test based on this PR:
A table with 200 columns and one bucket, writing one row of data at a time with 10 concurrent threads, executed 1,000 times.

| Branch | Table type | Total cost time (s) |
|--------|------------|---------------------|
| main-8f128b | Duplicate | 1043.77 |
| this PR | Duplicate | 178.49 |
| main-8f128b | Primary | 188.46 |
| this PR | Primary | 186.68 |

Fixes #issue

What type of PR is this:

  • BugFix
  • Feature
  • Enhancement
  • Refactor
  • UT
  • Doc
  • Tool

Does this PR entail a change in behavior?

  • Yes, this PR will result in a change in behavior.
  • No, this PR will not result in a change in behavior.

If yes, please specify the type of change:

  • Interface/UI changes: syntax, type conversion, expression evaluation, display information
  • Parameter changes: default values, similar parameters but with different default values
  • Policy changes: use new policy to replace old one, functionality automatically enabled
  • Feature removed
  • Miscellaneous: upgrade & downgrade compatibility, etc.

Checklist:

  • I have added test cases for my bug fix or my new feature
  • This pr needs user documentation (for new or modified features or behaviors)
    • I have added documentation for my new feature or new function
  • This is a backport pr

Bugfix cherry-pick branch check:

  • I have checked the version labels which the pr will be auto-backported to the target branch
    • 3.3
    • 3.2
    • 3.1
    • 3.0
    • 2.5

@sevev sevev requested review from a team as code owners September 9, 2024 13:31
@mergify mergify bot assigned sevev Sep 9, 2024
@sevev sevev changed the title [WIP][Enhancement] Skip tablet schema in rowset meta during ingestion. [Enhancement] Skip tablet schema in rowset meta during ingestion. Sep 19, 2024
decster previously approved these changes Sep 24, 2024
@sevev sevev (Contributor, Author) commented Sep 24, 2024

@mergify rebase

mergify bot (Contributor) commented Sep 24, 2024

rebase

✅ Branch has been successfully rebased

@luohaha luohaha self-requested a review October 3, 2024 02:32
@StarRocks StarRocks deleted a comment from mergify bot Oct 3, 2024
wyb previously approved these changes Oct 3, 2024
luohaha previously approved these changes Oct 3, 2024
luohaha previously approved these changes Oct 12, 2024

[Java-Extensions Incremental Coverage Report]

pass : 0 / 0 (0%)


[FE Incremental Coverage Report]

pass : 0 / 0 (0%)


[BE Incremental Coverage Report]

pass : 166 / 187 (88.77%)

file detail

| path | covered_line | new_line | coverage | not_covered_line_detail |
|------|--------------|----------|----------|-------------------------|
| 🔵 be/src/storage/data_dir.cpp | 0 | 2 | 00.00% | [402, 421] |
| 🔵 be/src/storage/base_tablet.h | 0 | 1 | 00.00% | [112] |
| 🔵 be/src/storage/txn_manager.cpp | 23 | 31 | 74.19% | [272, 297, 298, 307, 308, 632, 633, 634] |
| 🔵 be/src/storage/tablet.cpp | 29 | 32 | 90.62% | [632, 633, 634] |
| 🔵 be/src/storage/tablet_updates.cpp | 49 | 54 | 90.74% | [2636, 5682, 5691, 5702, 5706] |
| 🔵 be/src/storage/tablet_meta.cpp | 27 | 29 | 93.10% | [329, 337] |
| 🔵 be/src/storage/compaction_task.h | 1 | 1 | 100.00% | [] |
| 🔵 be/src/storage/tablet_meta_manager.cpp | 10 | 10 | 100.00% | [] |
| 🔵 be/src/storage/olap_common.h | 6 | 6 | 100.00% | [] |
| 🔵 be/src/storage/rowset/rowset_meta_manager.cpp | 5 | 5 | 100.00% | [] |
| 🔵 be/src/storage/tablet.h | 2 | 2 | 100.00% | [] |
| 🔵 be/src/storage/rowset/rowset_meta.h | 14 | 14 | 100.00% | [] |

@sevev sevev merged commit 3005729 into StarRocks:main Oct 12, 2024
50 checks passed

@Mergifyio backport branch-3.3

@github-actions github-actions bot removed the 3.3 label Oct 12, 2024

@Mergifyio backport branch-3.2

@github-actions github-actions bot removed the 3.2 label Oct 12, 2024
mergify bot (Contributor) commented Oct 12, 2024

backport branch-3.3

✅ Backports have been created

mergify bot (Contributor) commented Oct 12, 2024

backport branch-3.2

✅ Backports have been created

mergify bot pushed a commit that referenced this pull request Oct 12, 2024
Signed-off-by: sevev <[email protected]>
Signed-off-by: zhangqiang <[email protected]>
(cherry picked from commit 3005729)

# Conflicts:
#	be/src/common/config.h
#	be/src/storage/tablet.h
#	be/src/storage/tablet_meta.h
mergify bot pushed a commit that referenced this pull request Oct 12, 2024
Signed-off-by: sevev <[email protected]>
Signed-off-by: zhangqiang <[email protected]>
(cherry picked from commit 3005729)

# Conflicts:
#	be/src/common/config.h
#	be/src/storage/compaction_task.h
#	be/src/storage/tablet.cpp
#	be/src/storage/tablet.h
#	be/src/storage/tablet_meta.cpp
#	be/src/storage/tablet_meta.h
#	be/src/storage/txn_manager.cpp
@yingtingdong (Contributor) commented:
ignore backport check:3.2.12

wanpengfei-git pushed a commit that referenced this pull request Oct 16, 2024
wanpengfei-git pushed a commit that referenced this pull request Oct 24, 2024
ZiheLiu pushed a commit to ZiheLiu/starrocks that referenced this pull request Oct 31, 2024
renzhimin7 pushed a commit to renzhimin7/starrocks that referenced this pull request Nov 7, 2024