storage: fix clone ColumnFile with new page id twice when segmentDangerouslyReplaceDataFromCheckpoint #9436

Open

Lloyd-Pottiger wants to merge 2 commits into base: master
Conversation

Lloyd-Pottiger
Contributor

What problem does this PR solve?

Issue Number: ref #6233

Problem Summary: When replacing a segment's data from a checkpoint, the ColumnFiles have already been cloned with newly allocated page ids by CloneColumnFilesHelper::clone, but Segment::dangerouslyReplaceDataFromCheckpoint cloned them with new page ids a second time, referencing the same data pages twice.

What is changed and how it works?

// Call site: clone all column files of the segment to ingest, then hand the
// cloned persisted files (together with the checkpoint DMFile) to
// segmentDangerouslyReplaceDataFromCheckpoint.
auto [in_memory_files, column_file_persisteds] = segment_to_ingest->getDelta()->cloneAllColumnFiles(
    segment_to_ingest->mustGetUpdateLock(),
    dm_context,
    ingest_range,
    wbs);
wbs.writeLogAndData();
RUNTIME_CHECK(in_memory_files.empty());
RUNTIME_CHECK(dm_files.size() == 1);
const auto new_segment_or_null
    = segmentDangerouslyReplaceDataFromCheckpoint(dm_context, segment, dm_files[0], column_file_persisteds);
const bool succeeded = new_segment_or_null != nullptr;
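
For illustration only (this helper is not part of the PR and its name is made up): after cloneAllColumnFiles returns, every cloned ColumnFileTiny in column_file_persisteds should already hold its own freshly allocated data page id, whose ref page was written by CloneColumnFilesHelper::clone. A quick sanity check over the result could look roughly like the sketch below, relying only on the accessors shown in the snippets that follow.

// Illustrative sketch: verify that each cloned ColumnFileTiny carries a
// newly allocated, distinct data page id (i.e. no page id is handed out twice).
void sanityCheckClonedTinyFiles(const ColumnFilePersisteds & column_file_persisteds)
{
    std::unordered_set<PageIdU64> seen; // requires <unordered_set>
    for (const auto & column_file : column_file_persisteds)
    {
        if (const auto * tiny = column_file->tryToTinyFile(); tiny)
        {
            const bool inserted = seen.insert(tiny->getDataPageId()).second;
            RUNTIME_CHECK(inserted);
        }
    }
}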

std::pair<ColumnFiles, ColumnFilePersisteds> DeltaValueSpace::cloneAllColumnFiles(
    const DeltaValueSpace::Lock &,
    DMContext & context,
    const RowKeyRange & target_range,
    WriteBatches & wbs) const
{
    auto [new_mem_files, flushed_mem_files] = mem_table_set->diffColumnFiles({});
    // As we are diffing the current memtable with an empty snapshot,
    // we expect everything in the current memtable to be "newly added" compared to this "empty snapshot",
    // and none of the files in the "empty snapshot" to have been flushed.
    RUNTIME_CHECK(flushed_mem_files.empty());
    RUNTIME_CHECK(new_mem_files.size() == mem_table_set->getColumnFileCount());
    // We are diffing with an empty list, so everything in the current persisted layer
    // should be considered as "newly added".
    auto new_persisted_files = persisted_file_set->diffColumnFiles({});
    RUNTIME_CHECK(new_persisted_files.size() == persisted_file_set->getColumnFileCount());
    return {
        CloneColumnFilesHelper<ColumnFilePtr>::clone(context, new_mem_files, target_range, wbs),
        CloneColumnFilesHelper<ColumnFilePersistedPtr>::clone(context, new_persisted_files, target_range, wbs),
    };
}

template <class ColumnFilePtrT>
std::vector<ColumnFilePtrT> CloneColumnFilesHelper<ColumnFilePtrT>::clone(
    DMContext & dm_context,
    const std::vector<ColumnFilePtrT> & src,
    const RowKeyRange & target_range,
    WriteBatches & wbs)
{
    std::vector<ColumnFilePtrT> cloned;
    cloned.reserve(src.size());
    for (auto & column_file : src)
    {
        if constexpr (std::is_same_v<ColumnFilePtrT, ColumnFilePtr>)
        {
            if (auto * b = column_file->tryToInMemoryFile(); b)
            {
                auto new_column_file = b->clone();
                // No matter what, never append to a column file that was cloned from an old column file,
                // because they could share the same cache, and the cache can NOT be inserted into
                // from different column files in different deltas.
                new_column_file->disableAppend();
                cloned.push_back(new_column_file);
                continue;
            }
        }
        if (auto * dr = column_file->tryToDeleteRange(); dr)
        {
            auto new_dr = dr->getDeleteRange().shrink(target_range);
            if (!new_dr.none())
            {
                // Only keep the delete_range column file if it is still non-empty after shrinking.
                cloned.push_back(dr->cloneWith(new_dr));
            }
        }
        else if (auto * t = column_file->tryToTinyFile(); t)
        {
            // Use a newly created page_id to reference the data page_id of the current column file.
            PageIdU64 new_data_page_id = dm_context.storage_pool->newLogPageId();
            wbs.log.putRefPage(new_data_page_id, t->getDataPageId());
            auto new_column_file = t->cloneWith(new_data_page_id);
            cloned.push_back(new_column_file);
        }
        else if (auto * f = column_file->tryToBigFile(); f)
        {
            auto delegator = dm_context.path_pool->getStableDiskDelegator();
            auto new_page_id = dm_context.storage_pool->newDataPageIdForDTFile(delegator, __PRETTY_FUNCTION__);
            // Note that the file id may have already been marked as deleted. We must
            // create a reference to the page id itself instead of creating a reference
            // to the file id.
            wbs.data.putRefPage(new_page_id, f->getDataPageId());
            auto file_id = f->getFile()->fileId();
            auto old_dmfile = f->getFile();
            auto file_parent_path = old_dmfile->parentPath();
            if (!dm_context.global_context.getSharedContextDisagg()->remote_data_store)
            {
                RUNTIME_CHECK(file_parent_path == delegator.getDTFilePath(file_id));
            }
            auto new_file = DMFile::restore(
                dm_context.global_context.getFileProvider(),
                file_id,
                /* page_id= */ new_page_id,
                file_parent_path,
                DMFileMeta::ReadMode::all(),
                dm_context.keyspace_id);
            auto new_column_file = f->cloneWith(dm_context, new_file, target_range);
            cloned.push_back(new_column_file);
        }
        else
        {
            throw Exception("Meet unknown type of column file", ErrorCodes::LOGICAL_ERROR);
        }
    }
    return cloned;
}
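
To make the double allocation in the PR title concrete: the tiny-file branch above is the pattern that must run exactly once per clone. Distilled into a standalone sketch (the helper name cloneTinyOnce is made up for illustration and is not part of the codebase):

// Illustration only: cloning a ColumnFileTiny costs exactly one new page id
// plus one ref entry that points at the original data page. Running this a
// second time for an already cloned file leaves a redundant page id and ref
// page behind, which is the bug this PR fixes.
auto cloneTinyOnce(DMContext & dm_context, WriteBatches & wbs, ColumnFileTiny * tiny)
{
    PageIdU64 new_data_page_id = dm_context.storage_pool->newLogPageId();
    wbs.log.putRefPage(new_data_page_id, tiny->getDataPageId());
    return tiny->cloneWith(new_data_page_id);
}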

We have already acquired a new page id for each column file in CloneColumnFilesHelper::clone, so Segment::dangerouslyReplaceDataFromCheckpoint can simply reuse that page id instead of allocating another one and creating a second reference.
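
A rough sketch of the intended shape of the fix (illustrative only; placeCheckpointColumnFiles is a made-up name, and the real change lives inside Segment::dangerouslyReplaceDataFromCheckpoint):

// Sketch: the persisted column files handed to the replace step were produced
// by DeltaValueSpace::cloneAllColumnFiles, so each of them already owns a
// freshly allocated page id whose ref page is recorded in the caller's
// WriteBatches. They can be used as-is.
ColumnFilePersisteds placeCheckpointColumnFiles(const ColumnFilePersisteds & column_file_persisteds)
{
    // Do NOT run another round of
    //     auto new_page_id = dm_context.storage_pool->newLogPageId();
    //     wbs.log.putRefPage(new_page_id, file->getDataPageId());
    // for these files; that second clone is exactly what this PR removes.
    return column_file_persisteds;
}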


Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

None

…erouslyReplaceDataFromCheckpoint

Signed-off-by: Lloyd-Pottiger <[email protected]>
@ti-chi-bot ti-chi-bot bot added the release-note-none Denotes a PR that doesn't merit a release note. label Sep 18, 2024
Contributor

ti-chi-bot bot commented Sep 18, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from lloyd-pottiger, ensuring that each of them provides their approval before proceeding. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Sep 18, 2024
Signed-off-by: Lloyd-Pottiger <[email protected]>
@ti-chi-bot ti-chi-bot bot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Sep 19, 2024
@CalvinNeo
Member

/hold It is a great job and a nice catch. Let me pick it into a dedicated branch on tiflash-cse FAP and run the QA test first.

@ti-chi-bot ti-chi-bot bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 19, 2024
@CalvinNeo
Member

Testing on https://github.com/tidbcloud/tiflash-cse/pull/299/files.

Too many queued tasks...

@ti-chi-bot ti-chi-bot bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 4, 2024
Contributor

ti-chi-bot bot commented Oct 4, 2024

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@CalvinNeo CalvinNeo self-requested a review December 24, 2024 17:00