
feat(java): support statistics row num for lance scan #3304

Merged: 9 commits into lancedb:main on Jan 3, 2025

Conversation

@SaintBacchus (Contributor) commented on Dec 27, 2024:

Support row-count statistics for Lance scans; with this statistic, Spark will choose a broadcast join for a small table.
For now, the byte size of a Lance dataset is inferred from the row count, which is not very precise.
Maybe we should store the file size in the metadata, as discussed in #3221.
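For context, this is roughly how a Spark DataSource V2 scan reports such statistics so the optimizer can consider a broadcast join. The sketch below is illustrative rather than the PR's actual code; `rowCount` would come from the dataset's row count and `estimatedRowSize` from the schema-based inference discussed later in this thread:

```java
import java.util.OptionalLong;
import org.apache.spark.sql.connector.read.Statistics;

// Minimal sketch of a DSv2 Statistics implementation backed by a row count.
class LanceStatistics implements Statistics {
  private final long rowCount;
  private final long estimatedRowSize; // bytes per row, inferred from the schema

  LanceStatistics(long rowCount, long estimatedRowSize) {
    this.rowCount = rowCount;
    this.estimatedRowSize = estimatedRowSize;
  }

  @Override
  public OptionalLong sizeInBytes() {
    // Imprecise by design: size is row count times a schema-based row width.
    return OptionalLong.of(rowCount * estimatedRowSize);
  }

  @Override
  public OptionalLong numRows() {
    return OptionalLong.of(rowCount);
  }
}
```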

@github-actions bot added the enhancement (New feature or request) and java labels on Dec 27, 2024.
@eddyxu requested a review from @LuQQiu on Dec 31, 2024.
```diff
     try (LockManager.ReadLock readLock = lockManager.acquireReadLock()) {
       Preconditions.checkArgument(nativeDatasetHandle != 0, "Dataset is closed");
       return nativeCountRows();
     }
   }

-  private native int nativeCountRows();
+  private native long nativeCountRows();
```

The native declaration widens its return type from int to long, presumably so that datasets with more than Integer.MAX_VALUE rows report correctly.
A reviewer (Contributor) commented:
We do support countRows with a filter. Should we implement this as nativeCountRows(Optional<String> filter)?

@SaintBacchus (Author) replied:

It's a good suggestion; we can add this method for aggregate pushdown in the Spark connector.
In this PR, though, the Spark statistics only need the total row count. The statistic is consumed by the optimizer, and nativeCountRows(Optional<String> filter) would spend CPU evaluating the filter, which is not suitable for statistics collection.

The reviewer replied:

I was thinking of this as a public Dataset API question: we do need to expose the Dataset.java API as an equivalent of the Rust Dataset API.

In Spark, it is fine to just call nativeCountRows(None) for planning.

@SaintBacchus (Author) replied:

Should we add this count_filter in this PR?

The reviewer replied:

Can we have one native method and two overloaded public methods?

```java
private native long nativeCountRows(/* Nullable */ String filter);

// Overloads
public long countRows();
public long countRows(String filter);
```
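A minimal sketch of how those overloads could delegate to the single native entry point. The lock and precondition pattern mirrors the countRows() shown earlier in the diff; treating null as "no filter" is an assumption here, not confirmed PR code:

```java
// Sketch only: two public overloads funneling into one native method.
public long countRows() {
  return countRows(null); // null filter means "count all rows"
}

public long countRows(String filter) {
  try (LockManager.ReadLock readLock = lockManager.acquireReadLock()) {
    Preconditions.checkArgument(nativeDatasetHandle != 0, "Dataset is closed");
    // Assumed contract: the native side ignores a null filter.
    return nativeCountRows(filter);
  }
}

private native long nativeCountRows(/* @Nullable */ String filter);
```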

@SaintBacchus (Author) replied:

Fine, I will add it later.

Comment on lines +37 to +38
```java
// TODO: Support quickly getting the on-disk byte size of the Lance dataset.
// For now, infer the byte size from the schema for simplicity.
```
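As an illustration of the schema-based inference this TODO refers to (not the PR's exact code), one can estimate bytes from the row count and Spark's fixed per-type default sizes; `estimateSizeInBytes` is a hypothetical helper:

```java
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Illustrative sketch: estimate dataset bytes as rowCount * schema row width.
// Nested and variable-length types make this imprecise, which is exactly
// the limitation the TODO above notes.
static long estimateSizeInBytes(StructType schema, long rowCount) {
  long rowWidth = 0;
  for (StructField field : schema.fields()) {
    rowWidth += field.dataType().defaultSize(); // fixed estimate per type
  }
  return rowWidth * rowCount;
}
```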
@westonpace (Contributor) commented on Jan 2, 2025:

I've proposed per-field storage stats here: #3328

I started and then backed away from a whole-dataset number because I ran into some questions. No need to answer these now, but something to think about:

- Should a whole-dataset number include indices and manifests? Or just data? Or one number for just the data and one for everything?
- Should it include files that have been removed from the current version but not yet cleaned up?
- Should it include columns that have been dropped (these columns may still have data in old fragments)?

@SaintBacchus (Author) replied on Jan 3, 2025:

Hi @westonpace. The more precise the data size is, the better the optimizer works.
In other words, the Spark optimizer has to trade off the precision of the plan against the computational cost of each rule.

@SaintBacchus (Author) added:

The main use of dataset storage size in Spark is to decide whether to use a broadcast join. So if an accurate data size for a Lance dataset, including dropped columns and deleted rows, only takes O(N) IO calls (N being the number of fragments), I think we can use that API. If not, using the total size of all the data fragments is also fine for this case.
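For illustration, here is a hypothetical O(N) pass that sums the on-disk size of the data files. The caller supplies the file paths, since this sketch deliberately does not assume any particular Lance Java API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch: one stat() call per data file keeps this O(N)
// in the number of files, which is the cost discussed above.
static long totalDataFileBytes(List<Path> dataFiles) throws IOException {
  long total = 0;
  for (Path file : dataFiles) {
    total += Files.size(file); // size on disk, one IO call per file
  }
  return total;
}
```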

@eddyxu (Contributor) left a review comment:

This is great

@eddyxu merged commit 8a23d50 into lancedb:main on Jan 3, 2025, with 8 checks passed.