[Proposal] Decouple logical from physical types #11513
Comments
Thank you @notfilippo -- I think this proposal is well thought out and makes a lot of sense to me. If we were to implement it I think the benefits for DataFusion would be enormous. From my perspective, the use of Arrow types in logical planning in DataFusion (e.g. type coercion) has always been a bit of an impedance mismatch. When there were just a few variants (e.g. ...) it was manageable, but as Arrow evolves (e.g. to include encodings like StringView) the mismatch becomes more and more painful.
I think breaking changes to the API are inevitable, but I think we can manage the pain through careful API thought and deprecation. More thoughts to follow.
Thoughts on the technical details
Since ...

Another possibility would be to migrate ... I think the biggest challenge of this project will be managing the transition from Arrow DataType to LogicalTypes. You have done a great job articulating the places that would need to be changed. I think we could manage the transition over time by, for example, deprecating (but leaving UDF APIs in terms of DataType).
Another possibility would be to make a function like the into_one_of shown below:

let input: &ColumnarValue = &args[0];
// get input as one of the named types, casting if necessary
let input = input.into_one_of(&[DataType::Utf8View, DataType::Utf8])?;
match input.data_type() {
DataType::Utf8View => { /*specialized impl for StringViewArray */ },
DataType::Utf8 => { /*specialized impl for StringArray */ },
_ => unreachable!()
}
Another thing we could do is relax the requirement that the ...
Initially, I planned to propose repurposing the DFSchema for this change. Still, I decided against it (at least for the first draft) because of this open question that I've noted above:
This issue originates from the fact that TableSource and TableProvider (the "native" sources of schemas) would have to return a DFSchema to include the ...
This proposal makes sense to me. Thanks for driving this @notfilippo.
I was thinking that a ... So, for example, if a TableProvider said it returned a ... I haven't looked at the code, so there may be some reason this wouldn't work.
The main challenge I see is that this will be a very large project. The high-level premise of separating logical and physical types makes sense from a first-principle POV, but the cost/benefit analysis at this point is not very clear to me. The latter will probably depend on the implementation strategy and the exact architecture we adopt, so I think we should try out different ideas and approaches before committing to a strategy/architecture and accepting/rejecting/deferring the idea based on a possibly premature choice.
AFAICT ...
I was thinking that there is no fundamental difference between using ... Thus I was thinking we could simplify the physical implementations by not having different codepaths, and this would also give us some first-hand experience in how mapping logical --> physical types might look.
This generally looks good. I agree that starting with making ... One small nit: I don't think I would lump together ...
Is it possible to have the mapping of the relation between arrow's DataType and ...? As long as there is only one ...
We can then easily get the ... Is there any type mapping that can't be done without ...?
I think ...
This could be interesting to try -- both in terms of whether we can somehow simplify ... It would also be a relatively contained draft, and if the result is satisfactory, it can possibly get merged in even if the full proposal is for some reason not yet doable in our current state.
I agree with this approach. I'll dive deeper next week and report back my findings.
Why don't we have ... or something like:

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum LogicalType {
LogicalPrimitiveType,
Date,
Time32(TimeUnit),
Time64(TimeUnit),
Timestamp(TimeUnit, Option<Arc<str>>),
Duration(TimeUnit),
Interval(IntervalUnit),
List(LogicalPhysicalFieldRef),
Struct(LogicalPhysicalFields),
Dict(LogicalPrimitiveType),
Map(LogicalPhysicalFieldRef, bool),
Decimal128(u8, i8),
Decimal256(u8, i8),
Union(LogicalUnionFields, UnionMode), // TODO: extension signatures?
// UserDefinedType
}
pub enum LogicalPrimitiveType {
Null,
Int8,
Int16,
Int32,
Int64,
UInt8,
UInt16,
UInt32,
UInt64,
Boolean,
Float16,
Float32,
Float64,
Utf8,
Binary,
}
That is an interesting idea. I don't fully understand the implications, but if we can figure out how to avoid having to switch from ...
I like the idea of separating logical types from arrow types, but it would be great to understand the exact consequences. The SQL frontend should have "very logical types". For example, we don't need ... Then, from the DF-as-a-library perspective, physical representation becomes important.
I am concerned that deriving support for a logical type from the support for a physical type is actually a slippery slope. Let's consider an example. Let's assume I have a my_duration(unit) type whose physical representation is a 64-bit integer. Now, I have an add_one() function that can take 64-bit integer values and add +1 to them. The +1 operation is a perfectly valid operation for i64 -- it's valid for the SQL long/bigint type. It's also valid for my_duration(nanos), but it's not valid for my_duration(micros), since it produces an unaligned value (not divisible by 1000).
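To make the concern concrete, here is a minimal self-contained sketch of the example above (my_duration, its unit, and add_one are hypothetical names from this comment, not DataFusion APIs): an i64 kernel that is perfectly fine for bigint silently breaks the invariant of the micros-unit duration.

// Hypothetical sketch of the example above. The physical representation of
// my_duration is an i64 number of nanoseconds; a my_duration(micros) value
// must therefore stay divisible by 1_000.
#[derive(Clone, Copy, Debug)]
enum DurationUnit {
    Nanos,
    Micros,
}

#[derive(Debug)]
struct MyDuration {
    unit: DurationUnit,
    nanos: i64,
}

// A kernel that is perfectly valid for plain 64-bit integers (SQL bigint)...
fn add_one_i64(v: i64) -> i64 {
    v + 1
}

fn main() {
    let d = MyDuration { unit: DurationUnit::Micros, nanos: 5_000 };
    // ...but applied blindly to the carrier value of my_duration(micros) it
    // produces 5_001 ns, which is no longer a whole number of microseconds.
    let broken = add_one_i64(d.nanos);
    assert_eq!(broken % 1_000, 1);
    println!("{:?} + 1 = {} ns (invariant broken)", d.unit, broken);
}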
I found a previous discussion and will reference it here: #7421
I have a question: how do users specify the underlying physical types? FYI, ClickHouse exposes physical types to users like this.
The physical type in DataFusion is the Arrow type.
Apologies for the unclear description. I meant to ask: if we opt for LogicalType, how do users then specify the physical types? This is important because in certain scenarios, only the users can determine the most suitable data types.
@findepi -- If I understand correctly, your example is about the logical type ... Functions that have specialised implementations for ... Hypothetically, if ...
@doki23 -- Through the use of a custom implementation of the TypeRelation trait. Example:

#[derive(Debug)]
struct MyString {
logical: LogicalType,
physical: DataType,
}
impl Default for MyString {
fn default() -> Self {
Self {
logical: LogicalType::Utf8,
physical: DataType::new_list(DataType::UInt8, false),
}
}
}
impl TypeRelation for MyString {
fn logical(&self) -> &LogicalType {
&self.logical
}
fn physical(&self) -> &DataType {
&self.physical
}
// ...
}
@wjones127 -- Noted! I was also hesitant about including the Fixed* variants into the base ones, and your explanation makes sense to me. While I agree that having a fixed-length constraint for a list of logical types makes sense, I am not convinced about FixedSizeBinary. What would be the use case? Do you have some example in mind?
Example uses of fixed size binary are representing arrays of data types not supported in Arrow, such as bf16 or i128 (UUID). A value with a different number of bytes would be considered invalid.
I would like to stress that the intent of this proposal remains to decouple logical types from physical types in order to achieve the following goal:
While the goal seems to have achieved wide consensus, the path to reach it has not been finalised. Through some experiments (#11160 -> #11978 -> #12536) we've been trying to narrow down a possible approach to commit to in order to make progress. As this proposal aims at changing the tires on a moving car, there is and there will be a lot of discussion in order to complete the migration safely and without breaking too much functionality for end users. This will certainly result in an intermediate state where the existing behaviour is supported by temporarily tracking ...

Re: @findepi's proposal,
This proposal is compatible with (and actually depends on) decoupling logical from physical types, but I think it's a further step ahead to consider once we at least clear the initial steps to take in order to make LogicalTypes happen. Additionally, I think it should be filed as a separate, but related, ticket. I understand that it heavily depends on and influences the choices of this proposal, but judging by the comments above I think there needs to be a separate discussion in order to validate the idea on its own.
Not sure where we discussed this already, but I would love to support both logical types and physical types when declaring function signatures in order to let the user have full control over the arguments: as little control as simply specifying a LogicalType + cast, or as much control as precise function signatures for specific DataTypes. Instead I was planning on keeping ...
I don't mind creating a new ticket if needed. I created #12644 to track Extension Types which -- although mentioned already in this issue -- feel like an even higher goal to achieve. I am not convinced, however, that we should separate these discussions: "use new DataFusion types in logical plans" and "use new DataFusion types in physical plans as well". They feel very related and both impact the type system design. I don't want to paint ourselves into a corner just because we narrowed the horizon. Using logical types for all of the planning (including physical planning) is very important for performance -- see @andygrove 's Comet comment #11513 (comment) -- but obviously this is not a Comet-only thing. As with any performance improvement, it may be impossible not to regress some specialized cases along the way. There is no point in breaking something for the sake of breaking. Just that if we allow only incremental improvements, we achieve a local optimum only.
The core issue seems to be mapping between Arrow's DataType and DataFusion's logical type. While we could technically bypass logical types and handle everything using Arrow's DataType, it's beneficial to have a simplified version of DataType for easier processing. This is where the logical type comes in—it serves as a simplified version of DataType to enumerate all semantically equivalent types. In this case, the logical type should be a more constrained or reduced representation than Arrow's DataType. We want a one-way mapping from Arrow's DataType to user-defined or extension types, where the logical type (DataFusion's native type) acts as the single source of truth within DataFusion, much like native types in Rust. To achieve this, we need two traits for type mapping:
If these types map to the same logical type, it implies that we can correctly decode the value as the expected type. Otherwise, it signals a type mismatch.

#[derive(Clone)]
pub enum LogicalType {
Int32,
String,
Float32,
Float64,
FixedSizeList(Box<LogicalType>, usize),
// and more
Extension(Arc<dyn ExtensionType>),
}
pub trait ExtensionType {
fn logical_type(&self) -> LogicalType;
}
pub struct JsonType {}
impl ExtensionType for JsonType {
fn logical_type(&self) -> LogicalType {
LogicalType::String
}
}
pub struct GeoType {
n_dim: usize
}
impl ExtensionType for GeoType {
fn logical_type(&self) -> LogicalType {
LogicalType::FixedSizeList(Box::new(LogicalType::Float64), self.n_dim)
}
}
pub trait PhysicalType {
fn logical_type(&self) -> LogicalType;
}
impl PhysicalType for DataType {
fn logical_type(&self) -> LogicalType {
match self {
DataType::Int32 => LogicalType::Int32,
DataType::FixedSizeList(f, n) => {
LogicalType::FixedSizeList(Box::new(f.data_type().logical_type()), *n as usize)
}
_ => todo!("")
}
}
}

I'd love to hear if this design is sound, or if there are any potential pitfalls in how I've approached type mapping.
FYI I touched upon the topic of types at the DataFusion meetup in Belgrade yesterday.
One thing I found a bit confusing was:

pub trait ExtensionType {
fn logical_type(&self) -> LogicalType;
}

But one of the LogicalType variants is the extension type itself:

#[derive(Clone)]
pub enum LogicalType {
//...
Extension(Arc<dyn ExtensionType>),
}

It seems like the idea is that the logical type would report its underlying representation to DataFusion. I wonder when we would ever need to know the "logical_type" of an extension type. I think it would never make sense for DataFusion to treat a logical extension type as its underlying representation. Just because I know JSON columns are represented as Strings internally, I think they should never be treated as Strings unless the user explicitly requests such a conversion and the ExtensionType defines how to cast to a String.
I agree with this, but we don't need to treat it as a string if we don't care about it. In the logical layer, we can differentiate Json and String by the enum itself or maybe ... I rewrote another one to show we can differentiate Json and String but also treat them the same if we need to.

// Minimum set as Datafusion Native type
#[derive(Clone, PartialEq, Eq)]
pub enum DatafusionNativeType {
Int32,
UInt64,
String,
Float32,
Float64,
FixedSizeList(Box<DatafusionNativeType>, usize),
}
pub trait LogicalDataType {
fn name(&self) -> &str;
fn native_type(&self) -> DatafusionNativeType;
}
// This allows us to treat `DatafusionNativeType` as Trait `LogicalDataType`,
// there might be better design but the idea is to get `DatafusionNativeType`
// for both UserDefinedType and BuiltinType
//
// Alternative design is like
// pub enum DatafusionType {
// Builtin(DatafusionNativeType),
// Extension(Arc<dyn LogicalDataType>)
// }
impl LogicalDataType for DatafusionNativeType {
fn native_type(&self) -> DatafusionNativeType {
match self {
DatafusionNativeType::Int32 => DatafusionNativeType::Int32,
_ => self.clone()
}
}
fn name(&self) -> &str {
match self {
DatafusionNativeType::Int32 => "i32",
DatafusionNativeType::Float32 => "f32",
_ => todo!("")
}
}
}
fn is_numeric(logical_data_type: &Arc<dyn LogicalDataType>) -> bool {
matches!(logical_data_type.native_type(), DatafusionNativeType::Int32 | DatafusionNativeType::UInt64) // and more
}
// function where we only care about the Logical type
fn logical_func(logical_type: Arc<dyn LogicalDataType>) {
if is_numeric(&logical_type) {
// process it as numeric
}
// For user-defined type, maybe there is another way to differentiate the type instead by name
match logical_type.name() {
"json" => {
// process json
},
"geo" => {
// process geo
},
_ => todo!("")
}
}
// function where we care about the internal physical type so we can modify the Array.
fn physical_func(logical_type: Arc<dyn LogicalDataType>, array: ArrayRef, schema: Schema) -> Result<()>{
let data_type_in_schema = schema.field(0).data_type();
let actual_native_type = data_type_in_schema.logical_type();
if logical_type.native_type() != actual_native_type {
return internal_err!("logical type mismatches with the actual data type in schema & array")
}
// For Json type, we know the internal physical type is String, so we need to ensure the
// Array is able to cast to StringArray variant, we can check the schema.
match logical_type.native_type() {
DatafusionNativeType::String => {
match data_type_in_schema {
DataType::Utf8 => {
let string_arr = array.as_string::<i32>();
Ok(())
}
DataType::Utf8View => {
let string_view_arr = array.as_string_view();
Ok(())
}
_ => todo!("")
}
}
_ => todo!("")
}
}
pub struct JsonType {}
impl LogicalDataType for JsonType {
fn native_type(&self) -> DatafusionNativeType {
DatafusionNativeType::String
}
fn name(&self) -> &str {
"json"
}
}
pub struct GeoType {
n_dim: usize
}
impl LogicalDataType for GeoType {
fn native_type(&self) -> DatafusionNativeType {
DatafusionNativeType::FixedSizeList(Box::new(DatafusionNativeType::Float64), self.n_dim)
}
fn name(&self) -> &str {
"geo"
}
}
pub trait PhysicalType {
fn logical_type(&self) -> DatafusionNativeType;
}
impl PhysicalType for DataType {
fn logical_type(&self) -> DatafusionNativeType {
match self {
DataType::Int32 => DatafusionNativeType::Int32,
DataType::FixedSizeList(f, n) => {
DatafusionNativeType::FixedSizeList(Box::new(f.data_type().logical_type()), *n as usize)
}
_ => todo!("")
}
}
}
Yes, I think it would make a lot of sense in the physical layer to simply pass along the actual arrow type (String in this case) as all the function calls / operators will have been resolved to do whatever conversion is needed (e.g. for comparisons).
Another way to potentially handle this is that the function call gets resolved to a specific implementation that gets specific Array types (like today). Physical planning / the extension type would be responsible for figuring out which specific implementation to call, etc.
I agree with the need for DataFusion's native type to be the single source of truth within DataFusion, to be "the types" of expressions, columns, etc. We clearly cannot have a one-way mapping / back mapping from Arrow's DataType back to the DF type. Arrow DataType cannot say "it's json" or "it's geometry". We can encode this information in the Arrow metadata field and this would be useful for clients reading data from DataFusion. However, this won't be sufficient for intermediate expressions, so the DataFusion types need to be treated as DF's responsibility, with Arrow types (DataType) serving as carrier / physical types only.
I see this could be seen as a solution to some problems like function calls (vide simple functions #12635). BTW what if this ...
I still don't get why not -- is there any example where we require more than one logical type for a DataType? Given an Arrow extension type like Json, we can consider it as either ... If the one-way mapping doesn't work, we can introduce ...
A trait for user-defined types is much more flexible. For native types we can have either a trait or an enum.
If we allow types to be extended (#12644), many different types may use the same arrow type as their carrier type (e.g. JSON -> String, Geo -> String).
logical JSON and logical String may both use Arrow String, but they have different behavior, e.g. when casting from/to some other type.
I may be naive here, but I do very much hope we only need a unidirectional mapping, from logical types to arrow types.
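As a rough illustration of what a purely unidirectional mapping could look like (the LogicalType variants below are made up for the example, not a proposed API): each logical type knows its default Arrow carrier type, and nothing ever needs to map an arbitrary DataType back to a logical type.

use arrow_schema::DataType;

// Illustrative logical types only; Json is an extension type carried as a string.
enum LogicalType {
    Int64,
    Utf8,
    Json,
}

// One-way mapping: logical type -> default physical (carrier) DataType.
fn default_carrier(logical: &LogicalType) -> DataType {
    match logical {
        LogicalType::Int64 => DataType::Int64,
        LogicalType::Utf8 | LogicalType::Json => DataType::Utf8,
    }
}

fn main() {
    assert_eq!(default_carrier(&LogicalType::Json), DataType::Utf8);
}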
Yes, both impl strategies seem syntactically equivalent, and starting from an enum feels like a lazy choice.
I think this is what we don't have consensus on: for arrow extension types, I think they should be considered individual types. Json, Geo, and String are 3 different arrow data types, thus 3 different logical types. Even though they all use StringArray internally, the metadata or the rules describing the StringArray should be considered different -- otherwise why not just use StringArray in the first place? Logical types can share the same DataType: logical JSON and logical String may both use Arrow String and can both cast to StringArray, and we can still tell the difference between these two logical types and process them with different logic. I think my assumption should be changed to: at most one DatafusionNativeType for each DataType. We can also think of it like this: we can find the minimum type set (DatafusionNativeTypes), and every Arrow DataType (including extension types) and every LogicalType (including user-defined types) can be mapped to at most one type in the DatafusionNativeTypes.
Maybe what we need is just checking whether the LogicalType we have is able to decode to a certain Arrow DataType 🤔 In this case we just need to know the set of types ...

trait LogicalType {
fn can_decode_to(&self, data_type: DataType) -> bool;
}
struct String {}
impl LogicalType for String {
fn can_decode_to(&self, data_type: DataType) -> bool {
matches!(data_type, DataType::Utf8 | DataType::Utf8View | DataType::LargeUtf8)
}
}
struct Json {}
impl LogicalType for Json {
fn can_decode_to(&self, data_type: DataType) -> bool {
matches!(data_type, DataType::Utf8 | DataType::Utf8View | DataType::LargeUtf8)
}
}
struct StringViewOnly {}
impl LogicalType for StringViewOnly {
fn can_decode_to(&self, data_type: DataType) -> bool {
matches!(data_type, DataType::Utf8View)
}
}
struct Geo {}
impl LogicalType for Geo {
fn can_decode_to(&self, data_type: DataType) -> bool {
if let DataType::FixedSizeList(inner_type, n) = data_type {
return matches!(inner_type.data_type(), DataType::Float32 | DataType::Float64) && n > 1
}
return false;
}
}
I don't really like that this function would require recursion to handle ...
@jayzhan211 and @findepi I really like the approach of ... :

pub trait LogicalType {
fn native(&self) -> &NativeType;
fn name(&self) -> Option<&str>; // as in 'ARROW:extension:name'.
}
pub struct String {}
impl LogicalType for String {
fn native(&self) -> &NativeType {
&NativeType::Utf8
}
fn name(&self) -> Option<&str> {
None
}
}
pub struct JSON {}
pub const JSON_EXTENSION_NAME: &'static str = "arrow.json";
impl LogicalType for JSON {
fn native(&self) -> &NativeType {
&NativeType::Utf8
}
fn name(&self) -> Option<&str> {
Some(JSON_EXTENSION_NAME)
}
}

And by modifying the temporary TypeRelation trait:

pub trait TypeRelation {
fn logical(&self) -> &dyn LogicalType;
fn physical(&self) -> &DataType;
}
pub struct NativeTypeRelation {
logical: Arc<dyn LogicalType>,
physical: DataType,
}
impl TypeRelation for NativeTypeRelation {
fn logical(&self) -> &dyn LogicalType {
&*self.logical
}
fn physical(&self) -> &DataType {
&self.physical
}
}
impl From<DataType> for NativeTypeRelation {
fn from(value: DataType) -> Self {
let logical = match value {
DataType::Utf8 | DataType::Utf8View | DataType::LargeUtf8 => {
Arc::new(String {})
}
_ => todo!(),
};
Self {
logical,
physical: value,
}
}
}

I was able to pass some string tests and write "JSON specific" functions.
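For instance, with the name() hook sketched above, a "JSON specific" function can be gated on the logical extension name rather than on the physical encoding (a hypothetical helper, not the exact code used in the experiment):

// Hypothetical helper built on the LogicalType trait sketched above:
// dispatch on the logical extension name, regardless of whether the data
// is physically Utf8, LargeUtf8 or Utf8View.
fn is_json(t: &dyn LogicalType) -> bool {
    t.name() == Some(JSON_EXTENSION_NAME)
}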
Do we still need ...?
For the final state of this proposal: no. As a transitory type to validate the concept you described above: yes.
@notfilippo This might work. I started a similar prototype for #12644. To reliably encode types in the Arrow schema, we would need a structured approach, perhaps something like this:

pub trait Type: Sync + Send {
fn name(&self) -> &Arc<TypeName>;
...
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct TypeName {
basename: String,
name: String,
parameters: Vec<TypeParameter>,
}
pub type TypeNameRef = Arc<TypeName>;
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum TypeParameter {
Type(TypeNameRef),
Number(i128),
}
impl TypeName {
pub fn basename(&self) -> &str {
&self.basename
}
pub fn name(&self) -> &str {
&self.name
}
pub fn parameters(&self) -> &[TypeParameter] {
&self.parameters
}
pub fn new(basename: impl Into<String>, parameters: Vec<TypeParameter>) -> Self {
// TODO assert parses, no braces, etc.
let basename = basename.into();
let name = FormatName {
basename: &basename,
parameters: &parameters,
}
.to_string();
Self {
basename,
name,
parameters,
}
}
pub fn from_basename(basename: impl Into<String>) -> Self {
Self::new(basename, vec![])
}
}

To allow type-checking ("is int" or "is string") for optimizer rules such as ...:

pub trait Type: Sync + Send {
fn as_any(&self) -> &dyn Any;
...
}

Type should also describe how to compare (native) values of that type, their equality, etc.
@notfilippo However, I realized that changing the logical type system would be easier if we had clearly defined architectural layers. I filed #12723 for this.
Abstract
Logical types are an abstract representation of data that emphasises structure without regard for physical implementation, while physical types are tangible representations that describe how data will be stored, accessed, and retrieved.
Currently the type model in DataFusion, both in logical and physical plans, relies purely on arrow's DataType enum. But whilst this choice makes sense for its physical execution engine (DataTypes map 1:1 with the physical array representation, defining implicit rules for storage, access, and retrieval), it has flaws when dealing with its logical plans. This is due to the fact that some logically equivalent array types in the Arrow format have very different DataTypes: for example, a logical string array can be represented by Arrow arrays of DataType Utf8, Dictionary(Utf8), RunEndEncoded(Utf8), and StringView (without mentioning the different index types that can be specified for dictionaries and REE arrays).

This proposal evaluates possible solutions for decoupling the notion of types in DataFusion's logical plans from DataTypes, evaluating their impact on DataFusion itself and on downstream crates.
Goals
Proposal
Defining a logical type
To define the list of logical types we must first take a look at the physical representation of the engine: the Arrow columnar format. DataTypes are the physical types of the DataFusion engine and they define storage and access patterns for buffers in the Arrow format.
Looking at a list of the possible DataTypes it's clear that while some map 1:1 with their logical representation, others also specify information about the encoding (e.g. Large*, FixedSize*, Dictionary, RunEndEncoded, ...). The latter must be consolidated into what they represent, discarding the encoding information; in general, types that can store different ranges of values should be different logical types. (ref)

What follows is a list of DataTypes and how they would map to their respective logical type following the rules above:
DataType | Logical type
--- | ---
Null | Null
Boolean | Boolean
Int8 | Int8
Int16 | Int16
Int32 | Int32
Int64 | Int64
UInt8 | UInt8
UInt16 | UInt16
UInt32 | UInt32
UInt64 | UInt64
Float16 | Float16
Float32 | Float32
Float64 | Float64
Timestamp(unit, tz) | Timestamp(unit, tz)
Date32 | Date
Date64 | Date (Date64 doesn't actually provide more precision. (docs))
Time32(unit) | Time32(unit)
Time64(unit) | Time64(unit)
Duration(unit) | Duration(unit)
Interval(unit) | Interval(unit)
Binary | Binary
FixedSizeBinary(size) | Binary
LargeBinary | Binary
BinaryView | Binary
Utf8 | Utf8
LargeUtf8 | Utf8
Utf8View | Utf8
List(field) | List(field)
ListView(field) | List(field)
FixedSizeList(field, size) | List(field)
LargeList(field) | List(field)
LargeListView(field) | List(field)
Struct(fields) | Struct(fields)
Union(fields, mode) | Union(fields)
Dictionary(index_type, data_type) | data_type, converted to logical type
Decimal128(precision, scale) | Decimal128(precision, scale)
Decimal256(precision, scale) | Decimal256(precision, scale)
Map(fields, sorted) | Map(fields, sorted)
RunEndEncoded(run_ends_type, data_type) | data_type, converted to logical type

User defined types
User defined physical types
The Arrow columnar format provides guidelines to define Extension types through the composition of native DataTypes and custom metadata in fields. Since this proposal already includes a mapping from DataType to logical type, we could extend it to support user defined types (through extension types) which would map to a known logical type.
For example, an extension type with the DataType of List(UInt8) and a custom metadata field {'ARROW:extension:name': 'myorg.mystring'} could have a logical type of Utf8.

User defined logical types
Arrow extension types can also be used to extend the list of supported logical types. An additional logical type called Extension could be introduced. This extension type would contain a structure detailing its logical type and the extension type metadata.

Boundaries of logical and physical types
In plans and expressions
As the prefix suggests, logical types should be used exclusively in logical plans (LogicalPlan and Expr) while physical types should be used exclusively in physical plans (ExecutionPlan and PhysicalExpr). This would enable logical plans to be purely logical, without worrying about underlying encodings.
Expr in logical plans would need to represent their resulting value as logical types through the trait method ExprSchemable::get_type, which would need to return a logical type instead.
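A sketch of the signature change this implies (the current return type is shown in a comment; illustrative only, not the final API):

pub trait ExprSchemable {
    // today (roughly): the expression reports an Arrow DataType
    // fn get_type(&self, schema: &dyn ExprSchema) -> Result<DataType>;

    // with this proposal: the expression reports a logical type instead
    fn get_type(&self, schema: &dyn ExprSchema) -> Result<LogicalType>;
}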
In functions
ScalarUDF, WindowUDF, and AggregateUDF all define their Signatures through the use of DataTypes. Function arguments are currently validated against signatures through type coercion during logical planning. With logical types, Signatures would be expressed without the need to specify the underlying encoding. This would simplify the type coercion rules, removing the need to traverse dictionaries and handle different containers, focusing instead on explicit logical rules (e.g. all logical types can be coerced to Utf8).

During execution the function receives a slice of ColumnarValue that is guaranteed to match the signature. Being strictly a physical operation, the function will have to deal with physical types. The ColumnarValue enum could be extended so that functions could choose to provide their own optimised implementation for a subset of physical types and then fall back to a generic implementation that materialises the argument to a known physical type. This would potentially allow native functions to support user defined physical types that map to known logical types.
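For illustration, this is roughly the kind of encoding-aware traversal that coercion rules need today and that a single logical Utf8 would make unnecessary (a standalone sketch, not DataFusion code):

use arrow_schema::DataType;

// All of these DataTypes carry the same logical string values.
fn is_logically_utf8(dt: &DataType) -> bool {
    match dt {
        DataType::Utf8 | DataType::LargeUtf8 | DataType::Utf8View => true,
        // encodings that wrap another type and must be traversed
        DataType::Dictionary(_, value) => is_logically_utf8(value),
        DataType::RunEndEncoded(_, values) => is_logically_utf8(values.data_type()),
        _ => false,
    }
}

fn main() {
    let dict = DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8));
    assert!(is_logically_utf8(&dict) && is_logically_utf8(&DataType::Utf8View));
}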
In substrait
The datafusion_substrait crate provides helper functions to enable interoperability between substrait plans and DataFusion's plans. While some effort has been made to support converting from / to DataTypes via type_variation_reference (example here), dictionaries are not supported as either literal types or cast types, leading to potential errors when trying to encode a valid LogicalPlan into a substrait plan. The usage of logical types would enable a more seamless transition between DataFusion's native logical plan and substrait.

Keeping track of the physical type
While logical types simplify the list of possible types that can be handled during logical planning, the relation to their underlying physical representation needs to be accounted for when transforming the logical plan into nested ExecutionPlan and PhysicalExpr, which will dictate how the query will execute.
This proposal introduces a new trait that represents the link between a logical type and its underlying physical representation:
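Based on the MyString example earlier in the thread, such a trait could look roughly like this (a sketch, not the exact definition; LogicalType is the logical type discussed above):

use arrow_schema::DataType;

// Sketch: TypeRelation ties a logical type to the physical DataType used to
// encode it; NativeType would be the blanket relation for plain Arrow types.
pub trait TypeRelation: std::fmt::Debug + Send + Sync {
    fn logical(&self) -> &LogicalType;
    fn physical(&self) -> &DataType;
}

#[derive(Debug)]
pub struct NativeType {
    logical: LogicalType,
    physical: DataType,
}

impl TypeRelation for NativeType {
    fn logical(&self) -> &LogicalType {
        &self.logical
    }
    fn physical(&self) -> &DataType {
        &self.physical
    }
}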
While NativeType would be primarily used for standard DataTypes and their logical relation, TypeRelation is defined to provide support for user defined physical types.

What follows is an exploration of the areas in which LogicalPhysicalType would need to get introduced:

A new type of Schema
To support the creation of LogicalPhysicalType a new schema must be introduced, which can be consumed as either a logical schema or used to access the underlying physical representation. Currently DFSchema is used throughout DataFusion as a thin wrapper for Arrow's native Schema in order to qualify fields originating from potentially different tables. This proposal suggests decoupling the DFSchema from its underlying Schema and instead adopting a new Schema-compatible structure (LogicalPhysicalSchema) but with DataTypes replaced by LogicalPhysicalType. This would also mean the introduction of a new Field-compatible structure (LogicalPhysicalField) which also supports LogicalPhysicalType instead of Arrow's native Field DataType.

DFSchema would be used by DataFusion's planning and execution engine to derive either logical or physical type information of each field. It should retain the current interoperability with Schema (and additionally the new
LogicalPhysicalSchema) allowing easy Into & From conversion.

Type sources

Types in plans are sourced through Arrow's native Schema returned by implementations of TableSource / TableProvider, variable DataTypes returned by VarProvider, and ScalarValue. To allow definition of custom LogicalPhysicalType these type sources should be edited to return LogicalPhysicalSchema / LogicalPhysicalType.

Tables

For tables, a non-breaking way of editing the trait to support LogicalPhysicalSchema could be a new logical_physical_schema() -> LogicalPhysicalSchema method whose default implementation calls schema() and converts it to a LogicalPhysicalSchema without introducing any custom LogicalPhysicalType (implementers are free to override this method and add custom LogicalPhysicalType), or editing the schema() method to return impl Into<LogicalPhysicalSchema>.

VarProvider

VarProvider needs to be edited in order to return a LogicalPhysicalType when getting the type of the variable, while the actual variable can very well remain a ScalarValue.

ScalarValue

ScalarValue should be wrapped in order to have a way of retrieving both its logical and its underlying physical type. When reasoning about logical plans it should be treated as its logical type, while its physical properties should be accessed exclusively by the physical plan.

Physical execution

For physical planning and execution, much like the invocation of UDFs, ExecutionPlan and PhysicalExpr must also be granted access to the LogicalPhysicalType in order to have the capability of performing optimised execution for a subset of supported physical types and then falling back to a generic implementation that materialises other types to a known physical type. This can be achieved by substituting the use of DataType and Schema with, respectively, LogicalPhysicalType and LogicalPhysicalSchema.

Impact on downstream dependencies

Care must be put in place not to introduce breaking changes for downstream crates and dependencies that build on top of DataFusion.
The most impactful changes introduced by this proposal are the LogicalPhysicalType, LogicalPhysicalSchema and LogicalPhysicalField types. These structures would replace most of the mentions of DataType, Schema and Field in the DataFusion codebase. Type sources (TableProvider / TableSource, VarProvider, and ScalarValue) and Logical / ExecutionPlan nodes would be greatly affected by this change. This effect can be mitigated by providing good Into & From implementations for the new types and by editing existing function arguments and return types as impl Into<LogicalPhysical*>, but it will still break a lot of things.

Case study: datafusion-comet
datafusion-comet is a high-performance accelerator for Apache Spark, built on top of DataFusion. A fork of the project containing changes from this proposal currently compiles without modifications. As more features in this proposal are implemented, namely UDF logical signatures, some refactoring might be required (e.g. for CometScalarFunction and other functions defined in the codebase). Refer to the draft's TODOs to see what's missing.

Non-goals (topics that might be interesting to dive into)
While not the primary goals of this proposal, here are some interesting topics that could be explored in the future:
RecordBatches with same logical type but different physical types
Integrating LogicalPhysicalSchema into DataFusion's RecordBatches, streamed from one ExecutionPlan to the other, could be an interesting approach to support the possibility of two record batches having logically equivalent schemas with different underlying physical types. This could be useful in situations where data, stored in multiple pages mapping 1:1 with RecordBatches, is encoded with different strategies based on the density of the rows and the cardinality of the values.

Current work effort
The [Epic] issue tracking this effort can be found at #12622 .