-
I just wrote some really crude Makefile targets for Kai Blumberg's Minimal Information about Food Components repo. They convert TSV example data files to YAML, which are then included in the standard examples validation step. It makes assumptions about the names of the TSV files, among other things. At this time, all slots in all classes (besides the …)

I really want to work towards very fast and easy validation of tabular data in these projects, because I believe that's when the utility of the schemas will become most apparent to end users.
-
see also
-
FWIW, LinkML-store is using its own approach for unifying loading and parsing of frame-shaped data. Although this would not have been necessary had we merged @sneakers-the-rat's PR earlier: linkml/linkml-runtime#305 (which provides a raw dict iterator for the different loaders). We may also want to look at https://ibis-project.org/, suggested by a LinkML community member, which provides a unified interface onto multiple backends.
-
i would be interested in this. i am doing my own hack on top of the pydantic generator to treat a special-cased class in the nwb schema as tables: https://github.com/p2p-ld/nwb-linkml/blob/ff77f0a2b8c95a46ee03ef81e35d8dd699feef15/nwb_linkml/src/nwb_linkml/includes/hdmf.py. i would way rather have a way of indicating that a given class expects tabular data, which as far as i can tell would just be a way of expressing that each of its slots/attributes is multivalued and has the same length. is there an elegant way we can think of expressing that? or should we just ducktype it?

The basic problem (i think) is that there are two orientations that tabular data could be modeled in: a class with slots such that each slot is multivalued and becomes a column, or a class with non-multivalued slots where each class instance is a row (see the sketch below). In general I prefer the latter, but for the sake of expressiveness and compatibility with other schemas the former should be possible as well.
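To make the two orientations concrete, here is a minimal sketch of what each might look like as LinkML YAML (embedded as Python strings; the class and slot names are hypothetical, and the column-oriented variant assumes some not-yet-existing way to assert equal column lengths):

```python
# Hypothetical sketches of the two tabular orientations discussed above.
# Neither is an official LinkML idiom; names are made up for illustration.

COLUMN_ORIENTED = """
classes:
  ElectrodeTable:          # one instance == one whole table
    attributes:
      id:
        multivalued: true  # each multivalued slot is a column
        range: integer
      location:
        multivalued: true
        range: string
    # missing piece: a way to say all columns must be the same length
"""

ROW_ORIENTED = """
classes:
  Electrode:               # one instance == one row
    attributes:
      id:
        range: integer
      location:
        range: string
"""
```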
-
If we assume a highly limited profile of LinkML (no `multivalued` slots, and all ranges are scalar types rather than classes), then CSV/TSV loading is mostly trivial, because the CSV is isomorphic to the underlying objects. In practical terms we can feed the dicts into python `csv.DictReader`/`DictWriter` and they will do the right thing.
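A minimal sketch of that trivial round trip, assuming the limited profile above (the file `people.tsv` and its columns are hypothetical):

```python
import csv

# Under the limited profile, each row is already isomorphic to one object:
# column name == slot name, cell == scalar value.
with open("people.tsv", newline="") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

# rows is now a list of flat dicts, ready for schema validation, e.g.
# [{"id": "p1", "name": "Alice", "age": "33"}, ...]
# Note: every value comes back as a string; even coercing "33" to an
# integer already requires consulting the schema.

with open("people_copy.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys(), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```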
Of course, even with this limited subset, there are multiple annoying edge cases due to the lack of adhered-to standards around CSVs, in particular around missing data (is an empty cell the empty string or null?), quoting, and escaping.
When we start to extend the profile, it gets harder to have predictable behavior. Even something as simple as allowing `multivalued` has annoying edge cases. We can pick an internal delimiter, such as `|`. This mostly works, modulo the missing data case above, and also assuming sensible escaping rules (see the sketch after this paragraph). There are edge cases where we might want a range to be an `any_of` of multivalued and single-valued, and there is no way to distinguish these cases. `any_of` is problematic in general - consider trying to distinguish `"1"` from `1` in a CSV.

Things get harder when we start to allow ranges to be classes; sometimes there are "obvious" ways to flatten this but usually not.
Another approach to `multivalued` is to use the relmodel_transformer, which introduces new linking tables, but here let's assume that by "tabular format" we mean a wide, denormalized table where all observations fit into one table, rather than linked relational tables.
For modern tabular databases and formats there is no problem: these allow for richer tabular profiles, including multivalued and inlined object references, with clear non-YOLO distinctions between different base types.
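For instance, a columnar format like Parquet (via pyarrow here, just as one example) can natively hold multivalued columns and typed scalars, so `1` and `"1"` never collide:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Multivalued slot -> a genuine list<string> column; integer slot -> int64.
# No internal delimiters, no "1"-vs-1 ambiguity, and null is distinct
# from both the empty list and the empty string.
table = pa.table({
    "id": pa.array([1, 2], type=pa.int64()),
    "aliases": pa.array([["a", "b"], []], type=pa.list_(pa.string())),
    "note": pa.array(["", None], type=pa.string()),  # "" != null
})
pq.write_table(table, "example.parquet")
```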
An orthogonal concern is that there is a lack of standards for dataset-level metadata. A sensible paradigm is to follow SSSOM and include a schema-controlled YAML block in the header, but everyone does this differently.
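As an illustration, SSSOM-style TSVs carry the YAML block as `#`-prefixed header lines; a minimal hand-rolled parser (a sketch, not SSSOM's own implementation) might look like:

```python
import csv
import io

import yaml  # pyyaml

def read_tsv_with_yaml_header(path: str) -> tuple[dict, list[dict]]:
    """Split an SSSOM-style file into (metadata dict, data rows)."""
    header_lines, data_lines = [], []
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                # drop "#" and one following space, preserving any
                # deeper indentation used for nested YAML
                header_lines.append(line[1:].removeprefix(" "))
            else:
                data_lines.append(line)
    metadata = yaml.safe_load("".join(header_lines)) or {}
    rows = list(csv.DictReader(io.StringIO("".join(data_lines)), delimiter="\t"))
    return metadata, rows
```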
But if we are limited to CSV/TSV, one approach is to treat this as a transformation problem. Different schemas can define different transforms for flattening data. See https://github.com/linkml/linkml-transformer. This is the most principled approach. However, it may be overkill for cases when the transformation is "obvious" (e.g. use a pipe to separate multivalued values). The problem is that when everyone's obvious transforms come together, it can be hard to reason over the combination.
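To make "obvious transform" concrete, here is the kind of flattening most people reinvent (a hand-rolled sketch, not linkml-transformer syntax): nested objects become dotted column names, and multivalued slots become pipe-joined cells:

```python
def flatten_obvious(obj: dict, prefix: str = "") -> dict:
    """Flatten one object into a single wide row, the 'obvious' way."""
    row = {}
    for key, value in obj.items():
        col = f"{prefix}{key}"
        if isinstance(value, dict):    # inlined object -> dotted columns
            row.update(flatten_obvious(value, prefix=f"{col}."))
        elif isinstance(value, list):  # multivalued -> pipe-joined cell
            row[col] = "|".join(str(v) for v in value)
        else:
            row[col] = value
    return row

print(flatten_obvious(
    {"id": "p1", "address": {"city": "Eugene"}, "aliases": ["al", "bob"]}
))
# {'id': 'p1', 'address.city': 'Eugene', 'aliases': 'al|bob'}
```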
The current approach with the tsv loaders and dumpers is to try and do the "obvious" transforms, and it mostly works. It uses this under the hood: https://github.com/cmungall/json-flattener -- it will do things like use json serialization for nested objects, etc.
Can we do better than this? There is an argument for having a separate library just for tabular data, where we could escape from the "isomorphism assumption" underpinning the current runtime loaders/dumpers. This could also have a plugin architecture that would make it easy for people to use other tabular/columnar formats like parquet, dask, etc. It would allow for a certain amount of flexibility without requiring the use of linkml-transformer, with some reasonable defaults (see the sketch below).
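A sketch of what such a plugin architecture could look like (entirely hypothetical; none of these names exist in linkml-runtime today):

```python
from typing import Callable, Iterator, Protocol

class TabularReader(Protocol):
    """Anything that yields one flat dict per row."""
    def __call__(self, path: str) -> Iterator[dict]: ...

# hypothetical registry mapping a format name to its reader
READERS: dict[str, TabularReader] = {}

def register_reader(fmt: str) -> Callable[[TabularReader], TabularReader]:
    """Decorator registering a reader plugin for a format."""
    def deco(fn: TabularReader) -> TabularReader:
        READERS[fmt] = fn
        return fn
    return deco

@register_reader("tsv")
def read_tsv(path: str) -> Iterator[dict]:
    import csv
    with open(path, newline="") as f:
        yield from csv.DictReader(f, delimiter="\t")

@register_reader("parquet")
def read_parquet(path: str) -> Iterator[dict]:
    import pyarrow.parquet as pq
    yield from pq.read_table(path).to_pylist()

def load(path: str, fmt: str) -> Iterator[dict]:
    """Dispatch to whichever backend plugin claims this format."""
    return READERS[fmt](path)
```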
We may want to look into using flatten-tool as a drop-in replacement for json-flattener (see #1031). This is schema-guided and uses simple json-schema profiles to guide behavior.