With a patient- and clinician-facing application in progress, we will need to provide status updates regarding incoming data and its quality - in terms of the data itself as well as compliance.
From our meeting on the 22nd of March
Most likely we will aggregate data from each individual file download for quick and easy access by any companion application. Most API calls from device providers already include some metrics, and others can be calculated in our pipeline - such as file size. As we are on the verge of running the pipeline live, and given current priorities, we will add file size as a metric to our database but leave the others for now. Our intent is to rerun the pipeline at a later date to retrieve all remaining metadata, without actually downloading the files again. Since file size can only be determined at download time, this metric is included in the current version (in #46 ) so that we do not have to repeat the downloads later.
We will use this issue to keep the discussion going about which metrics or aggregations we want. Current thoughts on useful metrics include (see the sketch after the list):
Per data recording:
File size
Dates covered by this data
Duration of recording
Date of download
As well as device-specific metrics (often provided by the manufacturer), such as:
Actual wear times (Dreem)
Data quality (ByteFlies)
etc.
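A minimal sketch of what such a per-recording record could look like, assuming a Python pipeline; the class name and field names (RecordingMetrics, file_size_bytes, covered_start, covered_end, downloaded_at) are illustrative placeholders, not an agreed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class RecordingMetrics:
    """Per-recording metrics aggregated at download time (illustrative only)."""
    participant_id: str
    device: str               # e.g. "Dreem", "ByteFlies"
    file_size_bytes: int      # the only metric stored in the current version (#46)
    covered_start: datetime   # start of the period covered by this data
    covered_end: datetime     # end of the period covered by this data
    downloaded_at: datetime   # date of download

    @property
    def duration(self) -> timedelta:
        """Duration of the recording, derived from the covered period."""
        return self.covered_end - self.covered_start
```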
For any of the quality metrics we aggregate or derive, I'd argue for storing them in a way that represents the original data/value, rather than as a mapping from 0 to 1, for example. This will allow us to alter the mapping and/or change thresholds in any calculations in the long run.
For example, if we take file size, we can calculate kB/hour or kB/day. Then, when 'plotting' this, we can assess it and assign a good/medium/bad label. If instead we immediately convert the kB/hour to a value between 0 and 1, we lose the ability to change that last step.
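A minimal sketch of that separation, reusing the illustrative RecordingMetrics record above; the threshold values and the good/medium/bad cut-offs are placeholders, chosen only to show that the labelling step can be retuned without touching the stored kB/hour values:

```python
def kb_per_hour(metrics: RecordingMetrics) -> float:
    """Raw, unit-preserving rate derived from the stored values."""
    hours = metrics.duration.total_seconds() / 3600
    if hours <= 0:
        return 0.0
    return (metrics.file_size_bytes / 1024) / hours


def label(rate_kb_per_hour: float, good: float = 500.0, medium: float = 100.0) -> str:
    """Map the raw rate to a label; the thresholds are assumptions, not agreed values."""
    if rate_kb_per_hour >= good:
        return "good"
    if rate_kb_per_hour >= medium:
        return "medium"
    return "bad"
```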