While checking that the latest fetcher deployments were working, I noticed that the Japan file seemed to have been ingested twice.
Notice that we have 2 batches of hourly files and they all seem to report records as inserted. I would have expected one of the files to show that 0 records were inserted.
So it's possible that there is an issue with how we are recording the fetchlog stats after a file is ingested.
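Something along these lines (a sketch; the column names are taken from the UPDATE below) is enough to see the two batches in fetchlogs:

SELECT fetchlogs_id, records, inserted, completed_datetime
FROM fetchlogs
ORDER BY completed_datetime DESC
LIMIT 20;

The stats themselves are recorded by the following query after a file is ingested: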
-- Roll up, per fetchlog, the records present in the file vs. the records actually inserted
WITH inserted AS (
SELECT m.fetchlogs_id
, COUNT(m.*) as n_records
, COUNT(t.*) as n_inserted
, MIN(m.datetime) as fr_datetime
, MAX(m.datetime) as lr_datetime
, MIN(t.datetime) as fi_datetime
, MAX(t.datetime) as li_datetime
FROM tempfetchdata m
-- the join matches only on sensor and timestamp, so inserted rows can be
-- credited to any fetchlog containing the same sensor/timestamp pair
LEFT JOIN temp_inserted_measurements t ON (t.sensors_id = m.sensors_id AND t.datetime = m.datetime)
GROUP BY m.fetchlogs_id)
UPDATE fetchlogs
SET completed_datetime = CURRENT_TIMESTAMP
, inserted = COALESCE(n_inserted, 0)
, records = COALESCE(n_records, 0)
, first_recorded_datetime = fr_datetime
, last_recorded_datetime = lr_datetime
, first_inserted_datetime = fi_datetime
, last_inserted_datetime = li_datetime
FROM inserted
WHERE inserted.fetchlogs_id = fetchlogs.fetchlogs_id;
We are joining on sensors_id and datetime and not accounting for the fetchlogs_id, so inserts from one file can be attributed to another file that contains the same sensor/timestamp pairs.
We can't add fetchlogs_id to the join because we are currently not adding that field to temp_inserted_measurements. We will have to make that change before we can close this one; a sketch of the corrected query follows.
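As a minimal sketch, assuming temp_inserted_measurements has been extended with a fetchlogs_id column (hypothetical; that column does not exist yet), the rollup would key the join on the fetchlog as well:

-- Sketch only: assumes a hypothetical fetchlogs_id column on temp_inserted_measurements
WITH inserted AS (
SELECT m.fetchlogs_id
, COUNT(m.*) as n_records
, COUNT(t.*) as n_inserted
FROM tempfetchdata m
LEFT JOIN temp_inserted_measurements t
  ON (t.fetchlogs_id = m.fetchlogs_id -- hypothetical new column
  AND t.sensors_id = m.sensors_id
  AND t.datetime = m.datetime)
GROUP BY m.fetchlogs_id)
UPDATE fetchlogs
SET inserted = COALESCE(n_inserted, 0)
, records = COALESCE(n_records, 0)
FROM inserted
WHERE inserted.fetchlogs_id = fetchlogs.fetchlogs_id;

With the fetchlog in the join, a re-ingested file could no longer be credited with rows inserted by an earlier batch, so the second batch would report 0 records inserted as expected.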