While attempting to return unaveraged LST-binned data files, I noticed that the number of data points contributing to each time bin is irregular.
I'm working on the 18 nights of H1C_IDR2.2, with the only configuration change being `ntimes_per_file = 30` in the `lstbin_grp1of1.toml` file.
If I turn on `return_no_avg=True` and pickle the final data container, as I do here, I don't get the expected 2 (time integrations) x 18 (days) = 36 data points for each time bin.
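For context, here is a minimal sketch of the kind of call that produces and pickles the unaveraged container. This is not my exact pipeline invocation (that's the one linked above): the file paths are placeholders, I'm assuming `lst_bin` takes the per-night data and LST lists as its first two arguments, and the exact return values with `return_no_avg=True` depend on the hera_cal version.

```python
import pickle
from hera_cal import io, lstbin

# Placeholder: one sublist of uvh5 files per night (18 nights in my case).
data_files = [['night1/zen.xx.HH.uvh5'], ['night2/zen.xx.HH.uvh5']]

data_list, lst_list = [], []
for night in data_files:
    hd = io.HERAData(night)
    data, flags, nsamples = hd.read()  # DataContainers keyed by (ant1, ant2, pol)
    data_list.append(data)
    lst_list.append(hd.lsts)

# With return_no_avg=True, lst_bin returns the stacked per-night samples
# in each LST bin instead of the averaged products.
lst_bins, data_bin, flag_bin = lstbin.lst_bin(data_list, lst_list,
                                              return_no_avg=True)

with open('zen.grp1.of1.LST.no_avg.pkl', 'wb') as f:
    pickle.dump(data_bin, f)
```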
Inspecting the unaveraged data file:

```python
import pickle

with open('/lustre/aoc/projects/hera/mmolnar/LST_bin/binned_files/no_avg/zen.grp1.of1.LST.1.40949.HH.OCRSLU.uvh5.pkl', 'rb') as f:
    no_avg_dc = pickle.load(f)

# check a sample baseline: count the samples that fall in each LST bin
for count, i in enumerate(no_avg_dc[(12, 13, 'ee')]):
    print(count, len(i))
```

This prints 36 for the first 12 time bins and 37 for the remaining ones, rather than a uniform 36.
Changing this line to `bin_count.append(np.sum(np.ones_like(d, dtype=bool) * n_c, axis=0))` further confirms this: the bin counts obtained there are also 36 and 37.
I also tried to hack the `lst_bin` function to return unaveraged data without using `return_no_avg=True`, by appending the unaveraged data to lists and building arrays out of them - see from this line here (specifically the `real_unavg` and `d_unavg` variables). This still returned an inconsistent number of data points (36 for the first 12 indices and 37 for the others).
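To illustrate what that hack does, here is a toy version with made-up arrays standing in for `lst_bin`'s internal per-bin data (the real variables live inside the function); the pattern is just collecting the raw stacks instead of averaging them:

```python
import numpy as np

# Toy stand-in for the stacked samples in three LST bins: the middle
# bin has a spurious 37th row (36 = 2 integrations x 18 nights).
bins = [np.zeros((36, 1024), dtype=complex),
        np.zeros((37, 1024), dtype=complex),
        np.zeros((36, 1024), dtype=complex)]

# The hack: append each bin's unaveraged samples (real part and full
# complex data, as in the real_unavg / d_unavg variables) to lists.
real_unavg, d_unavg = [], []
for d in bins:
    real_unavg.append(d.real)
    d_unavg.append(d)

# The inconsistent per-bin row counts show up immediately.
print([d.shape[0] for d in d_unavg])  # [36, 37, 36]
```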
In summary:

- `d` in the `lst_bin` function does not have a regular number of data points for each time bin.
- `n` somehow zeroes out the extra row of data, so that when the averaging is done (e.g. here, `real_avg.append(np.sum(d.real * n, axis=0) / norm)`), 2 x no_nights points are indeed considered - which is why this effect is not easily caught. A toy demonstration of this masking is sketched below.
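To make the second point concrete, here is a toy numpy sketch (not hera_cal code) of the masking effect: if the spurious extra row carries an `n` weight of zero, the quoted averaging line produces exactly the same result as averaging the 36 genuine rows, so the averaged products look correct even though the underlying stack is ragged.

```python
import numpy as np

# One LST bin with 37 stacked spectra, of which only 36 are genuine
# (Nfreqs = 4 for brevity).
rng = np.random.default_rng(0)
d = rng.normal(size=(37, 4)) + 1j * rng.normal(size=(37, 4))

# nsample weights: 1 for the 36 genuine rows, 0 for the extra row.
n = np.ones((37, 4))
n[-1] = 0.0

# The averaging step quoted above.
norm = np.sum(n, axis=0)
real_avg = np.sum(d.real * n, axis=0) / norm

# Identical to averaging only the 36 genuine rows ...
print(np.allclose(real_avg, d.real[:-1].mean(axis=0)))  # True

# ... but counting rows directly exposes the ragged stack.
print(d.shape[0])  # 37 instead of the expected 36
```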