pandas' read_csv does not properly read a string that contains these unicode control characters; the string is discarded instead.
These characters should either be stripped on the Dune side, or maybe there is a cleaner way than my current hacky workaround:
import pandas as pd

# dune is an already-configured Dune client, query is the query being fetched
raw = dune.run_query_csv(query).data
with open('dirty.tmp', 'wb') as f:
    f.write(raw.getbuffer())
with open('clean.tmp', 'w') as f:
    buffer = open('dirty.tmp').read()
    for sanitise in ['\x00', '\x1a', '\x1b', '\x1c', '\x1d', '\x1e', '\x1f']:
        buffer = buffer.replace(sanitise, '')
    f.write(buffer)
df = pd.read_csv('clean.tmp')
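
A slightly cleaner variant of the same idea, just as a sketch (assuming raw is the BytesIO that run_query_csv(query).data returns, as above), would be to do the stripping in memory and skip the temporary files:

import io
import re

import pandas as pd

# dune / query as in the workaround above
raw = dune.run_query_csv(query).data
# decode the raw CSV bytes, drop the offending control characters, then parse
text = raw.getvalue().decode('utf-8')
text = re.sub(r'[\x00\x1a-\x1f]', '', text)
df = pd.read_csv(io.StringIO(text))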
For clarity, the data point in question is a result of:
FROM_UTF8(VARBINARY_LTRIM(data))
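
If the stripping were done on the Dune side instead, one option (only a sketch, assuming DuneSQL exposes Trino's regexp_replace) would be to wrap the expression so the same control characters are removed in the query itself:
regexp_replace(FROM_UTF8(VARBINARY_LTRIM(data)), '[\x00\x1a-\x1f]', '')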