DataFrame.parse() performance issue for wide DF #849
Labels: performance
We noticed this issue especially when parsing DataFrames with lots of `String` columns, such as a wide CSV file. If you run `DataFrame.parse()`, each column is parsed one at a time. If a column has type `String`, then `tryParse` goes over each parser in `Parsers`, and if any value of the column cannot be parsed, it tries the next parser. The parsers are ordered like Int -> Long -> Instant -> LocalDateTime -> ... -> Json -> String, which means that for "normal" string columns that need no parsing, at least one cell in the column is attempted to be parsed 17 times. Many of these attempts go through `catchSilent {}` blocks, which catch any exception thrown and return `null` if they do. And exceptions are heavy: https://www.baeldung.com/java-exceptions-performance

This is easily measurable by creating two wide DataFrames, one with columns that can be parsed to Ints and another with columns that cannot be parsed and must remain Strings.
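For example, a minimal sketch of such a benchmark (not the original measurement code; the column/row counts are arbitrary, and building the frames via `toColumn`/`toDataFrame` is just one convenient option):

```kotlin
import org.jetbrains.kotlinx.dataframe.api.parse
import org.jetbrains.kotlinx.dataframe.api.toColumn
import org.jetbrains.kotlinx.dataframe.api.toDataFrame
import kotlin.system.measureTimeMillis

fun main() {
    val cols = 1_000
    val rows = 100

    // Wide DF whose String cells are all valid Ints: the Int parser
    // succeeds on the first attempt for every column.
    val intLike = (0 until cols)
        .map { c -> List(rows) { r -> r.toString() }.toColumn("col$c") }
        .toDataFrame()

    // Wide DF whose String cells match none of the parsers: every column
    // falls through the whole parser chain before ending up as String again.
    val stringOnly = (0 until cols)
        .map { c -> List(rows) { r -> "value_$r" }.toColumn("col$c") }
        .toDataFrame()

    println("int-like columns:    ${measureTimeMillis { intLike.parse() }} ms")
    println("string-only columns: ${measureTimeMillis { stringOnly.parse() }} ms")
}
```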
We can see that parsing the wide String DF takes considerably more time.
And this is mostly due to `Instant.parse`, `toLocalDateTimeOrNull`, etc., and, most importantly, all the `fillInStackTrace` calls at the top of the graph, a.k.a. the exceptions thrown by the parsers. We might be able to improve this :)

Looking at the parsers, there are some interesting observations and possible solutions:
- Kotlin's `Instant.parse()` is a lot slower than Java's. We should use Java's and `.toKotlinInstant()` it (a sketch follows below this list).
- We have parsers for both Java's `LocalDateTime` and Kotlin's. If a String can be parsed as a date time, it will pick Kotlin's every time. If it cannot, it will fail both the Kotlin and the Java one, creating a useless exception. We should drop the Java duplicates.
- `toIntOrNull` and `toLongOrNull` are so fast, their time isn't even shown. If a library offers a `canParse()` function, we should use it (see the second sketch below the list).
function, we should use it.convert to
which is built onreplace with
, so that's where the parallelization should occur. Relevant issue: Parallel computations #723The text was updated successfully, but these errors were encountered:
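For the first point, this is roughly what parsing through `java.time` and converting back could look like (the helper name is hypothetical, and whether `java.time.Instant.parse` accepts exactly the same input formats as the kotlinx-datetime parser would still need checking):

```kotlin
import kotlinx.datetime.Instant
import kotlinx.datetime.toKotlinInstant
import java.time.format.DateTimeParseException

// Hypothetical helper: parse with java.time and only convert to
// kotlinx-datetime's Instant once parsing has succeeded.
fun parseInstantViaJavaOrNull(text: String): Instant? =
    try {
        java.time.Instant.parse(text).toKotlinInstant()
    } catch (e: DateTimeParseException) {
        null
    }
```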
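And to illustrate why `toIntOrNull`/`toLongOrNull` barely register in the profile while the date/time parsers dominate: the former never construct a Throwable on a miss, whereas a try/catch-based parser pays for `fillInStackTrace` on every failed attempt (the helper names below are hypothetical):

```kotlin
import java.time.LocalDateTime
import java.time.format.DateTimeParseException

// Exception-based attempt: every miss constructs a Throwable, and filling in
// its stack trace is what shows up at the top of the profile.
fun toLocalDateTimeOrNullViaCatch(text: String): LocalDateTime? =
    try {
        LocalDateTime.parse(text)
    } catch (e: DateTimeParseException) {
        null
    }

// Exception-free attempt: a miss simply returns null; no Throwable is created.
fun toIntOrNullCheap(text: String): Int? = text.toIntOrNull()
```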