FIX-#6594: fix usage of Modin objects inside UDFs for apply (#6673)
Changes from all commits:
96cc494, 50dde2a, d223793, 36bffbf, 1cecc04, 04d24f9, 69943e3, fb26aa8, 3cb4b96
@@ -27,7 +27,13 @@
 import pandas
 from pandas._libs.lib import no_default

-from modin.config import BenchmarkMode, Engine, NPartitions, ProgressBar
+from modin.config import (
+    BenchmarkMode,
+    Engine,
+    NPartitions,
+    PersistentPickle,
+    ProgressBar,
+)
 from modin.core.dataframe.pandas.utils import concatenate
 from modin.core.storage_formats.pandas.utils import compute_chunksize
 from modin.error_message import ErrorMessage
@@ -121,7 +127,20 @@ def preprocess_func(cls, map_func):
         `map_func` if the `apply` method of the `PandasDataframePartition` object
         you are using does not require any modification to a given function.
         """
-        return cls._partition_class.preprocess_func(map_func)
+        old_value = PersistentPickle.get()
+        # When performing a function with Modin objects, it is more profitable to
+        # do the conversion to pandas once on the main process than several times
+        # on worker processes. Details: https://github.com/modin-project/modin/pull/6673/files#r1391086755
+        # For Dask, otherwise there may be an error: `coroutine 'Client._gather' was never awaited`
+        need_update = not PersistentPickle.get() and Engine.get() != "Dask"
+        if need_update:
+            PersistentPickle.put(True)
+        try:
+            result = cls._partition_class.preprocess_func(map_func)
+        finally:
+            if need_update:
+                PersistentPickle.put(old_value)
+        return result

     # END Abstract Methods

Review thread on `need_update`:
- @dchigarev I suppose we can leave the current implementation, but for those cases where [...]
- You mean [...]?
- My take is that we should either drop the [...]
- Yes. In a situation like this (it seems to me that this case is almost never seen; if you think so, then I'll delete [...]).
- On second thought, maybe it makes sense to keep [...].
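The save/restore pattern in `preprocess_func` above can be sketched in isolation. This is a minimal toy model, not Modin's implementation: `BoolFlag` is a hypothetical stand-in for Modin's `PersistentPickle`/`Engine` config objects, and the "serialization" step is faked with `repr`.

```python
# Hypothetical stand-in for a modin.config option with get()/put() accessors.
class BoolFlag:
    def __init__(self, value=False):
        self._value = value

    def get(self):
        return self._value

    def put(self, value):
        self._value = value


PersistentPickle = BoolFlag(False)


def preprocess_func(map_func, engine="Ray"):
    old_value = PersistentPickle.get()
    # Temporarily force persistent (pandas-based) pickling while the UDF is
    # serialized, unless it is already enabled or the engine is Dask.
    need_update = not old_value and engine != "Dask"
    if need_update:
        PersistentPickle.put(True)
    try:
        # Stand-in for cls._partition_class.preprocess_func(map_func).
        result = repr(map_func)
    finally:
        # The try/finally guarantees the flag is restored even if
        # serialization raises.
        if need_update:
            PersistentPickle.put(old_value)
    return result
```

The try/finally is the important part of the design: the config flag is process-global, so failing to restore it would silently change pickling behavior for all later operations.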
@@ -18,6 +18,7 @@
 import datetime
 import functools
 import itertools
+import os
 import re
 import sys
 import warnings
@@ -3107,7 +3108,7 @@ def _getitem(self, key):

     # Persistance support methods - BEGIN
     @classmethod
-    def _inflate_light(cls, query_compiler):
+    def _inflate_light(cls, query_compiler, source_pid):
         """
         Re-creates the object from previously-serialized lightweight representation.
@@ -3117,35 +3118,53 @@ def _inflate_light(cls, query_compiler):
         ----------
         query_compiler : BaseQueryCompiler
             Query compiler to use for object re-creation.
+        source_pid : int
+            Determines whether a Modin or pandas object needs to be created.
+            Modin objects are created only on the main process.

         Returns
         -------
         DataFrame
             New ``DataFrame`` based on the `query_compiler`.
         """
+        if os.getpid() != source_pid:
+            return query_compiler.to_pandas()
+        # The current logic does not involve creating Modin objects
+        # and manipulation with them in worker processes
         return cls(query_compiler=query_compiler)

Review thread on the `os.getpid()` check:
- Correct me if I'm wrong, but my understanding is that we use [...]
- This is the only question that bothers me; otherwise, the PR looks good.
- It may be more expensive, but this method allows calculations to run asynchronously, which mitigates this problem (partially), given that worker processes tend to be under-loaded. On the other hand, the memory consumption will be much greater; I believe you are right and we need to make the call in the main process.
|
||
@classmethod | ||
def _inflate_full(cls, pandas_df): | ||
def _inflate_full(cls, pandas_df, source_pid): | ||
""" | ||
Re-creates the object from previously-serialized disk-storable representation. | ||
|
||
Parameters | ||
---------- | ||
pandas_df : pandas.DataFrame | ||
Data to use for object re-creation. | ||
source_pid : int | ||
Determines whether a Modin or pandas object needs to be created. | ||
Modin objects are created only on the main process. | ||
|
||
Returns | ||
------- | ||
DataFrame | ||
New ``DataFrame`` based on the `pandas_df`. | ||
""" | ||
if os.getpid() != source_pid: | ||
return pandas_df | ||
# The current logic does not involve creating Modin objects | ||
# and manipulation with them in worker processes | ||
return cls(data=from_pandas(pandas_df)) | ||
|
||
def __reduce__(self): | ||
self._query_compiler.finalize() | ||
if PersistentPickle.get(): | ||
return self._inflate_full, (self._to_pandas(),) | ||
return self._inflate_light, (self._query_compiler,) | ||
pid = os.getpid() | ||
if ( | ||
PersistentPickle.get() | ||
or not self._query_compiler.support_materialization_in_worker_process() | ||
): | ||
return self._inflate_full, (self._to_pandas(), pid) | ||
return self._inflate_light, (self._query_compiler, pid) | ||
|
||
# Persistance support methods - END |
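The PID-gated `__reduce__` scheme above can be shown in miniature. This is a toy model, not Modin's code: `Wrapper` stands in for `DataFrame`, a plain list stands in for the query compiler, and a single hypothetical `_inflate` plays the role of both `_inflate_light` and `_inflate_full`. The key idea is that the pickling side records `os.getpid()`, and the unpickling side compares it: same process gets the wrapper back, a different process gets the raw payload.

```python
import os
import pickle


class Wrapper:
    """Toy stand-in for a Modin DataFrame."""

    def __init__(self, data):
        self.data = list(data)

    @classmethod
    def _inflate(cls, data, source_pid):
        # Unpickled in a different process (a worker): return the raw
        # payload instead of re-creating the wrapper there.
        if os.getpid() != source_pid:
            return data
        # Unpickled in the originating (main) process: rebuild the wrapper.
        return cls(data)

    def __reduce__(self):
        # Embed the current pid in the pickle payload so _inflate can tell
        # where deserialization is happening.
        return self._inflate, (self.data, os.getpid())


w = Wrapper([1, 2, 3])

# Round-trip in the same process: the wrapper is reconstructed.
same_proc = pickle.loads(pickle.dumps(w))
assert isinstance(same_proc, Wrapper)

# Simulate a worker process by passing a pid that cannot match ours.
raw = Wrapper._inflate([1, 2, 3], source_pid=-1)
assert raw == [1, 2, 3]
```

Returning a `(callable, args)` pair from `__reduce__` is the standard pickle extension point; here the callable itself inspects the environment at load time, which is what lets one serialized payload inflate differently in the main process and in workers.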
Review thread on worker-side behavior:
- Why would we ever want to recreate Modin objects on a worker? Wouldn't it make applying any method to them super slow anyway?
- The main goal is to avoid materializing the dataframe in the main process and to transfer this operation to the worker process. The only operation needed for this is taking the object by reference (a `get` operation) to perform the conversion to pandas, and then performing any operations on the pandas object.