
PythonObjectSerializer raises 'utf-8' codec can't decode byte #639

Open
MASARIwot opened this issue Jul 18, 2023 · 2 comments


MASARIwot commented Jul 18, 2023

Good day, team.

When I tried to cache API responses, it sometimes failed with a UnicodeDecodeError.

Looking deeper, I found that this happens because of the line `out.write_string(cPickle.dumps(obj, 0).decode("utf-8"))` in PythonObjectSerializer.
Full code:

class PythonObjectSerializer(BaseSerializer):
    def read(self, inp):
        str = inp.read_string().encode()
        return cPickle.loads(str)

    def write(self, out, obj):
        out.write_string(cPickle.dumps(obj, 0).decode("utf-8"))

    def get_type_id(self):
        return PYTHON_TYPE_PICKLE

Issue example:

>>> import pickle
>>> pickle.dumps("\u00e4").decode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte

As a workaround, I created a custom serializer:

class HazelcastJsonSerializer(StreamSerializer):
    def read(self, inp):
        return json.loads(inp.read_string())

    def write(self, out, obj):
        out.write_string(json.dumps(obj))

    def get_type_id(self):
        …
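
A serializer like this also has to be registered when the client is created; roughly something like the following (a sketch only; I'm assuming the global_serializer config option, which replaces the default pickle fallback and takes the serializer class):

import hazelcast

# Sketch: register the custom serializer as the fallback for types that
# have no other serializer (replacing PythonObjectSerializer).
client = hazelcast.HazelcastClient(
    global_serializer=HazelcastJsonSerializer,
)

cache = client.get_map("api-responses").blocking()
cache.put("some-key", {"name": "ä"})  # a dict should fall through to the global serializer
print(cache.get("some-key"))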

Is there any better solution?

Python version: 3.6

@mehmettokgoz
Contributor

Hi @MASARIwot

You can use HazelcastJsonValue to store JSON values in a Hazelcast IMap. No need for a custom serializer for this. Check out this example: https://github.com/hazelcast/hazelcast-python-client/blob/master/examples/pandas/pandas_example.py.
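
A minimal sketch of that approach (assuming a running cluster; the map and key names here are made up):

import json
import hazelcast
from hazelcast.core import HazelcastJsonValue

client = hazelcast.HazelcastClient()
responses = client.get_map("api-responses").blocking()

# Wrap the JSON string so it is stored as JSON instead of being pickled.
responses.put("user-42", HazelcastJsonValue(json.dumps({"name": "Zoë"})))

# Read it back; the value comes back as a HazelcastJsonValue.
cached = responses.get("user-42")
print(json.loads(cached.to_string()))

client.shutdown()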

If the fields of your object are known and stable, then it's better to use Compact serialization, which is our newest serialization mechanism.

I believe the following docs can help:

https://hazelcast.com/blog/introduction-to-compact-serialization/

https://github.com/hazelcast/hazelcast-python-client/blob/master/examples/serialization/compact_serialization_example.py
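
For a response object with known fields, a Compact serializer would look roughly like this (a sketch along the lines of the example above; the class and field names are made up):

import hazelcast
from hazelcast.serialization.api import CompactSerializer, CompactReader, CompactWriter


class ApiResponse:
    def __init__(self, status, body):
        self.status = status
        self.body = body


class ApiResponseSerializer(CompactSerializer):
    def read(self, reader: CompactReader):
        # Field names must match the ones used in write().
        return ApiResponse(reader.read_int32("status"), reader.read_string("body"))

    def write(self, writer: CompactWriter, obj):
        writer.write_int32("status", obj.status)
        writer.write_string("body", obj.body)

    def get_class(self):
        return ApiResponse

    def get_type_name(self):
        return "ApiResponse"


# Register the serializer when building the client.
client = hazelcast.HazelcastClient(compact_serializers=[ApiResponseSerializer()])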

@alexjironkin

The issue here is that the default serialiser can't handle special characters, e.g. ä (\u00e4), and PythonObjectSerializer raises a UnicodeDecodeError. The workaround is to use json.dumps for serialisation, which does handle it, but it is merely a workaround, and it's the same thing the examples do (df.to_json()).

I get that using pickle.dumps means you can pickle any Python object (one that may not be JSON serialisable); if that is the intention, please add support for non-English characters.
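
For what it's worth, one way to add that support without changing the wire format to JSON would be to skip the UTF-8 round trip and write the pickled payload as raw bytes. A rough sketch (the type id and the byte-array read/write calls are assumptions on my part, not verified against this codebase):

import pickle
from hazelcast.serialization.api import StreamSerializer


class PickleBytesSerializer(StreamSerializer):
    def read(self, inp):
        # Unpickle straight from the raw bytes; no decoding step involved.
        return pickle.loads(bytes(inp.read_byte_array()))

    def write(self, out, obj):
        # Write the pickled payload as bytes instead of decoding it as UTF-8.
        out.write_byte_array(bytearray(pickle.dumps(obj)))

    def get_type_id(self):
        return 10001  # hypothetical id; must not clash with other serializers

    def destroy(self):
        pass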

In general, using pickle for serialisation is a bad idea. For example, see the data protocols (here): Python versions need to match, and there are no guarantees of backwards compatibility. Moreover, as the pickle docs say right at the top, "The pickle module is not secure. Only unpickle data you trust." Clients can clearly deserialise data stored by other clients in Hazelcast, which makes this a path for malicious code execution.
