Switch from pickled blobs to JSON data #1786

Open · wants to merge 74 commits into base: master
Conversation

@dsblank (Member) commented Oct 10, 2024

This PR converts the database interface to use JSON data rather than the pickled blobs used since the early days.

  1. Uses a new abstraction in the database: db.serializer
    a. abstracts data column name
    b. contains serialize/unserialize functions
  2. Updates database format to 21
  3. The conversion from 20 to 21 reads pickled blobs, and writes JSON data.
    a. It does this by switching between serializers
  4. New databases do not contain pickled blobs
  5. Converted databases contain both the blob and JSON fields
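
The serializer abstraction in step 1 could be sketched roughly like this. The class and column names (BlobSerializer, JSONSerializer, blob_data, json_data) are illustrative, not the actual Gramps code:

```python
import json
import pickle


class BlobSerializer:
    """Legacy serializer: pickled tuples stored in the blob column."""

    data_field = "blob_data"  # abstracted data column name

    @staticmethod
    def serialize(obj):
        return pickle.dumps(obj)

    @staticmethod
    def unserialize(data):
        return pickle.loads(data)


class JSONSerializer:
    """New serializer: JSON documents stored in the JSON column."""

    data_field = "json_data"

    @staticmethod
    def serialize(obj):
        return json.dumps(obj)

    @staticmethod
    def unserialize(data):
        return json.loads(data)
```

Under this scheme, the 20-to-21 upgrade would read each record with the blob serializer and write it back with the JSON one.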

@Nick-Hall (Member)

If we are moving from BLOBs to JSON then we should really use the new format. See PR #800.

The new format uses the to_json and from_json methods in the serialize module to build the JSON from the underlying classes. It comes with get_schema class methods which provide a JSON Schema, allowing the validation that we already use in our unit tests.

The main benefit of the new format is that it is easier to maintain and debug. Instead of lists we use dictionaries, so, for example, we refer to the field "parent_family_list" instead of field number 9.
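
A minimal illustration of the difference (a real Person object carries many more fields than shown here):

```python
# Old positional format: the meaning of each slot depends on field order.
person_old = ("I0001", "handle42", ["F0001"])

# New dictionary format: self-describing field names.
person_new = {
    "gramps_id": "I0001",
    "handle": "handle42",
    "parent_family_list": ["F0001"],
}

# Instead of the opaque person_old[2], we can write:
families = person_new["parent_family_list"]
```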

Upgrades are no problem. We just read and write the raw data.

When I have more time I'll update you on the discussion that took place whilst you were away.

@dsblank (Member, Author) commented Oct 11, 2024

Oh, that sounds like a great idea! I'll take a look at the JSON format and switch to that. It should work even better with SQL's JSON_EXTRACT().
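
For example, with SQLite's built-in JSON functions a field can be queried directly in SQL, instead of fetching and unpickling every row in Python. The table and column names below are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (handle TEXT PRIMARY KEY, json_data TEXT)")
con.execute(
    "INSERT INTO person VALUES (?, ?)",
    ("h1", '{"gramps_id": "I0001", "primary_name": {"surname": "Smith"}}'),
)

# Extract a nested field in SQL, no deserialization in Python needed.
row = con.execute(
    "SELECT json_extract(json_data, '$.primary_name.surname') FROM person"
).fetchone()
print(row[0])  # Smith
```

This requires an SQLite build with the JSON functions, which modern Python distributions include.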

@Nick-Hall (Member)

There are a few places where the new format is used, so we will get some bonus performance improvements.

Feel free to make changes to my existing code if you see a benefit.

You may also want to have a quick look at how we serialize GrampsType. Enough information is stored so that we can recreate the object, but I don't think that I chose to store all fields.

@dsblank (Member, Author) commented Oct 12, 2024

Making some progress. It turns out the serialized format had leaked into many other places, probably for speed. These are good candidates for moving into business logic.

@dsblank (Member, Author) commented Oct 13, 2024

I added a to_dict() and from_dict() based on the to_json() and from_json(). I didn't know about the object hooks. Brilliant! That saves so much code.
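
The object-hook mechanism referred to here is the object_hook parameter of json.loads, paired with the default parameter of json.dumps. A minimal sketch, using an illustrative Surname class and registry rather than the real Gramps ones:

```python
import json


class Surname:
    def __init__(self, surname=""):
        self.surname = surname


# Illustrative registry mapping a type tag to its class.
CLASS_MAP = {"Surname": Surname}


def default(obj):
    """Called by json.dumps for non-JSON types; emits a tagged dict."""
    return {"_class": obj.__class__.__name__, **obj.__dict__}


def object_hook(dct):
    """Called by json.loads for every JSON object; rebuilds instances."""
    cls = CLASS_MAP.get(dct.pop("_class", None))
    if cls is None:
        return dct  # plain dict, leave as-is
    obj = cls.__new__(cls)
    obj.__dict__.update(dct)
    return obj


data = json.dumps(Surname("Smith"), default=default)
obj = json.loads(data, object_hook=object_hook)
print(obj.surname)  # Smith
```

Because the hooks run recursively on every nested object, a single pair of functions can rebuild a whole object tree, which is the code saving mentioned above.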

@dsblank (Member, Author) commented Oct 13, 2024

@Nick-Hall , I will probably need your assistance regarding the complete save/load of the to_json and from_json functions. I looked at your PR but as it touches 590 files, there is a lot there.

In this PR, I can now upgrade a database, and load the people views (except for name functions which I have to figure out).

[screenshot attached]

@Nick-Hall (Member)

@dsblank I have rebased PR #800 on the gramps51 branch. Only 25 files were actually changed.

You can also see the changes suggested by @prculley resulting from his testing and performance benchmarks.

@dsblank (Member, Author) commented Oct 13, 2024

Thanks @Nick-Hall, that was very useful. I think that I will cherry pick some of the changes (like attribute name changes, elimination of private attributes).

You'll see that I made many of the same changes you did. But one thing I found is that if we want to allow upgrades from previous versions, then we need to be able to read in blob_data and write out json_data. I think my version has that covered.

I'll continue to make progress.

@Nick-Hall (Member)

@dsblank Why are you removing the properties? The validation in the setters will no longer be called.

@dsblank (Member, Author) commented Oct 14, 2024

@Nick-Hall, I thought that was what @prculley did for optimization, and I thought it was needed. I can put those back :)

@Nick-Hall (Member)

Perhaps we could consider a solution similar to that provided by the pickle __getstate__ and __setstate__ methods.

A get_state method in a base class could return a dictionary of public attributes by default. This could be overridden to add properties if required.

A set_state method could write the values back. In the case of properties we could just set the corresponding private variable rather than calling the setter. The list-to-tuple conversion could also be done in this method.

I expect that only a handful of classes would need to override the default methods.
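
A rough sketch of this proposal, with illustrative class names (the real Gramps base class and validation logic would differ):

```python
class BaseObject:
    def get_state(self):
        """Default: a dictionary of the public attributes."""
        return {k: v for k, v in self.__dict__.items()
                if not k.startswith("_")}

    def set_state(self, state):
        """Default: write the values straight back."""
        self.__dict__.update(state)


class Name(BaseObject):
    """A class with a property overrides the defaults."""

    def __init__(self):
        self._surname = ""

    @property
    def surname(self):
        return self._surname

    @surname.setter
    def surname(self, value):
        # validation would go here
        self._surname = value

    def get_state(self):
        state = super().get_state()
        state["surname"] = self._surname  # include the property
        return state

    def set_state(self, state):
        # bypass the setter: set the private variable directly
        self._surname = state.pop("surname", "")
        self.__dict__.update(state)
```

As noted, only classes with properties or other special attributes would need the overrides; everything else would inherit the defaults.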

@Nick-Hall (Member)

> For the COLUMN_ suggestions, I decided to go the other way, and remove all of them.

Yes, I agree that this is the best approach.

@stevenyoungs (Contributor)

> @stevenyoungs, I appreciate the review!
>
> If others disagree, they can make it consistent a different way in a follow-up PR.

I can make arguments for either approach. Consistency is of greater value. I can't imagine the strings will change frequently.

@dsblank (Member, Author) commented Nov 10, 2024

@Nick-Hall, the one thing I didn't convert was the metadata; it is still in a pickled format. I didn't even look at the format of metadata. I suspect that we'll want to remove pickle from there, too. That can be done in a follow-up PR, or here.

@Nick-Hall (Member)

Have a look at how I do it in the MongoDB backend. All the tables become collections of JSON documents. It's only really a proof of concept so feel free to use a different JSON structure.

Getting rid of all the pickled data is probably the way to go. I don't mind if you continue with this PR.

@dsblank (Member, Author) commented Nov 12, 2024

@Nick-Hall, thanks for the pointer to your MongoDB database implementation. I stole the basic framework.

This was a little tricky because I was using the version number to decide whether to use JSON or blobs, and the version is of course stored in the metadata. So I moved away from using the version number and instead added a probe to check whether the database supports JSON.
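
Such a probe could be as simple as checking the schema for the JSON column. A sketch assuming SQLite, with illustrative table and column names:

```python
import sqlite3


def uses_json(connection):
    """Probe the schema for the JSON column rather than trusting the
    stored version number (which itself lives in the pickled metadata)."""
    cur = connection.execute("PRAGMA table_info(person)")
    columns = {row[1] for row in cur.fetchall()}  # row[1] is the column name
    return "json_data" in columns


con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (handle TEXT, json_data TEXT)")
print(uses_json(con))  # True
```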

I think that is the last thing I'd want to add to this PR.

@dsblank dsblank self-assigned this Nov 14, 2024
4 participants