Evaluate cgltf as a replacement to tinygltf #31

Closed · pjcozzi opened this issue Oct 31, 2020 · 10 comments · Fixed by #106


pjcozzi commented Oct 31, 2020

https://github.com/jkuhlmann/cgltf

This is supposed to be faster, but extension handling is not as robust.

jtorresfabra (Contributor) commented

I used it (and replaced tinygltf with it as well), and it is indeed faster and less cluttered. About extensions: even if they are not explicitly supported, you can always access them via the extension members on objects and do the job yourself, and I did not have any problems. I'm pretty sure we could open PRs and get them merged if we needed to. Also, cgltf doesn't load external files such as images or buffers; again, you need to manage that yourself.
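
A minimal sketch of what "do the job yourself" can look like, assuming the generic extension fields that current cgltf exposes (cgltf_extension with a name and a raw JSON data string); the helper name is made up for illustration:

#include <cstring>
#include "cgltf.h"

// Return the raw JSON text of the named extension on a node, or nullptr if the
// node does not carry it. Parsing that JSON is still up to the caller.
const char* findExtensionJson(const cgltf_node* node, const char* extensionName) {
  for (cgltf_size i = 0; i < node->extensions_count; ++i) {
    if (std::strcmp(node->extensions[i].name, extensionName) == 0) {
      return node->extensions[i].data;
    }
  }
  return nullptr;
}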

cgltf uses jsmn for parsing JSON but leaves the door open to using your own JSON parsing library. Tinygltf uses nlohmann's json.hpp (and I've also read there is the possibility of using RapidJSON).

My take on the comparison is that cgltf is simpler, more minimal, and faster, but you need to do some of the work yourself. Tinygltf is not that tiny, but it still does its job pretty decently. Depending on how we use it and whether we care about the extra performance, we could go for cgltf.


emackey commented Nov 5, 2020

We're using it at AGI now too. @abwood led the conversion from tinygltf.


abwood commented Nov 5, 2020

Great assessment @jtorresfabra. The only other thing I'll add is that tinygltf handles Draco decompression during the load (with the right #defines specified), whereas with cgltf you'll need to decompress the buffers outside of the load. Not a big deal, just something to consider.

Regarding the extensions, there is an open issue about providing a better way to load extensions that are not directly supported by cgltf; see jkuhlmann/cgltf#125. I'm looking forward to seeing a better solution than how they are handled now. I'm not sure how open the cgltf project is to vendor extensions, like those that might be added for 3D Tiles Next or the super popular AGI_xxx extensions. At some point their code base will get out of control if they take on too many "fringe" extensions.

javagl (Contributor) commented Nov 9, 2020

There certainly are many important technical aspects to consider for such a transition: performance, extension support, convenience, available time, ...

Subjectively (or rather objectively, but with a very narrow focus on API design), it would be great if it did not matter for clients.

To put it harshly: iff the underlying glTF implementation is "visible" through the cesium-native API, then the implementation that is in use on the day this repo is made "public" will have to be used forever.

Of course, it can be changed at any time. The point is: users will not like such incompatible changes. It would be great if people could use cesium-native and simply not know whether tinygltf or cgltf is used under the hood.

Ultimately, this could only be achieved with ~"some sort of abstraction". Depending on how much of the actual glTF structure should be exposed to users of cesium-native, this may be more or less feasible (creating an abstraction for glTF as a whole would likely not make sense). One example of such an abstraction already exists: the GltfAccessor class offers accessor elements "like an array". Its size and operator[] could also be implemented on top of any other glTF implementation. (It also exposes some tinygltf elements, but these are largely unused - except for accessor.gltfAccessor().minValues/maxValues, and those could reasonably be added to GltfAccessor.)
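
To make that concrete, a rough sketch of such a backend-neutral accessor view follows; the class and member names are illustrative rather than the actual cesium-native GltfAccessor API, and it assumes the element bytes have already been located inside the right bufferView:

#include <cstddef>
#include <cstdint>
#include <cstring>

// A tiny "looks like an array" view over accessor elements. Nothing in the
// public surface reveals whether tinygltf, cgltf, or something else produced
// the underlying bytes.
template <typename T>
class AccessorView {
public:
  AccessorView(const std::uint8_t* data, std::size_t byteStride, std::size_t count)
      : _data(data), _byteStride(byteStride), _count(count) {}

  std::size_t size() const { return _count; }

  T operator[](std::size_t i) const {
    T element;
    std::memcpy(&element, _data + i * _byteStride, sizeof(T));
    return element;
  }

private:
  const std::uint8_t* _data; // first element inside the bufferView
  std::size_t _byteStride;   // distance between consecutive elements, in bytes
  std::size_t _count;        // accessor.count
};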

kring (Member) commented Dec 7, 2020

Well... I've started to use cgltf, and I have a fairly big problem with it. If it's just used to read a glTF, I guess it's fine. But if we want to load and then modify a glTF, it's terrible.

In the glTF spec, objects that reference other objects do so by index. For example, a bufferView has a buffer property which is the index of the buffer backing the bufferView. Rather than sticking with that design, the author of cgltf has inexplicably decided to use a pointer. The buffer property of cgltf_buffer_view is of type cgltf_buffer*. So when I add a new buffer to the glTF (which - by the way - has to be done manually because cgltf doesn't have any functions to help with this), all of those pointers are potentially invalidated and left dangling.

I'm writing this in case someone has any better ideas, but as far as I can see, the options are:

  • Write the really annoying code to update all the pointers when reallocating arrays of objects.
  • Fork cgltf and change it to reference objects by index instead of pointer.
  • Don't use cgltf.
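
As a rough sketch of the first option, assuming only buffer views need re-pointing (a real version would have to do the same for every other cross-reference) and using cgltf's public struct fields:

#include "cgltf.h"

// After data->buffers has been reallocated, every cgltf_buffer* captured before
// the reallocation dangles. Recover each index from the old base address and
// re-point the buffer views at the new array.
void repointBufferViews(cgltf_data* data, const cgltf_buffer* oldBuffers) {
  for (cgltf_size i = 0; i < data->buffer_views_count; ++i) {
    cgltf_buffer_view& view = data->buffer_views[i];
    if (view.buffer) {
      cgltf_size index = static_cast<cgltf_size>(view.buffer - oldBuffers);
      view.buffer = &data->buffers[index];
    }
  }
}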


abwood commented Dec 7, 2020

I agree that using cgltf outside of its designed usage is dicey/painful. I had to venture down this path to handle the Draco extension. In that case, I never modified the collection of buffers in cgltf_data; instead I kept the extra cgltf_buffers in my own data structure. cgltf_attributes already define an (unused) cgltf_buffer_view* (via the accessor), so I construct a cgltf_buffer_view (also stored in that separate data structure) and set it as the buffer view for the Draco attribute.
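
A condensed sketch of that side-structure approach; the container type and function name are made up for illustration, and only the cgltf field names (buffer size/data, buffer_view buffer/size, accessor buffer_view) come from the library:

#include <cstdint>
#include <memory>
#include <utility>
#include <vector>
#include "cgltf.h"

// Owns everything cgltf_data must never see: the decompressed bytes plus the
// cgltf_buffer / cgltf_buffer_view objects that describe them.
struct DecodedDracoStorage {
  std::vector<std::vector<std::uint8_t>> payloads;
  std::vector<std::unique_ptr<cgltf_buffer>> buffers;
  std::vector<std::unique_ptr<cgltf_buffer_view>> bufferViews;
};

// Point a Draco-compressed attribute's accessor at freshly decoded data without
// touching the arrays owned by cgltf_data.
void attachDecodedAttribute(DecodedDracoStorage& storage,
                            cgltf_accessor* accessor,
                            std::vector<std::uint8_t> decodedBytes) {
  storage.payloads.push_back(std::move(decodedBytes));
  std::vector<std::uint8_t>& bytes = storage.payloads.back();

  auto buffer = std::make_unique<cgltf_buffer>();    // value-initialized (zeroed)
  buffer->size = bytes.size();
  buffer->data = bytes.data();

  auto view = std::make_unique<cgltf_buffer_view>(); // value-initialized (zeroed)
  view->buffer = buffer.get();
  view->size = bytes.size();

  accessor->buffer_view = view.get();
  storage.buffers.push_back(std::move(buffer));
  storage.bufferViews.push_back(std::move(view));
}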

I'd love to see a better solution to this, but it was the best solution I could think of given our aggressive deadline on this port.

javagl (Contributor) commented Dec 7, 2020

That's tricky. It reads the data and stores the indices from the JSON, as in bufferView->buffer = indexFromJson. Then it goes through the whole structure in cgltf_fixup_pointers and calls CGLTF_PTRFIXUP for each element, converting these indices into actual pointers - basically as in

bufferView->buffer = &buffers[0] + (cgltf_size)bufferView->buffer;
//                                             ^ The former index...
//          ^ ... now becomes a pointer

Of course, one could introduce a cgltf_fixdown_pointers function and a CGLTF_PTRFIXDOWN macro to convert these pointers to indices, do the modifications, and use the existing functions to convert the indices back to pointers. One could also use a wrench to hammer a screw into a wall.
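
Purely as an illustration of that direction (no such function exists in cgltf today), a fixdown for buffer views might look roughly like this, with every other cross-reference needing the same treatment and the null/index convention matched to whatever the real fixup code expects:

#include "cgltf.h"

// Convert the buffer-view -> buffer pointers back into indices so the buffers
// array can be reallocated, then run the normal pointer fixup again afterwards.
static void cgltf_fixdown_buffer_views(cgltf_data* data) {
  for (cgltf_size i = 0; i < data->buffer_views_count; ++i) {
    cgltf_buffer* ptr = data->buffer_views[i].buffer;
    if (ptr) {
      cgltf_size index = (cgltf_size)(ptr - data->buffers);
      data->buffer_views[i].buffer = (cgltf_buffer*)index; // stash the index in the pointer slot
    }
  }
}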

As a specific instance of my usual sermon about the trade-off between performance and API design: The library may be fast*, but is obviously not tailored for easily creating or modifying assets.


*: I'm still looking forward to seeing something like https://github.com/miloyip/nativejson-benchmark#parsing-time for glTF, although the result will likely depend largely on the underlying JSON parser, and the parsing result (as in returning a read-only C struct vs. a sensible, modifiable in-memory representation) may be the more interesting part in practice.

kring (Member) commented Dec 20, 2020

I did some really quick and dirty performance measurement of tinygltf on my Dell XPS 15 (Core i7-9750H @ 2.60GHz, Windows 10, Visual Studio 2019 release build). tinygltf takes at most 2 ms (usually under a millisecond) to load a Cesium OSM Buildings tile. Melbourne tiles take longer due to Draco and textures, and max out at about 78 ms per tile. A very significant chunk of that 78 ms is spent loading images. If I stub out stbi_load_from_memory inside tinygltf and instead just produce a 256x256 image initialized to zeros, the max Melbourne tile time drops to 22 ms. I suspect most of the remainder is Draco decoding.

So based on that very rough measurement, I don't think we're going to gain very much by moving to a faster glTF loader. Simple meshes are plenty fast, and more complicated ones are slow in a way that wouldn't be improved by a faster glTF loader, assuming the faster loader still uses stb for image decoding and Draco for mesh decompression. In any case, downloading tiles over the network is probably the biggest bottleneck by far.

So I think I may switch tinygltf to use RapidJSON instead of nlohmann JSON, because that's an easy win. And I'll see about wrapping tinygltf in our own classes so that we're not locked into tinygltf for such an important part of cesium-native. But I don't think I'll invest the time to switch to cgltf or our-own-loader-based-on-simdjson or anything fancy like that at this stage.

jtorresfabra (Contributor) commented Dec 21, 2020

This is very interesting, and it opens the door to the next question. Sorry if this is off-topic, but I think it is somewhat related: is there a reason to go for Draco instead of meshoptimizer? Do we have any numbers on where Draco is better than meshoptimizer, like compression ratio, etc.?

As far as I understand, Draco is not really good in terms of vertex cache, overdraw optimization, vertex fetching, etc. (as the compression method used just destroys any optimization/cache ordering) or decoding times (https://gist.github.com/zeux/b48678a44944187c16f55ad6cca45e78 - though these times may not translate directly to our tile decoding times). Meshoptimizer comes with gltfpack (https://meshoptimizer.org/gltf/), which I think uses cgltf under the hood.

I guess Draco will give slightly better compression ratios, but I think it would be worth doing some benchmarking.

kring (Member) commented Dec 21, 2020

Is there a reason to go for Draco instead of Meshoptimizer?

Only that there's a lot of Draco content out there already. I believe the intention is to support meshoptimizer as well, and perhaps even deprecate Draco eventually (I'm less sure of that bit). But in the meantime we can't load the Melbourne tileset and lots of others without decoding Draco. See https://github.com/CesiumGS/3d-tiles-next/issues/27.
