CSUMB Capstone : Added google/draco compression natively in mbgrd2gltf #1456

Open · wants to merge 36 commits into base: master
Conversation

varadpoddar

@varadpoddar varadpoddar commented Apr 22, 2024

Adds : Draco compression to mbgrd2gltf
Solves : #1426

Tasks Performed :

  • Added google/draco library to the build process of mbgrd2gltf.
  • Added geometry compression pathway to the process.
  • Added correct gltf structuring post compression.
  • Added man page with html+pdf variation.
  • Native build process updated.
  • Updated tiny-gltf to the latest version.
  • Added additional quantization options for draco_encoder as part of the options parameters. [-qp: Position Quantization, -qn: Normal Quantization, -qt: TexCoord Quantization, -qc: Color Quantization]

varadpoddar and others added 28 commits February 27, 2024 14:37
@varadpoddar
Author

varadpoddar commented Apr 23, 2024

@MBARIMike : My first question is: why was the src/mbio/mbr_mr1aldeo.c file deleted in this commit?

The file presented cloning issues on Windows 11, both via the CLI and the GitHub Desktop application. The problem was isolated to that single empty file. We didn't intend to remove the file entirely; we overlooked putting it back in the repo. Please expect an updated commit to fix that by the end of the day.

As per our discussion, and as mentioned in #1453, the aforementioned src/mbio/mbr_mr1aldeo.c shall remain banished.

@MBARIMike
Contributor

MBARIMike commented Apr 26, 2024

Testing this PR - getting the new code:

➜  MB-System git:(master) git fetch origin pull/1456/head:capstone-spring2024
remote: Enumerating objects: 568, done.
remote: Counting objects: 100% (535/535), done.
remote: Compressing objects: 100% (341/341), done.
remote: Total 505 (delta 210), reused 456 (delta 164), pack-reused 0
Receiving objects: 100% (505/505), 695.03 KiB | 2.12 MiB/s, done.
Resolving deltas: 100% (210/210), completed with 23 local objects.
From github.com:dwcaress/MB-System
 * [new ref]             refs/pull/1456/head -> capstone-spring2024
➜  MB-System git:(master) git checkout capstone-spring2024
Switched to branch 'capstone-spring2024'

Building it:

➜  MB-System git:(capstone-spring2024) cd build
➜  build git:(capstone-spring2024) cmake ..
----------------------------------------------------------
MB-System
CMake Build System
...
➜  build git:(capstone-spring2024) make
[  1%] Built target mbgsf
[  1%] Built target dump_gsf
...
[ 75%] Building CXX object src/mbgrd2gltf/CMakeFiles/mbgrd2gltf.dir/draco/texture/texture_map.cc.o
[ 75%] Building CXX object src/mbgrd2gltf/CMakeFiles/mbgrd2gltf.dir/draco/texture/texture_transform.cc.o
[ 75%] Building CXX object src/mbgrd2gltf/CMakeFiles/mbgrd2gltf.dir/draco/texture/texture_utils.cc.o
[ 76%] Linking CXX executable mbgrd2gltf
[ 76%] Built target mbgrd2gltf
...
➜  build git:(capstone-spring2024) sudo make install

Testing man page:

➜  build git:(capstone-spring2024) man mbgrd2gltf
MBGRD2GLTF(1)                                                                                  MB-System User Commands                                                                                 MBGRD2GLTF(1)

NAME
       mbgrd2gltf - convert bathymetric grid data to GLTF format with optional settings, including Draco compression.


SYNOPSIS
       mbgrd2gltf <filepath> [-b | --binary] [-o | --output <output filepath>] [-e | --exaggeration <vertical exaggeration>] [-m | --max-size <max size>] [-c | --compression <compression ratio>] [-d | --draco]
       [-q | --quantization <quantization number>] [-h | --help]


DESCRIPTION
       mbgrd2gltf is a tool for converting grid data files (GRD) to Graphics Library Transmission Format (GLTF) files. It can write binary GLTF for a more compact file, exaggerate vertex altitudes to enhance
       topographic features, and cap the max-size of the GLTF so the output files don't exceed size constraints. Furthermore, optional Draco compression can reduce the output GLTF file size even further while
       maintaining good visual quality. With Draco, one can specify the quantization level, which affects the compression level of mesh vertices.


MB-SYSTEM AUTHORSHIP
       David W. Caress - Monterey Bay Aquarium Research Institute
       Dale N. Chayes - Center for Coastal and Ocean Mapping, University of New Hampshire
       Christian dos Santos Ferreira - MARUM, University of Bremen


OPTIONS
       <filepath>
              Specify the path to the input GMT GRD file.


       -b, --binary
              Generate the output in binary GLTF format. Binary GLTF is more compact and typically loads faster in applications.


       -o, --output
              Specify the path to the folder where the output GLTF file will be written.


       -e, --exaggeration
              Specify the vertical exaggeration factor as a decimal number, which multiplies the vertex altitudes. This can enhance the visibility of topographic features.


       -m, --max-size
              Specify the maximum size of the output buffer data in megabytes. This is useful for ensuring that the output files do not exceed certain size constraints. Actual size may vary based on compression
              settings.


       -c, --compression
              Specify the compression ratio as a decimal number, indicating the desired ratio of uncompressed to compressed size. This setting controls the general output file size.


       -d, --draco
              Enable Draco compression to further reduce the file size and improve loading times in 3D environments. Draco is an open-source library, created by Google, for compressing and decompressing 3D
              geometric meshes and point clouds.


       -q, --quantization
              Specify the quantization level for Draco compression. Quantization level affects the compression level of mesh vertices. Higher values increase compression but reduce precision, potentially
              affecting the visual quality of the terrain.


       -h, --help
              Display help message and exit. Provides more information on command usage and options.


EXAMPLES
       Convert a GMT GRD file to a GLTF file with default settings:
              mbgrd2gltf input.grd

       Convert a GRD file to binary GLTF with specified output folder and Draco compression at a quantization level of 10:
              mbgrd2gltf input.grd --binary --output /path/to/output --draco --quantization 10


SEE ALSO
       mbinfo(1), mbprocess(1), mblist(1)


BUGS
       Please report any bugs on the MB-System GitHub Issue Tracker.

MB-System version 5.8                                                                              17 April, 2024                                                                                      MBGRD2GLTF(1)

Sweet!

Testing execution:

➜  build git:(capstone-spring2024) ./src/mbgrd2gltf/mbgrd2gltf /Users/mccann/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd --binary --output /Users/mccann/Downloads/Monterey25.glb --draco --quantization 10
Failed to write GLTF file.

Oops! what did I do wrong?

@MBARIMike
Contributor

Omitting the --output /Users/mccann/Downloads/Monterey25.glb fixed the problem mentioned in #1456 (comment).

I was able to create a couple of draco compressed .glb files that can be viewed with X3DOM here:

https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco.html # Options --binary --draco
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_q10.html # Options --binary --draco -q 10

The file size reduction with --draco is great. Without it the .glb file is 287 MB, with it it's 7.1 MB. The file with -q 10 is 6.4 MB - a 45x reduction in file size!

I do see jaggy artifacts in the resulting model, especially on the flatter parts of the terrain, e.g.:

[Screenshot 2024-04-26 at 4 44 29 PM]

Is this reasonable for what we can expect with Draco compression?

We don't see these jaggies in the file created without --draco:

https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_lat_rev.html

@varadpoddar
Author

@MBARIMike : I'll be working on adding the other quantization variables to the program and to the GLTF structure. I wonder if modifying the NORMAL quantization variable will have some effect on the renderings on the flats. I did expect the jagged edges because of fidelity loss; however, it didn't seem as drastic in the GLTF viewers.

Stay tuned for updates.

@MBARIMike
Contributor

I wonder if the jaggies are being caused by round-off error or a lack of numerical precision. Are all the geometry computations done using double-precision floats until the final step?
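As a quick illustration of the concern (my own sketch, not the PR's code): at ECEF magnitudes of roughly 6.4e6 m, IEEE-754 single precision has a spacing of about half a metre between representable values, so dropping to float32 before the final step can by itself introduce decimetre-scale coordinate error:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a double through IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A coordinate at ECEF magnitude (hypothetical value near Earth's radius, in metres)
ecef_x = 6_378_137.0 + 123.456
error_m = abs(to_float32(ecef_x) - ecef_x)
# float32 spacing near 6.4e6 is 0.5 m, so this error can approach 0.25 m
print(f"single-precision error: {error_m:.3f} m")
```

If the mesh is built in double precision and only quantized at the Draco encoding step, an error of this size would not be the cause of the jaggies.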

@varadpoddar
Author

@MBARIMike : It's possible. I didn't want to change the base calculations; however, I'll make some changes and see if that helps. I added the other quantization variables, and that didn't help the roughness. Increasing the positional quantization to 20 yields a bigger, less compressed, and thus smoother mesh. I have attached a screenshot of the edge of the ~29 MB file.

[Screenshot 2024-04-26 at 11 25 39 PM]

In addition, this seems to be the expected behavior: using the default draco_transcoder, the ~3 MB .glb thus produced is similarly rough on the flats. I didn't apply exaggeration to the output shown in the following screenshot.

[Screenshot 2024-04-26 at 11 35 07 PM]

The following were the results of:
mbgrd2gltf Monterey25.grd -d -b -qp 20 → ~16 MB file.
draco_transcoder -i Monterey25.gltf -o Monterey25-draco.glb -qp 20 → ~12 MB file.

[Screenshot 2024-04-26 at 11 52 57 PM]

The files thus produced were visually very similar (no difference to the naked eye).
So I believe this is the expected result of Draco compression, and a possible solution for producing detailed meshes would be to increase the default positional quantization variable (-qp) to somewhere between 16 and 22.

@varadpoddar
Author

@StevenHPatrick Or @aretinoco,
I have updated the quantization options, and I was wondering if either one of you could kindly update the documentation to reflect the changes? If not, I can do it as well.

Meanwhile, I'll be tackling the output dir discrepancy today, and I hope to have it all finished by this coming Wednesday.
Thank you.

@MBARIMike
Contributor

MBARIMike commented Apr 29, 2024

Testing latest commits...

➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 10
➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 12
➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 13
➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 14
➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 15
➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 20

Resulting files have been put into X3D files viewable here with a consistent default viewpoint:
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp10.html (6.4M)
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp12.html (7.5M)
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp13.html (8.5M)
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp14.html (10M)
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp15.html (11M)
https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp20.html (21M)

Higher values of -qp result in greater fidelity with the "jaggies" largely disappearing with a value of 14 or 15.
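For intuition (a back-of-the-envelope sketch, not from the PR): Draco snaps each position component to one of 2**bits levels across the mesh's bounding-box extent, so for a grid roughly 65 km across (an assumed figure for Monterey25, based on its longitude range) the implied step size shrinks quickly as -qp rises:

```python
def quantization_step(extent_m: float, bits: int) -> float:
    """Approximate spatial step implied by quantizing positions to 2**bits levels."""
    return extent_m / (2 ** bits)

EXTENT_M = 65_000  # assumed east-west extent of the Monterey25 grid, in metres
for bits in (10, 12, 14, 15, 20):
    print(f"-qp {bits}: ~{quantization_step(EXTENT_M, bits):.2f} m steps")
```

At -qp 10 the steps are tens of metres, consistent with visible jaggies on flat terrain; by -qp 14 or 15 they drop to a few metres, matching the observation that the artifacts largely disappear there.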

The man page and help message should give some guidance in this regard:

       -q, --quantization
              Specify the quantization level for Draco compression. Quantization level affects
              the compression level of mesh vertices. Higher values increase compression but
              reduce precision, potentially affecting the visual quality of the terrain.

    <quantization>            Draco quantization settings. The quantization
                              value must be between 1 and 30. The quantization
                              settings are applied in the following order:
                              -qp: Position, -qn: Normal, -qt: TexCoord, -qc: Color

Also, it seems that the --compression option is not used anymore. My recollection is that was doing simply some gzip compression. I suggest removing that option now that --draco is working well. I also suggest adding a note to the README.md describing the work this team has done.

@varadpoddar
Author

> Testing latest commits...
>
> ➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 10
> ➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 12
> ➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 13
> ➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 14
> ➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 15
> ➜  mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd -o ~/Downloads -b -d -e 10 -qp 20
>
> Resulting files have been put into X3D files viewable here with a consistent default viewpoint: https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp10.html (6.4M) https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp12.html (7.5M) https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp13.html (8.5M) https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp14.html (10M) https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp15.html (11M) https://stoqs.mbari.org/x3d/Monterey25_10x/Monterey25_e10_draco_qp20.html (21M)

Thank you for testing this for us. I'll be using these numbers as part of our analysis to identify an optimal -qp value based on time spent encoding, rendered size, and the loading time of the assets.

> Higher values of -qp result in greater fidelity with the "jaggies" largely disappearing with a value of 14 or 15.

> The man page and help message should give some guidance in this regard:
>
>        -q, --quantization
>               Specify the quantization level for Draco compression. Quantization level affects
>               the compression level of mesh vertices. Higher values increase compression but
>               reduce precision, potentially affecting the visual quality of the terrain.
>
>     <quantization>            Draco quantization settings. The quantization
>                               value must be between 1 and 30. The quantization
>                               settings are applied in the following order:
>                               -qp: Position, -qn: Normal, -qt: TexCoord, -qc: Color

The above section of the man page seems to be flipped: higher quantization values actually increase precision and reduce compression. It seems the original edits from my end did not make it into the man page. Please expect a fix soon.

> Also, it seems that the --compression option is not used anymore. My recollection is that was doing simply some gzip compression. I suggest removing that option now that --draco is working well.

I'll work on removing the option without anything breaking by the end of the week.

> I also suggest adding a note to the README.md describing the work this team has done.

One of my teammates will make the requested change according to previous standards.

@MBARIMike : Could you please remind me of the steps or direction to take to make use of the Michigan and Erie .grd files for testing?

@MBARIMike
Contributor

The NetCDF metadata differs among the various programs that write .grd files. The metadata for the Monterey25.grd file is:

➜  build git:(capstone-spring2024) ✗ ncdump -h ~/GitHub/stoqsgit/stoqs/loaders/Monterey25.grd
netcdf Monterey25 {
dimensions:
	side = 2 ;
	xysize = 7037572 ;
variables:
	double x_range(side) ;
		x_range:units = "user_x_unit" ;
	double y_range(side) ;
		y_range:units = "user_y_unit" ;
	double z_range(side) ;
		z_range:units = "user_z_unit" ;
	double spacing(side) ;
	int dimension(side) ;
	float z(xysize) ;
		z:scale_factor = 1. ;
		z:add_offset = 0. ;
		z:node_offset = 0 ;

// global attributes:
		:title = "" ;
		:source = "xyz2grd mbm_arc2grd_tmp_ -GMonterey25.grd -H0 -I0.00025269750452443/0.00025269750452443 -R-122.507137008218/-121.78063168271/36.4408483957993/37.058946491866 -N-9999 -ZTLa -V" ;
}

Whereas the metadata for erie_lld.grd is:

➜  build git:(capstone-spring2024) ✗ ncdump -h ~/GitHub/stoqsgit/stoqs/loaders/erie_lld.grd
netcdf erie_lld {
dimensions:
	x = 7201 ;
	y = 2401 ;
variables:
	double x(x) ;
		x:long_name = "x" ;
		x:actual_range = -84., -78. ;
	double y(y) ;
		y:long_name = "y" ;
		y:actual_range = 41., 43. ;
	float z(y, x) ;
		z:long_name = "z" ;
		z:_FillValue = NaNf ;
		z:actual_range = -62.5768966674805, 603.534606933594 ;

// global attributes:
		:Conventions = "COARDS/CF-1.0" ;
		:title = "erie.grd" ;
		:history = "grdmath erie_igld.grd 173.5 SUB = erie_lld.grd" ;
		:GMT_version = "4.5.7 [64-bit]" ;
}

The same information is there, but it needs to be parsed and constructed differently for mbgrd2gltf. I'm adding more try/catch blocks to this section of code to be able to read both formats.

@dwcaress
Owner

dwcaress commented May 1, 2024 via email

@MBARIMike
Contributor

@dwcaress Thanks for your comment; it helps confirm my observations on the variety of .grd file formats. Given that there is no determinative way to discern the format (for example, by switching on the value of the Conventions global attribute), I think it's best to deal (for now) with the arbitrary values by having sets of try/catch blocks that can parse the files we care about. The mbgrd2gltf tool converts geographic longitude and latitude to ECEF coordinates before converting the grid into a mesh, so we don't need to convert the data to a projected coordinate system, even though doing so might add some determinism to the workflow.
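For reference, the longitude/latitude-to-ECEF conversion mentioned above is the standard geodetic transform; a self-contained sketch using WGS84 constants (the textbook formula, not mbgrd2gltf's actual code):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0           # semi-major axis, metres
E2 = 6.69437999014e-3   # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float = 0.0):
    """Convert geodetic lat/lon (degrees) and ellipsoidal height (m) to ECEF (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

# A point in Monterey Bay (approximate coordinates)
print(geodetic_to_ecef(36.7, -122.0))
```

At lat = lon = 0 this returns (6378137, 0, 0): the semi-major axis along the x-axis.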

@varadpoddar
Author

I was wondering if regex pattern matching might be an option for the variable names. Something along the lines of:

    import re

    try:
        # Define regex patterns for the dimension/variable names we expect
        side_pattern = re.compile(r"side")              # patterns matching for sides
        xy_size_pattern = re.compile(r"xysize|x|y")     # patterns matching for x, y size
        range_pattern = re.compile(r"[a-zA-Z]+_range")  # patterns like "x_range", "z_range"

        # Iterate over variables in the dataset to find matches
        for var_name in variables:
            # if/else through the regex patterns and assign data accordingly
            if side_pattern.fullmatch(var_name):
                ...
            elif xy_size_pattern.fullmatch(var_name):
                ...
            elif range_pattern.fullmatch(var_name):
                ...
    except Exception as e:
        raise RuntimeError(f"unrecognized grid metadata: {e}")

@MBARIMike
Contributor

Thanks @varadpoddar. So far I'm going with a pattern like this:

        // Support CF-1.0, CF1.7, and other arbitrary versions of metadata describing the grid
        try
        {
            _side = get_dimension_length(netcdf_id, "side");
        }
        catch (const std::exception&)
        {
            _side = 2;
        }
        try
        {
            _xysize = get_dimension_length(netcdf_id, "xysize");
        }
        catch (const std::exception&)
        {
            _x = get_dimension_length(netcdf_id, "x");
            _y = get_dimension_length(netcdf_id, "y");
            _xysize = _x * _y;
        }
        ...

With these kind of changes, I'm able to parse the data into the variables that bathymetry.cpp needs. However, my testing indicates several other issues (upstream of the draco compression) that are beyond the scope of the work entailed by this PR.
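The same fallback logic can be illustrated in Python against a plain dict of dimension lengths (the dict stands in for the netCDF file; the function name is my own, but the key names follow the two ncdump listings above):

```python
def grid_dims(dims: dict) -> tuple:
    """Return (side, xysize), supporting both GMT v2-style (side/xysize)
    and COARDS/CF-style (x/y) grid metadata."""
    side = dims.get("side", 2)          # default to 2 when absent, as in CF-style grids
    if "xysize" in dims:
        xysize = dims["xysize"]
    else:
        xysize = dims["x"] * dims["y"]  # derive the total size from the x and y dimensions
    return side, xysize

# Monterey25.grd-style metadata
print(grid_dims({"side": 2, "xysize": 7037572}))  # (2, 7037572)
# erie_lld.grd-style metadata
print(grid_dims({"x": 7201, "y": 2401}))          # (2, 17289601)
```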

@MBARIMike
Contributor

MBARIMike commented May 7, 2024

Following up on #1456 (comment) I made the changes shown in this gist and was able to make draco compressed meshes of the Michigan and Erie .grd files:

src/mbgrd2gltf/mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/michigan_lld.grd  -o ~/Downloads -b -e 10 -c 2 -d -qp 15
src/mbgrd2gltf/mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/erie_lld.grd -o ~/Downloads -b -e 10 -c 2 -d -qp 15

The resulting .glb files load into X3DOM here:
https://stoqs.mbari.org/x3d/michigan_lld_10x/michigan_lld_e10_c2_draco_qp15.html (86M .grd, 10M .glb)
https://stoqs.mbari.org/x3d/erie_lld_10x/erie_lld_e10_c2_draco_qp15.html (66M .grd, 7.5M .glb)

Note that the -c option was needed to reduce the grid size by striding through the original grid in steps of 2, otherwise an exception is raised (e.g. from https://stoqs.mbari.org/x3d/michigan_lld_10x/michigan_lld_e10.html):

 x3dom-full.js:7 Uncaught RangeError: Invalid typed array length: 800869164
    at new Uint8Array (<anonymous>)
    at x3dom.glTF2Loader._getGLTF (x3dom-full.js:7:427677)
    at x3dom.glTF2Loader.load (x3dom-full.js:7:409663)
    at i.onreadystatechange (x3dom-full.js:7:702908)

I'm making this comment here to document further work beyond the scope of this PR. Here is a checklist for follow-on improvements:

  • Implement changes indicated in this gist
  • Rename the --compression option to --stride
  • Figure out the cause of Uncaught RangeError: Invalid typed array length: and implement a fix

@varadpoddar
Author

varadpoddar commented May 8, 2024

@MBARIMike : On it. I apologize for the delay. I am setting a hard deadline for these tasks of the end of day Thursday. If that is later than you expected, I'll try to crank it out by tomorrow. Please let me know.

As for the error,

x3dom-full.js:7 Uncaught RangeError: Invalid typed array length: 800869164
at new Uint8Array (<anonymous>)
at x3dom.glTF2Loader._getGLTF (x3dom-full.js:7:427677)
at x3dom.glTF2Loader.load (x3dom-full.js:7:409663)
at i.onreadystatechange (x3dom-full.js:7:702908)

[Screenshot 2024-05-07 at 10 58 38 PM]

It seems to be because the glTF size, or more importantly the buffer size, exceeds the original capacity of the array, and your approach with the stride seems to be the optimal way to handle this scenario. I'll talk to the team tomorrow; however, I am thinking something along the lines of:

  • identifying the ceiling of the buffer array
  • factoring in an appropriate stride number
  • continuing generation (all of this would naturally only happen during the uncompressed workflow)
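The ceiling-then-stride idea in the list above could be sketched as follows (hypothetical function name, vertex size, and byte limit; the real glTF buffer layout is more involved):

```python
def min_stride(nx: int, ny: int, bytes_per_vertex: int, byte_limit: int) -> int:
    """Smallest stride s such that the decimated grid's buffer fits under byte_limit."""
    s = 1
    while (nx // s) * (ny // s) * bytes_per_vertex > byte_limit:
        s += 1
    return s

# A Michigan-scale grid (hypothetical numbers): stride 2 roughly quarters the buffer,
# bringing it back under a typed-array-friendly limit
print(min_stride(8000, 10000, 12, 500_000_000))
```

Because vertex count falls with the square of the stride, even a stride of 2 gives a large reduction, which matches the observation that -c 2 was enough to avoid the RangeError.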

Waiting to hear back.
Kindly,
Varad

@MBARIMike
Contributor

Hi @varadpoddar No need to crank it out tomorrow. The purpose of my checklist is to capture items for follow-on work.

@varadpoddar
Author

> Following up on #1456 (comment) I made the changes shown in this gist and was able to make draco compressed meshes of the Michigan and Erie .grd files:
>
> src/mbgrd2gltf/mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/michigan_lld.grd  -o ~/Downloads -b -e 10 -c 2 -d -qp 15
> src/mbgrd2gltf/mbgrd2gltf ~/GitHub/stoqsgit/stoqs/loaders/erie_lld.grd -o ~/Downloads -b -e 10 -c 2 -d -qp 15
>
> The resulting .glb files load into X3DOM here: https://stoqs.mbari.org/x3d/michigan_lld_10x/michigan_lld_e10_c2_draco_qp15.html (86M .grd, 10M .glb) https://stoqs.mbari.org/x3d/erie_lld_10x/erie_lld_e10_c2_draco_qp15.html (66M .grd, 7.5M .glb)
>
> Note that the -c option was needed to reduce the grid size by striding through the original grid in steps of 2, otherwise an exception is raised (e.g. from https://stoqs.mbari.org/x3d/michigan_lld_10x/michigan_lld_e10.html):
>
>  x3dom-full.js:7 Uncaught RangeError: Invalid typed array length: 800869164
>     at new Uint8Array (<anonymous>)
>     at x3dom.glTF2Loader._getGLTF (x3dom-full.js:7:427677)
>     at x3dom.glTF2Loader.load (x3dom-full.js:7:409663)
>     at i.onreadystatechange (x3dom-full.js:7:702908)

@MBARIMike : Based on some more extensive research, I located multiple causes of runtime errors in the JS frameworks used to visualize the glTF/glb data. One error happens because of irregular placement of buffer data for large files and the Draco decoder's inability to parse it. (Assumption:) since a fair chunk of memory is requested in one lump sum because of the big buffer size, I think it reaches past the frameworks' initial memory allotment. One fix that I explored but wasn't able to implement was chunking the buffer data; this would require breaking up the buffer data and assigning the properties (accessors, bufferViews) correctly while it is being Draco-compressed. The second error type was the Uint8Array range error.

As a workaround, and based on some experimentation, setting the stride value to 1 reduces the z size by ~10-20%. The strides or compressions thus keep the buffer from jumping out of the array, which leads me to believe that the issue might lie there.

> I'm making this comment here to document further work beyond the scope of this PR. Here is a checklist for follow-on improvements:
>
>   • Implement changes indicated in this gist
>   • Rename the --compression option to --stride
>   • Figure out the cause of Uncaught RangeError: Invalid typed array length: and implement a fix

All changes have been implemented, and the documentation reflects the same. In the spirit of improving the project, I spent some time exploring ways to eliminate the try/catch blocks in favor of some standard for parsing in the netCDF values.

Toward this exploration, I ended up writing a JSON parser that could theoretically perform the task as it should have; however, it wasn't published as part of the pull request. The following would be a blueprint of a "Universal" NetCDF value interpreter.

  1. The lexer would depend on JSON. The library is included as part of the tiny-gltf library and thus introduces no overhead; however, the amount of JSON writing at this stage is more than the try/catch blocks.

    • The program would parse the JSON to retrieve a list of variables to get, the methods to get them with, and any and all fallback models or variables. Fallbacks could also be calculations based on a grammar such as 1,0 + 1,1 / 2, which would be parsed left to right, three tokens at a time, until exhausted. The numbers represent the array index where the value for the dependent would be stored; this would be retrieved from an actual array in the code.

  2. It is very nearly impossible to account for all the variables; however, one can come close by using C to create void pointers pointing to generic functions. Alternatively, std::any might come in handy to save variable pointers to be accessed based on their names.

  3. One would need to write a small expression parser for the calculations. See 1, subpoint 1. (Uses recursion.)
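The left-to-right, three-tokens-at-a-time evaluation described in point 1 could look like this (my own illustration; the index-to-stored-value lookup is simplified to a dict):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_ltr(expr: str, lookup: dict) -> float:
    """Evaluate left to right with no operator precedence, consuming (op, operand) pairs."""
    tokens = expr.split()

    def value(tok: str) -> float:
        # Tokens like "1,1" name stored values; anything else is a literal number
        return lookup[tok] if tok in lookup else float(tok)

    acc = value(tokens[0])
    for i in range(1, len(tokens) - 1, 2):
        acc = OPS[tokens[i]](acc, value(tokens[i + 1]))
    return acc

# Strictly left to right: (41 + 43) / 2, not 41 + (43 / 2)
print(eval_ltr("1,0 + 1,1 / 2", {"1,0": 41.0, "1,1": 43.0}))  # 42.0
```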

Time constraints led me to leave this approach behind in favor of the try/catch blocks from the gist. I hope another team can use this approach to gain some traction in the wild-west world that is netCDF.

Regards,
Varad
