The validator generates a ValidationReport for each validated tileset. This report can be serialized to JSON. When validating a single tileset from the command line, the report is printed to the console, but it can also be written to an output file. When validating multiple tilesets (recursively, with the --tilesetsDirectory option), one can set the --writeReports flag, which causes the reports to be written into files, one for each validated tileset.
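As a rough sketch of how these two modes could be invoked (the --tilesetFile and --reportFile option names are assumptions based on the current command line interface, and the paths are placeholders):

# Validate a single tileset and write the report to a file
npx 3d-tiles-validator --tilesetFile ./data/tileset.json --reportFile ./data/tileset.report.json

# Validate all tilesets in a directory, writing one report file per tileset
npx 3d-tiles-validator --tilesetsDirectory ./data --writeReports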
There are some degrees of freedom for the structure and contents of these reports, and for how they should be written.
Report structure and contents
Right now, such a report may look like the following example:
{
  "date": "2023-01-31T16:05:42.981Z",
  "numErrors": 1,
  "numWarnings": 0,
  "numInfos": 0,
  "issues": [
    {
      "type": "CONTENT_VALIDATION_ERROR",
      "path": "tiles/b3dm/invalid.b3dm",
      "message": "tiles/b3dm/invalid.b3dm caused validation errors",
      "severity": "ERROR",
      "causes": [
        {
          "type": "BINARY_INVALID_VALUE",
          "path": "tiles/b3dm/invalid.b3dm",
          "message": "The version must be 1 but is 2",
          "severity": "ERROR"
        }
      ]
    }
  ]
}
The summary part currently consists of numErrors/numWarnings/numInfos, but could be refined or structured differently.
For example: when there is one external tileset that contains 10 errors, numErrors will still be 1. Due to the hierarchical structure of the report, there will only be one error in the top-level tileset, namely a single EXTERNAL_TILESET_VALIDATION_ERROR. This makes sense, because the validator should not report, say, 100 errors in one tileset only because one GLB content contains 100 errors.
One could consider adding further information here. For example, one could add a property like numTotalErrors that reports the number of ERROR issues, counting them recursively.
(Whether this should refer to all "nodes" or only the "leaf nodes" will then have to be decided - i.e. whether a nested structure like the one sketched below should count as 3 errors or 5 errors...)
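As a hypothetical illustration (the issue types are only examples taken from the text above), consider an error whose nested causes are themselves errors:

{
  "type": "EXTERNAL_TILESET_VALIDATION_ERROR",
  "severity": "ERROR",
  "causes": [
    {
      "type": "CONTENT_VALIDATION_ERROR",
      "severity": "ERROR",
      "causes": [
        { "type": "BINARY_INVALID_VALUE", "severity": "ERROR" },
        { "type": "BINARY_INVALID_VALUE", "severity": "ERROR" },
        { "type": "BINARY_INVALID_VALUE", "severity": "ERROR" }
      ]
    }
  ]
}

Counting only the leaf nodes yields 3 errors; counting every node with severity ERROR yields 5.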
One could even go further and add real, in-depth statistics on the reports. These would be optional (i.e. not enabled by default) and contain detailed information - for example, how often each error type appeared in the whole report:
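A sketch of what such optional statistics could look like - the property names here are only a suggestion and not part of the current report format:

{
  "statistics": {
    "numTotalErrors": 5,
    "issueCounts": {
      "EXTERNAL_TILESET_VALIDATION_ERROR": 1,
      "CONTENT_VALIDATION_ERROR": 1,
      "BINARY_INVALID_VALUE": 3
    }
  }
}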
Regardless of the level of detail of the report, much of this should/could be implemented solely as some sort of "post-processing step" on the ValidationReport object. This would offer some flexibility for the application of this step. For example, one could then just load an existing report JSON file, and create a summary of its contents (independent of the actual validator run).
Report files
When validating multiple tilesets in a directory and setting the --writeReports flag, the validator will write a <name>.report.json file for each <name>.json tileset file. This file will be written into the same directory as the tileset itself. One could consider adding further options to configure where and how these reports should be written.
For example, it could make sense to define a target directory for the reports, so that they are all in one place (and do not pollute the input directories). The names of these report files will then have to be disambiguated. For example, for tilesets in a directory structure like the one sketched below, the report files in a user-defined output directory could be named accordingly:
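A hypothetical illustration (the directory and file names are placeholders): given an input structure

./input/cityA/tileset.json
./input/cityB/tileset.json

the reports in a dedicated output directory could encode the relative path of each tileset in the report file name, for example

./reports/cityA-tileset.report.json
./reports/cityB-tileset.report.json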
Combining the aforementioned points
One option that could be desirable would be to have a single report for multiple input files. A user could validate all tilesets in one directory, recursively, and create a single JSON structure that could then be written into a single file, accordingly. This wouldn't need to be much more than a structure like the one sketched below, with some degrees of freedom for the exact structure, and maybe a "global" summary of all reports.
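A rough sketch of such a combined report (the property names and layout are only a suggestion, not an existing format):

{
  "summary": {
    "numErrors": 3,
    "numWarnings": 0,
    "numInfos": 0
  },
  "reports": {
    "cityA/tileset.json": { ... },
    "cityB/tileset.json": { ... }
  }
}

Each entry under "reports" would map the (relative) path of a validated tileset to the ValidationReport that was generated for it, and the "summary" would aggregate the counts over all of them.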