diff --git a/libs/i18n-labels/en/datasets.json b/libs/i18n-labels/en/datasets.json index 57524aad56..87b4c987de 100644 --- a/libs/i18n-labels/en/datasets.json +++ b/libs/i18n-labels/en/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "SAR with Neural classification", - "description": "Detection footprints are areas within each satellite scan (or scene) that the platform uses to perform detections. These filters help to keep relevant detections and exclude data that may be inaccurate.
\n
\nDetection footprints are smaller than the total scene as they exclude any land areas and islands, and exclude a 500 meter buffer from the boundaries of the scene and a 1 kilometer buffer from shorelines. ", + "description": "

Overview

\n

Satellite synthetic aperture radar (SAR) is a spaceborne radar imaging system that can detect at-sea vessels and structures in any weather conditions. Microwave pulses are transmitted by a satellite-based antenna towards the Earth’s surface, and the microwave energy scattered back to the spacecraft is measured and integrated to form a “backscatter” image. The SAR image contains rich information about objects on the water, such as their size, orientation and texture. SAR imaging systems overcome most weather and illumination conditions: clouds and rain, because microwaves penetrate cloud cover; and daylight or darkness, because radar is an “active” sensor that emits and records its own energy. This gives SAR an advantage over “passive” satellite sensors such as electro-optical imagers, which use a satellite-based camera to record the sunlight reflected from, or the infrared radiation emitted by, objects on the ground; those methods can be confounded by cloud cover, haze, weather events and seasonal darkness at high latitudes.

\n

Use cases

\n\n

Limitations

\n\n

Methods

\n

SAR imagery

\n

We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.

\n

[1]\n \n https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf\n \n

\n

[2]\n \n https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario\n \n

\n

Detection footprints

\n

Detection footprints are areas within each satellite scan (or scene) that our system uses to perform detections. These filters help to keep relevant detections and exclude data that may be inaccurate. Detection footprints are smaller than the total scene as they exclude any land areas and islands, and exclude a 500 meter buffer from the boundaries of the scene and a 1 kilometer buffer from shorelines.
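The buffering rules above can be sketched on a coarse grid. This is a minimal illustrative sketch, not GFW’s actual implementation; the grid size, cell resolution, and brute-force distance computation are assumptions for demonstration only.

```python
import numpy as np

def detection_footprint(land, cell_m=500.0, shore_buffer_m=1000.0, edge_buffer_m=500.0):
    """Boolean grid of cells where detection is performed (illustrative sketch)."""
    h, w = land.shape
    ok = ~land                       # start from all water cells
    ys, xs = np.nonzero(land)
    # Remove water cells within shore_buffer_m of any land cell (brute force).
    for i in range(h):
        for j in range(w):
            if ok[i, j] and ys.size:
                d = np.hypot((ys - i) * cell_m, (xs - j) * cell_m).min()
                if d <= shore_buffer_m:
                    ok[i, j] = False
    # Remove the buffer along the scene boundaries.
    k = int(np.ceil(edge_buffer_m / cell_m))
    ok[:k, :] = ok[-k:, :] = False
    ok[:, :k] = ok[:, -k:] = False
    return ok

land = np.zeros((6, 6), dtype=bool)
land[3, 3] = True                    # a small island
footprint = detection_footprint(land)
print(footprint[1, 1], footprint[3, 2])   # True False (far from vs. next to land)
```

A production system would use vector geometries (coastline polygons) rather than a raster grid, but the exclusion logic is the same.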

\n

Filtering

\n

GFW has post-processed the SAR detections to reduce noise (false positives), remove offshore infrastructure from this layer focused on vessels, and exclude areas with sea ice at high latitudes.

\n

Vessel detection by SAR

\n

Detecting vessels with SAR is based on an algorithm known as Constant False Alarm Rate (CFAR), a thresholding technique widely used for anomaly detection in radar imagery. The algorithm searches for pixel values that are unusually bright (the targets) compared with those in the surrounding area (the sea clutter). Scanning the whole image pixel by pixel, it sets a threshold from the pixel values of the local background (within a window); pixel values above the threshold constitute an anomaly, are likely to be samples from a target, and are therefore recorded as a detection.
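The local-window thresholding described above can be sketched as a cell-averaging CFAR. This is a minimal sketch for illustration; the window size, guard band, and threshold factor are assumed values, not those of the production detector.

```python
import numpy as np

def cfar_detect(img, guard=2, window=8, factor=5.0):
    """Return a boolean mask of pixels anomalously bright vs. local background."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(window, h - window):
        for j in range(window, w - window):
            block = img[i - window:i + window + 1, j - window:j + window + 1].copy()
            # Exclude guard cells around the pixel under test so the target
            # does not contaminate its own background estimate.
            block[window - guard:window + guard + 1,
                  window - guard:window + guard + 1] = np.nan
            background = np.nanmean(block)       # local sea-clutter estimate
            mask[i, j] = img[i, j] > factor * background
    return mask

# Synthetic example: flat clutter with one bright "vessel" pixel.
sea = np.ones((40, 40))
sea[20, 20] = 100.0
detections = cfar_detect(sea)
print(int(detections.sum()))   # 1
```

Real detectors vectorize this scan and model the clutter statistics explicitly, but the anomaly test per pixel is the same.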

\n

Vessel presence and length estimation

\n

To estimate the length of every detected object and also to identify when our CFAR algorithm made false detections, we designed a deep convolutional neural network (ConvNet) based on the modern ResNet (Residual Networks) architecture. This single-input/multi-output ConvNet takes dual-band SAR image tiles of 80 by 80 pixels as input, and outputs the probability of object presence (known as a “binary classification task”) and the estimated length of the object (known as a “regression task”).
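The single-input/multi-output contract described above can be sketched at the shape level. This stand-in uses a fixed random projection in place of the ResNet backbone, purely to illustrate how one input tile yields both a presence probability and a length estimate; none of the weights or names come from the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_head_model(tile, w_feat, w_cls, w_reg):
    """tile: (80, 80, 2) dual-band SAR chip -> (presence_prob, length_m)."""
    assert tile.shape == (80, 80, 2)
    # Stand-in feature extractor (the real model is a ResNet-style ConvNet).
    feats = np.tanh(tile.reshape(-1) @ w_feat)        # (64,) shared features
    presence = sigmoid(feats @ w_cls)                 # binary classification head
    length_m = float(np.maximum(feats @ w_reg, 0.0))  # regression head, metres >= 0
    return float(presence), length_m

w_feat = rng.normal(size=(80 * 80 * 2, 64)) * 0.01
w_cls = rng.normal(size=64)
w_reg = rng.normal(size=64) * 10.0

p, length_m = two_head_model(rng.normal(size=(80, 80, 2)), w_feat, w_cls, w_reg)
print(0.0 <= p <= 1.0, length_m >= 0.0)   # True True
```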

\n

Fishing and non-fishing classification

\n

To identify whether a detected vessel is a fishing or non-fishing vessel, we use a machine learning model. For this classification task we use a ConvNeXt architecture modified to process two inputs: the estimated length of the vessel from SAR (a scalar quantity) and a stack of environmental rasters centered at the vessel’s location (a multi-channel image). This multi-input-mixed-data/single-output model passes the raster stack (11 channels) through a series of convolutional layers and combines the resulting feature maps with the vessel length value to perform a binary classification: fishing or non-fishing.
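The fusion of a raster stack with a scalar input can be sketched as follows. This is an illustrative stand-in (global pooling plus a linear head), not the modified ConvNeXt itself; all weight shapes and the length scaling are assumptions.

```python
import numpy as np

def fused_classifier(rasters, length_m, w_conv, w_out):
    """rasters: (11, H, W) environmental stack; length_m: scalar length from SAR."""
    assert rasters.shape[0] == 11
    # Stand-in for the convolutional branch: per-channel global average pooling,
    # then a linear projection of the 11 channel features.
    channel_feats = rasters.mean(axis=(1, 2))                  # (11,)
    fused = np.concatenate([channel_feats @ w_conv,            # (8,) image features
                            [length_m / 100.0]])               # scalar joins late
    logit = fused @ w_out                                      # single output
    return "fishing" if logit > 0 else "non-fishing"

rng = np.random.default_rng(1)
w_conv = rng.normal(size=(11, 8))
w_out = rng.normal(size=9)
label = fused_classifier(rng.normal(size=(11, 16, 16)), 35.0, w_conv, w_out)
print(label in ("fishing", "non-fishing"))   # True
```

The design point this illustrates is that the scalar input bypasses the convolutional branch and is concatenated with the pooled image features just before the final classification layer.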

\n

The environmental layers used to differentiate between fishing and non-fishing include:

\n
1. vessel density (based on SAR)
2. average vessel length (based on SAR)
3. bathymetry
4. distance from port
5. hours of non-fishing vessel presence, under 50 m (from AIS)
6. hours of non-fishing vessel presence, over 50 m (from AIS)
7. average surface temperature
8. average current speed
9. standard deviation of daily temperature
10. standard deviation of daily current speed
11. average chlorophyll
\n

AIS matching and vessel identity

\n

Automatic identification system (AIS) data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to carry AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. In addition, large “blind spots” arise along coastal waters, either because some nations restrict access to AIS data captured by terrestrial receivers rather than satellites, or because reception is degraded by high vessel density and low-quality AIS devices. Unmatched SAR detections therefore provide the missing information about vessel traffic in the ocean.

\n

SAR and AIS matching

\n

Matching SAR detections to vessels’ GPS coordinates from the automatic identification system (AIS) is challenging because the timestamps of the SAR images and the AIS records do not coincide, and a single AIS position can potentially match multiple detections appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific SAR detection, we followed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were developed from one year of global AIS data from the Global Fishing Watch pipeline, which draws on Spire Global and Orbcomm satellite data and includes roughly 10 billion vessel positions. Rasters were computed for six vessel classes, six speed ranges and 36 time intervals, giving the likely position of a vessel that could match a SAR detection based on its class, speed and the elapsed time.
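Scoring a candidate match against such a raster can be sketched as follows. The Gaussian stand-in raster, grid size, and cell resolution here are assumptions for illustration; the real rasters are empirical, derived from historical AIS tracks per vessel class.

```python
import numpy as np

def gaussian_raster(speed_kn, dt_min, size=21, cell_km=1.0):
    """Stand-in probability raster: spread grows with speed and elapsed time."""
    sigma = max(0.5, speed_kn * 1.852 * dt_min / 60.0)   # km the vessel could travel
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    r2 = ((yy - c) ** 2 + (xx - c) ** 2) * cell_km ** 2
    p = np.exp(-r2 / (2 * sigma ** 2))
    return p / p.sum()                                   # normalize to probabilities

def match_score(offset_km, speed_kn, dt_min):
    """Probability that a detection at (dy, dx) km from the AIS fix matches it."""
    raster = gaussian_raster(speed_kn, dt_min)
    c = raster.shape[0] // 2
    iy, ix = c + int(round(offset_km[0])), c + int(round(offset_km[1]))
    return raster[iy, ix]

# A nearby detection scores higher than a distant one for a slow vessel.
near = match_score((1, 0), speed_kn=5, dt_min=10)
far = match_score((8, 8), speed_kn=5, dt_min=10)
print(near > far)   # True
```

Ranking every AIS track against every detection by such scores, and keeping the most probable assignments, yields the matched and unmatched detection sets.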

\n

Resources, code and other notes

\n

All code developed in this study for SAR detection, deep learning models, and analyses is open source and freely available at\n \n https://github.com/GlobalFishingWatch/paper-industrial-activity\n .\n

\n

Source data and citations

\n

All vessel data are freely available through the Global Fishing Watch data portal at\n \n https://globalfishingwatch.org\n . All data to reproduce our supporting scientific paper can be downloaded from\n \n https://doi.org/10.6084/m9.figshare.24309475\n \n (statistical analyses and figures) and\n \n https://doi.org/10.6084/m9.figshare.24309469\n \n (model training and evaluation).\n

\n

License

\n

Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us.", "schema": { "id": "id", "lat": "lat", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "Panama VMS (Public Non fishing vessels)", - "description": "Dataset for VMS Panama - Carriers (Public)", + "name": "Panama VMS (Public Fishing Vessels)", + "description": "Dataset for VMS Panama (Public)", "schema": { "id": "id", "selfReportedInfo": "selfReportedInfo", diff --git a/libs/i18n-labels/es/datasets.json b/libs/i18n-labels/es/datasets.json index 15be91c424..accf2013c8 100644 --- a/libs/i18n-labels/es/datasets.json +++ b/libs/i18n-labels/es/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "SAR with Neural classification", - "description": "La zona de detección es el área dentro de cada escaneo satelital (o escena) que la plataforma usa para realizar detecciones. Estos filtros ayudan a mantener detecciones relevantes y excluyen datos que pueden ser inexactos.\n\nLa zona de detección es más pequeña que la escena total, ya que excluye cualquier área terrestre o islas, al igual que un borde de 500 metros en los límites de la escena y cualquier área a menos de 1 kilómetro de las costas.\n", + "description": "

Overview

\n

Satellite synthetic aperture radar (SAR) is a spaceborne radar imaging system that can detect at-sea vessels and structures in any weather conditions. Microwave pulses are transmitted by a satellite-based antenna towards the Earth’s surface, and the microwave energy scattered back to the spacecraft is measured and integrated to form a “backscatter” image. The SAR image contains rich information about objects on the water, such as their size, orientation and texture. SAR imaging systems overcome most weather and illumination conditions: clouds and rain, because microwaves penetrate cloud cover; and daylight or darkness, because radar is an “active” sensor that emits and records its own energy. This gives SAR an advantage over “passive” satellite sensors such as electro-optical imagers, which use a satellite-based camera to record the sunlight reflected from, or the infrared radiation emitted by, objects on the ground; those methods can be confounded by cloud cover, haze, weather events and seasonal darkness at high latitudes.

\n

Use cases

\n\n

Limitations

\n\n

Methods

\n

SAR imagery

\n

We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.

\n

[1]\n \n https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf\n \n

\n

[2]\n \n https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario\n \n

\n

Detection footprints

\n

Detection footprints are areas within each satellite scan (or scene) that our system uses to perform detections. These filters help to keep relevant detections and exclude data that may be inaccurate. Detection footprints are smaller than the total scene as they exclude any land areas and islands, and exclude a 500 meter buffer from the boundaries of the scene and a 1 kilometer buffer from shorelines.

\n

Filtering

\n

GFW has post-processed the SAR detections to reduce noise (false positives), remove offshore infrastructure from this layer focused on vessels, and exclude areas with sea ice at high latitudes.

\n

Vessel detection by SAR

\n

Detecting vessels with SAR is based on an algorithm known as Constant False Alarm Rate (CFAR), a thresholding technique widely used for anomaly detection in radar imagery. The algorithm searches for pixel values that are unusually bright (the targets) compared with those in the surrounding area (the sea clutter). Scanning the whole image pixel by pixel, it sets a threshold from the pixel values of the local background (within a window); pixel values above the threshold constitute an anomaly, are likely to be samples from a target, and are therefore recorded as a detection.

\n

Vessel presence and length estimation

\n

To estimate the length of every detected object and also to identify when our CFAR algorithm made false detections, we designed a deep convolutional neural network (ConvNet) based on the modern ResNet (Residual Networks) architecture. This single-input/multi-output ConvNet takes dual-band SAR image tiles of 80 by 80 pixels as input, and outputs the probability of object presence (known as a “binary classification task”) and the estimated length of the object (known as a “regression task”).

\n

Fishing and non-fishing classification

\n

To identify whether a detected vessel is a fishing or non-fishing vessel, we use a machine learning model. For this classification task we use a ConvNeXt architecture modified to process two inputs: the estimated length of the vessel from SAR (a scalar quantity) and a stack of environmental rasters centered at the vessel’s location (a multi-channel image). This multi-input-mixed-data/single-output model passes the raster stack (11 channels) through a series of convolutional layers and combines the resulting feature maps with the vessel length value to perform a binary classification: fishing or non-fishing.

\n

The environmental layers used to differentiate between fishing and non-fishing include:

\n
1. vessel density (based on SAR)
2. average vessel length (based on SAR)
3. bathymetry
4. distance from port
5. hours of non-fishing vessel presence, under 50 m (from AIS)
6. hours of non-fishing vessel presence, over 50 m (from AIS)
7. average surface temperature
8. average current speed
9. standard deviation of daily temperature
10. standard deviation of daily current speed
11. average chlorophyll
\n

AIS matching and vessel identity

\n

Automatic identification system (AIS) data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to carry AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. In addition, large “blind spots” arise along coastal waters, either because some nations restrict access to AIS data captured by terrestrial receivers rather than satellites, or because reception is degraded by high vessel density and low-quality AIS devices. Unmatched SAR detections therefore provide the missing information about vessel traffic in the ocean.

\n

SAR and AIS matching

\n

Matching SAR detections to vessels’ GPS coordinates from the automatic identification system (AIS) is challenging because the timestamps of the SAR images and the AIS records do not coincide, and a single AIS position can potentially match multiple detections appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific SAR detection, we followed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were developed from one year of global AIS data from the Global Fishing Watch pipeline, which draws on Spire Global and Orbcomm satellite data and includes roughly 10 billion vessel positions. Rasters were computed for six vessel classes, six speed ranges and 36 time intervals, giving the likely position of a vessel that could match a SAR detection based on its class, speed and the elapsed time.

\n

Resources, code and other notes

\n

All code developed in this study for SAR detection, deep learning models, and analyses is open source and freely available at\n \n https://github.com/GlobalFishingWatch/paper-industrial-activity\n .\n

\n

Source data and citations

\n

All vessel data are freely available through the Global Fishing Watch data portal at\n \n https://globalfishingwatch.org\n . All data to reproduce our supporting scientific paper can be downloaded from\n \n https://doi.org/10.6084/m9.figshare.24309475\n \n (statistical analyses and figures) and\n \n https://doi.org/10.6084/m9.figshare.24309469\n \n (model training and evaluation).\n

\n

License

\n

Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us.", "schema": { "id": "id", "lat": "lat", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "Panama VMS (Public Non fishing vessels)", - "description": "Dataset for VMS Panama - Carriers (Public)", + "name": "Panama VMS (Public Fishing Vessels)", + "description": "Dataset for VMS Panama (Public)", "schema": { "id": "id", "selfReportedInfo": "selfReportedInfo", diff --git a/libs/i18n-labels/fr/datasets.json b/libs/i18n-labels/fr/datasets.json index 4bb9580138..89e3a8c3fe 100644 --- a/libs/i18n-labels/fr/datasets.json +++ b/libs/i18n-labels/fr/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "SAR with Neural classification", - "description": "SAR", + "description": "

Overview

\n

Satellite synthetic aperture radar (SAR) is a spaceborne radar imaging system that can detect at-sea vessels and structures in any weather conditions. Microwave pulses are transmitted by a satellite-based antenna towards the Earth’s surface, and the microwave energy scattered back to the spacecraft is measured and integrated to form a “backscatter” image. The SAR image contains rich information about objects on the water, such as their size, orientation and texture. SAR imaging systems overcome most weather and illumination conditions: clouds and rain, because microwaves penetrate cloud cover; and daylight or darkness, because radar is an “active” sensor that emits and records its own energy. This gives SAR an advantage over “passive” satellite sensors such as electro-optical imagers, which use a satellite-based camera to record the sunlight reflected from, or the infrared radiation emitted by, objects on the ground; those methods can be confounded by cloud cover, haze, weather events and seasonal darkness at high latitudes.

\n

Use cases

\n\n

Limitations

\n\n

Methods

\n

SAR imagery

\n

We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.

\n

[1]\n \n https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf\n \n

\n

[2]\n \n https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario\n \n

\n

Detection footprints

\n

Detection footprints are areas within each satellite scan (or scene) that our system uses to perform detections. These filters help to keep relevant detections and exclude data that may be inaccurate. Detection footprints are smaller than the total scene as they exclude any land areas and islands, and exclude a 500 meter buffer from the boundaries of the scene and a 1 kilometer buffer from shorelines.

\n

Filtering

\n

GFW has post-processed the SAR detections to reduce noise (false positives), remove offshore infrastructure from this layer focused on vessels, and exclude areas with sea ice at high latitudes.

\n

Vessel detection by SAR

\n

Detecting vessels with SAR is based on an algorithm known as Constant False Alarm Rate (CFAR), a thresholding technique widely used for anomaly detection in radar imagery. The algorithm searches for pixel values that are unusually bright (the targets) compared with those in the surrounding area (the sea clutter). Scanning the whole image pixel by pixel, it sets a threshold from the pixel values of the local background (within a window); pixel values above the threshold constitute an anomaly, are likely to be samples from a target, and are therefore recorded as a detection.

\n

Vessel presence and length estimation

\n

To estimate the length of every detected object and also to identify when our CFAR algorithm made false detections, we designed a deep convolutional neural network (ConvNet) based on the modern ResNet (Residual Networks) architecture. This single-input/multi-output ConvNet takes dual-band SAR image tiles of 80 by 80 pixels as input, and outputs the probability of object presence (known as a “binary classification task”) and the estimated length of the object (known as a “regression task”).

\n

Fishing and non-fishing classification

\n

To identify whether a detected vessel is a fishing or non-fishing vessel, we use a machine learning model. For this classification task we use a ConvNeXt architecture modified to process two inputs: the estimated length of the vessel from SAR (a scalar quantity) and a stack of environmental rasters centered at the vessel’s location (a multi-channel image). This multi-input-mixed-data/single-output model passes the raster stack (11 channels) through a series of convolutional layers and combines the resulting feature maps with the vessel length value to perform a binary classification: fishing or non-fishing.

\n

The environmental layers used to differentiate between fishing and non-fishing include:

\n
1. vessel density (based on SAR)
2. average vessel length (based on SAR)
3. bathymetry
4. distance from port
5. hours of non-fishing vessel presence, under 50 m (from AIS)
6. hours of non-fishing vessel presence, over 50 m (from AIS)
7. average surface temperature
8. average current speed
9. standard deviation of daily temperature
10. standard deviation of daily current speed
11. average chlorophyll
\n

AIS matching and vessel identity

\n

Automatic identification system (AIS) data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to carry AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. In addition, large “blind spots” arise along coastal waters, either because some nations restrict access to AIS data captured by terrestrial receivers rather than satellites, or because reception is degraded by high vessel density and low-quality AIS devices. Unmatched SAR detections therefore provide the missing information about vessel traffic in the ocean.

\n

SAR and AIS matching

\n

Matching SAR detections to vessels’ GPS coordinates from the automatic identification system (AIS) is challenging because the timestamps of the SAR images and the AIS records do not coincide, and a single AIS position can potentially match multiple detections appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific SAR detection, we followed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were developed from one year of global AIS data from the Global Fishing Watch pipeline, which draws on Spire Global and Orbcomm satellite data and includes roughly 10 billion vessel positions. Rasters were computed for six vessel classes, six speed ranges and 36 time intervals, giving the likely position of a vessel that could match a SAR detection based on its class, speed and the elapsed time.

\n

Resources, code and other notes

\n

All code developed in this study for SAR detection, deep learning models, and analyses is open source and freely available at\n \n https://github.com/GlobalFishingWatch/paper-industrial-activity\n .\n

\n

Source data and citations

\n

All vessel data are freely available through the Global Fishing Watch data portal at\n \n https://globalfishingwatch.org\n . All data to reproduce our supporting scientific paper can be downloaded from\n \n https://doi.org/10.6084/m9.figshare.24309475\n \n (statistical analyses and figures) and\n \n https://doi.org/10.6084/m9.figshare.24309469\n \n (model training and evaluation).\n

\n

License

\n

Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us.", "schema": { "id": "id", "lat": "lat", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "Panama VMS (Public Non fishing vessels)", - "description": "Dataset for VMS Panama - Carriers (Public)", + "name": "Panama VMS (Public Fishing Vessels)", + "description": "Dataset for VMS Panama (Public)", "schema": { "id": "id", "selfReportedInfo": "selfReportedInfo", diff --git a/libs/i18n-labels/id/datasets.json b/libs/i18n-labels/id/datasets.json index c4fb29befe..824e40ca34 100644 --- a/libs/i18n-labels/id/datasets.json +++ b/libs/i18n-labels/id/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "SAR with Neural classification", - "description": "SAR", + "description": "

Overview

\n

Satellite synthetic aperture radar (SAR) is a spaceborne radar imaging system that can detect at-sea vessels and structures in any weather conditions. Microwave pulses are transmitted by a satellite-based antenna towards the Earth’s surface, and the microwave energy scattered back to the spacecraft is measured and integrated to form a “backscatter” image. The SAR image contains rich information about objects on the water, such as their size, orientation and texture. SAR imaging systems overcome most weather and illumination conditions: clouds and rain, because microwaves penetrate cloud cover; and daylight or darkness, because radar is an “active” sensor that emits and records its own energy. This gives SAR an advantage over “passive” satellite sensors such as electro-optical imagers, which use a satellite-based camera to record the sunlight reflected from, or the infrared radiation emitted by, objects on the ground; those methods can be confounded by cloud cover, haze, weather events and seasonal darkness at high latitudes.

\n

Use cases

\n\n

Limitations

\n\n

Methods

\n

SAR imagery

\n

We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.

\n

[1]\n \n https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf\n \n

\n

[2]\n \n https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario\n \n

\n

Detection footprints

\n

Detection footprints are areas within each satellite scan (or scene) that our system uses to perform detections. These filters help keep relevant detections and exclude data that may be inaccurate. Detection footprints are smaller than the total scene because they exclude land areas and islands, a 500-meter buffer from the boundaries of the scene, and a 1-kilometer buffer from shorelines.

\n

Filtering

\n

GFW has post-processed the SAR detections to reduce noise (false positives), remove offshore infrastructure from this layer focused on vessels, and exclude areas with sea ice at high latitudes.

\n

Vessel detection by SAR

\n

Detecting vessels with SAR is based on an algorithm known as Constant False Alarm Rate (CFAR), a thresholding algorithm used for anomaly detection in radar imagery. This algorithm searches for pixel values that are unusually bright (the targets) compared to those in the surrounding area (the sea clutter). It sets a threshold based on the pixel values of the local background (within a window), scanning the whole image pixel by pixel. Pixel values above the threshold constitute an anomaly, are likely to be samples from a target, and are therefore included as a detection.
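A minimal cell-averaging CFAR sketch follows; the window sizes and threshold multiplier are illustrative assumptions, not the production parameters:

```python
import numpy as np

def cfar_detect(image, guard=2, window=8, k=5.0):
    """Cell-averaging CFAR: flag pixels unusually bright relative to the
    surrounding sea clutter. `guard` and `window` are half-widths of the
    guard region and background window; `k` scales the threshold."""
    h, w = image.shape
    detections = np.zeros((h, w), dtype=bool)
    for i in range(window, h - window):
        for j in range(window, w - window):
            block = image[i - window:i + window + 1,
                          j - window:j + window + 1].astype(float).copy()
            # Mask the guard cells so the target itself does not inflate
            # the background statistics.
            block[window - guard:window + guard + 1,
                  window - guard:window + guard + 1] = np.nan
            mu, sigma = np.nanmean(block), np.nanstd(block)
            detections[i, j] = image[i, j] > mu + k * sigma
    return detections

# A flat "sea" with one bright target pixel.
sea = np.ones((40, 40))
sea[20, 20] = 50.0
hits = cfar_detect(sea)
```

Scanning pixel by pixel this way is slow in pure Python; production systems vectorize the background estimate, but the thresholding logic is the same.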

\n

Vessel presence and length estimation

\n

To estimate the length of every detected object and also to identify when our CFAR algorithm made false detections, we designed a deep convolutional neural network (ConvNet) based on the modern ResNet (Residual Networks) architecture. This single-input/multi-output ConvNet takes dual-band SAR image tiles of 80 by 80 pixels as input, and outputs the probability of object presence (known as a “binary classification task”) and the estimated length of the object (known as a “regression task”).
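As a toy illustration of the single-input/multi-output idea (one shared representation feeding a classification head and a regression head), here is a NumPy sketch; the feature extractor and all weights are hypothetical stand-ins for the learned ResNet backbone, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_features(tile, w_backbone):
    """Toy stand-in for the convolutional backbone: per-band global
    statistics of the dual-band tile, projected through one ReLU layer."""
    stats = np.array([tile[0].mean(), tile[0].std(),
                      tile[1].mean(), tile[1].std()])
    return np.maximum(w_backbone @ stats, 0.0)

# Hypothetical weights; a real model learns these during training.
w_backbone = rng.normal(size=(8, 4))
w_presence = rng.normal(size=8)
w_length = rng.normal(size=8)

tile = rng.normal(size=(2, 80, 80))            # dual-band 80x80 SAR tile
feats = shared_features(tile, w_backbone)      # shared representation
presence = 1.0 / (1.0 + np.exp(-(w_presence @ feats)))  # classification head
length_m = float(w_length @ feats)                      # regression head
```

The point of the shared backbone is that both tasks regularize each other: features useful for judging "is this a vessel?" are also useful for "how long is it?".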

\n

Fishing and non-fishing classification

\n

To identify whether a detected vessel is a fishing or non-fishing vessel, we use a machine learning model. For this classification task we used a ConvNeXt architecture modified to process two inputs: the estimated length of the vessel from SAR (a scalar quantity) and a stack of environmental rasters centered at the vessel’s location (a multi-channel image). This multi-input, mixed-data, single-output model passes the raster stack (11 channels) through a series of convolutional layers and combines the resulting feature maps with the vessel length value to perform a binary classification: fishing or non-fishing.

\n

The environmental layers used to differentiate between fishing and non-fishing include:

\n
1. vessel density (based on SAR)
2. average vessel length (based on SAR)
3. bathymetry
4. distance from port
5. hours of non-fishing vessel presence, under 50 m (from AIS)
6. hours of non-fishing vessel presence, over 50 m (from AIS)
7. average surface temperature
8. average current speed
9. standard deviation of daily temperature
10. standard deviation of daily current speed
11. average chlorophyll
\n

AIS matching and vessel identity

\n

Automatic identification system (AIS) data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to use AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. In addition, large “blind spots” arise along coastal waters, either because some nations restrict access to AIS data captured by terrestrial receivers rather than satellites, or because reception is poor in areas with high vessel density and low-quality AIS devices. Unmatched SAR detections therefore provide the missing information about vessel traffic in the ocean.

\n

SAR and AIS matching

\n

Matching SAR detections to vessels’ GPS coordinates from the automatic identification system (AIS) is challenging because the timestamps of the SAR images and the AIS records do not coincide, and a single AIS message can potentially match multiple vessels appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific SAR detection, we followed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were developed from one year of global AIS data from the Global Fishing Watch pipeline, which uses Spire Global and Orbcomm sources of satellite data, comprising roughly 10 billion vessel positions, and were computed for six different vessel classes, considering six different speeds and 36 time intervals. From these rasters we obtain the likely position of a vessel that could match a SAR detection, given the vessel’s class, speed and time interval.
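The raster lookup can be sketched as follows; the bin edges, dictionary layout and function name are illustrative assumptions, not the pipeline's actual data structures:

```python
import numpy as np

# Illustrative bin edges; the pipeline's actual bins are not reproduced here.
SPEED_BINS_KN = [0, 2, 4, 6, 8, 10]     # six speed classes (knots)
DT_BINS_MIN = np.arange(0, 180, 5)      # 36 time intervals (minutes)

def match_score(raster_bank, vessel_class, speed_kn, dt_min, offset_xy):
    """Look up the probability that a vessel of a given class, speed and
    time gap sits at a pixel offset (dx, dy) from its last AIS position.
    `raster_bank` is a hypothetical dict keyed by (class, speed bin,
    time bin); each value is a 2-D probability raster centered on the
    AIS position."""
    s = int(np.digitize(speed_kn, SPEED_BINS_KN)) - 1
    t = int(np.digitize(dt_min, DT_BINS_MIN)) - 1
    raster = raster_bank[(vessel_class, s, t)]
    cy, cx = raster.shape[0] // 2, raster.shape[1] // 2
    dx, dy = offset_xy
    return raster[cy + dy, cx + dx]

# Minimal usage: one synthetic raster for (fishing, speed bin 1, time bin 2).
raster = np.zeros((11, 11))
raster[3, 6] = 0.7
bank = {("fishing", 1, 2): raster}
score = match_score(bank, "fishing", speed_kn=3.0, dt_min=12.0,
                    offset_xy=(1, -2))
```

Scoring every (detection, AIS track) pair this way yields a likelihood matrix from which the best one-to-one assignment can be chosen.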

\n


Resources, code and other notes

\n

All code developed in this study for SAR detection, deep learning models, and analyses is open source and freely available at https://github.com/GlobalFishingWatch/paper-industrial-activity.

\n

Source data and citations

\n

All vessel data are freely available through the Global Fishing Watch data portal at https://globalfishingwatch.org. All data to reproduce our supporting scientific paper can be downloaded from https://doi.org/10.6084/m9.figshare.24309475 (statistical analyses and figures) and https://doi.org/10.6084/m9.figshare.24309469 (model training and evaluation).

\n

License

\n

Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us.", "schema": { "id": "id", "lat": "lat", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "Panama VMS (Public Non fishing vessels)", - "description": "Dataset for VMS Panama - Carriers (Public)", + "name": "Panama VMS (Public Fishing Vessels)", + "description": "Dataset for VMS Panama (Public)", "schema": { "id": "id", "selfReportedInfo": "selfReportedInfo", diff --git a/libs/i18n-labels/pt/datasets.json b/libs/i18n-labels/pt/datasets.json index ff4a476694..c7dd2fec1d 100644 --- a/libs/i18n-labels/pt/datasets.json +++ b/libs/i18n-labels/pt/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "SAR with Neural classification", - "description": "SAR", + "description": "

Overview

\n

Satellite synthetic aperture radar (SAR) is a spaceborne radar imaging system that can detect at-sea vessels and structures in any weather conditions. Microwave pulses are transmitted by a satellite-based antenna toward the Earth's surface, and the microwave energy scattered back to the spacecraft is measured and integrated to form a “backscatter” image. The SAR image contains rich information about objects on the water, such as their size, orientation and texture. SAR imaging systems operate across most weather conditions and illumination levels: they see through clouds and rain because microwaves penetrate clouds, and they work in daylight or darkness because radar is an “active” sensor that transmits and records its own energy. This gives SAR an advantage over “passive” satellite sensors such as electro-optical imagery, in which a satellite-based camera records the sunlight reflected from, or infrared radiation emitted by, objects on the ground; the latter can be confounded by cloud cover, haze, weather events and seasonal darkness at high latitudes.

\n

Use cases

\n\n

Limitations

\n\n

Methods

\n

SAR imagery

\n

We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.

\n

[1] https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf

\n

[2] https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario

\n

Detection footprints

\n

Detection footprints are areas within each satellite scan (or scene) that our system uses to perform detections. These filters help keep relevant detections and exclude data that may be inaccurate. Detection footprints are smaller than the total scene because they exclude land areas and islands, a 500-meter buffer from the boundaries of the scene, and a 1-kilometer buffer from shorelines.

\n

Filtering

\n

GFW has post-processed the SAR detections to reduce noise (false positives), remove offshore infrastructure from this layer focused on vessels, and exclude areas with sea ice at high latitudes.

\n

Vessel detection by SAR

\n

Detecting vessels with SAR is based on an algorithm known as Constant False Alarm Rate (CFAR), a thresholding algorithm used for anomaly detection in radar imagery. This algorithm searches for pixel values that are unusually bright (the targets) compared to those in the surrounding area (the sea clutter). It sets a threshold based on the pixel values of the local background (within a window), scanning the whole image pixel by pixel. Pixel values above the threshold constitute an anomaly, are likely to be samples from a target, and are therefore included as a detection.

\n

Vessel presence and length estimation

\n

To estimate the length of every detected object and also to identify when our CFAR algorithm made false detections, we designed a deep convolutional neural network (ConvNet) based on the modern ResNet (Residual Networks) architecture. This single-input/multi-output ConvNet takes dual-band SAR image tiles of 80 by 80 pixels as input, and outputs the probability of object presence (known as a “binary classification task”) and the estimated length of the object (known as a “regression task”).

\n

Fishing and non-fishing classification

\n

To identify whether a detected vessel is a fishing or non-fishing vessel, we use a machine learning model. For this classification task we used a ConvNeXt architecture modified to process two inputs: the estimated length of the vessel from SAR (a scalar quantity) and a stack of environmental rasters centered at the vessel’s location (a multi-channel image). This multi-input, mixed-data, single-output model passes the raster stack (11 channels) through a series of convolutional layers and combines the resulting feature maps with the vessel length value to perform a binary classification: fishing or non-fishing.

\n

The environmental layers used to differentiate between fishing and non-fishing include:

\n
1. vessel density (based on SAR)
2. average vessel length (based on SAR)
3. bathymetry
4. distance from port
5. hours of non-fishing vessel presence, under 50 m (from AIS)
6. hours of non-fishing vessel presence, over 50 m (from AIS)
7. average surface temperature
8. average current speed
9. standard deviation of daily temperature
10. standard deviation of daily current speed
11. average chlorophyll
\n

AIS matching and vessel identity

\n

Automatic identification system (AIS) data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to use AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. In addition, large “blind spots” arise along coastal waters, either because some nations restrict access to AIS data captured by terrestrial receivers rather than satellites, or because reception is poor in areas with high vessel density and low-quality AIS devices. Unmatched SAR detections therefore provide the missing information about vessel traffic in the ocean.

\n

SAR and AIS matching

\n

Matching SAR detections to vessels’ GPS coordinates from the automatic identification system (AIS) is challenging because the timestamps of the SAR images and the AIS records do not coincide, and a single AIS message can potentially match multiple vessels appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific SAR detection, we followed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were developed from one year of global AIS data from the Global Fishing Watch pipeline, which uses Spire Global and Orbcomm sources of satellite data, comprising roughly 10 billion vessel positions, and were computed for six different vessel classes, considering six different speeds and 36 time intervals. From these rasters we obtain the likely position of a vessel that could match a SAR detection, given the vessel’s class, speed and time interval.

\n


Resources, code and other notes

\n

All code developed in this study for SAR detection, deep learning models, and analyses is open source and freely available at https://github.com/GlobalFishingWatch/paper-industrial-activity.

\n

Source data and citations

\n

All vessel data are freely available through the Global Fishing Watch data portal at https://globalfishingwatch.org. All data to reproduce our supporting scientific paper can be downloaded from https://doi.org/10.6084/m9.figshare.24309475 (statistical analyses and figures) and https://doi.org/10.6084/m9.figshare.24309469 (model training and evaluation).

\n

License

\n

Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us.", "schema": { "id": "id", "lat": "lat", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "Panama VMS (Public Non fishing vessels)", - "description": "Dataset for VMS Panama - Carriers (Public)", + "name": "Panama VMS (Public Fishing Vessels)", + "description": "Dataset for VMS Panama (Public)", "schema": { "id": "id", "selfReportedInfo": "selfReportedInfo", diff --git a/libs/i18n-labels/source/datasets.json b/libs/i18n-labels/source/datasets.json index de10dc721d..b2563c8be2 100644 --- a/libs/i18n-labels/source/datasets.json +++ b/libs/i18n-labels/source/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "SAR with Neural classification", - "description": "SAR", + "description": "

Overview

\n

Satellite synthetic aperture radar (SAR) is a spaceborne radar imaging system that can detect at-sea vessels and structures in any weather conditions. Microwave pulses are transmitted by a satellite-based antenna toward the Earth's surface, and the microwave energy scattered back to the spacecraft is measured and integrated to form a “backscatter” image. The SAR image contains rich information about objects on the water, such as their size, orientation and texture. SAR imaging systems operate across most weather conditions and illumination levels: they see through clouds and rain because microwaves penetrate clouds, and they work in daylight or darkness because radar is an “active” sensor that transmits and records its own energy. This gives SAR an advantage over “passive” satellite sensors such as electro-optical imagery, in which a satellite-based camera records the sunlight reflected from, or infrared radiation emitted by, objects on the ground; the latter can be confounded by cloud cover, haze, weather events and seasonal darkness at high latitudes.

\n

Use cases

\n\n

Limitations

\n\n

Methods

\n

SAR imagery

\n

We use SAR imagery from the Copernicus Sentinel-1 mission of the European Space Agency (ESA) [1]. The images are sourced from two satellites (S1A and S1B up until December 2021 when S1B stopped operating, and S1A only from 2022 onward) that orbit 180 degrees out of phase with each other in a polar, sun-synchronous orbit. Each satellite has a repeat-cycle of 12 days, so that together they provide a global mapping of coastal waters around the world approximately every six days for the period that both were operating. The number of images per location, however, varies greatly depending on mission priorities, latitude, and degree of overlap between adjacent satellite passes. Spatial coverage also varies over time [2]. Our data consist of dual-polarization images (VH and VV) from the Interferometric Wide (IW) swath mode, with a resolution of about 20 m.

\n

[1] https://sedas.satapps.org/wp-content/uploads/2015/07/Sentinel-1_User_Handbook.pdf

\n

[2] https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/observation-scenario

\n

Detection footprints

\n

Detection footprints are areas within each satellite scan (or scene) that our system uses to perform detections. These filters help keep relevant detections and exclude data that may be inaccurate. Detection footprints are smaller than the total scene because they exclude land areas and islands, a 500-meter buffer from the boundaries of the scene, and a 1-kilometer buffer from shorelines.

\n

Filtering

\n

GFW has post-processed the SAR detections to reduce noise (false positives), remove offshore infrastructure from this layer focused on vessels, and exclude areas with sea ice at high latitudes.

\n

Vessel detection by SAR

\n

Detecting vessels with SAR is based on an algorithm known as Constant False Alarm Rate (CFAR), a thresholding algorithm used for anomaly detection in radar imagery. This algorithm searches for pixel values that are unusually bright (the targets) compared to those in the surrounding area (the sea clutter). It sets a threshold based on the pixel values of the local background (within a window), scanning the whole image pixel by pixel. Pixel values above the threshold constitute an anomaly, are likely to be samples from a target, and are therefore included as a detection.

\n

Vessel presence and length estimation

\n

To estimate the length of every detected object and also to identify when our CFAR algorithm made false detections, we designed a deep convolutional neural network (ConvNet) based on the modern ResNet (Residual Networks) architecture. This single-input/multi-output ConvNet takes dual-band SAR image tiles of 80 by 80 pixels as input, and outputs the probability of object presence (known as a “binary classification task”) and the estimated length of the object (known as a “regression task”).

\n

Fishing and non-fishing classification

\n

To identify whether a detected vessel is a fishing or non-fishing vessel, we use a machine learning model. For this classification task we used a ConvNeXt architecture modified to process two inputs: the estimated length of the vessel from SAR (a scalar quantity) and a stack of environmental rasters centered at the vessel’s location (a multi-channel image). This multi-input, mixed-data, single-output model passes the raster stack (11 channels) through a series of convolutional layers and combines the resulting feature maps with the vessel length value to perform a binary classification: fishing or non-fishing.

\n

The environmental layers used to differentiate between fishing and non-fishing include:

\n
1. vessel density (based on SAR)
2. average vessel length (based on SAR)
3. bathymetry
4. distance from port
5. hours of non-fishing vessel presence, under 50 m (from AIS)
6. hours of non-fishing vessel presence, over 50 m (from AIS)
7. average surface temperature
8. average current speed
9. standard deviation of daily temperature
10. standard deviation of daily current speed
11. average chlorophyll
\n

AIS matching and vessel identity

\n

Automatic identification system (AIS) data can reveal the identity of vessels, their owners and corporations, and fishing activity. Not all vessels, however, are required to use AIS devices, as regulations vary by country, vessel size, and activity. Vessels engaged in illicit activities can also turn off their AIS transponders or manipulate the locations they broadcast. In addition, large “blind spots” arise along coastal waters, either because some nations restrict access to AIS data captured by terrestrial receivers rather than satellites, or because reception is poor in areas with high vessel density and low-quality AIS devices. Unmatched SAR detections therefore provide the missing information about vessel traffic in the ocean.

\n

SAR and AIS matching

\n

Matching SAR detections to vessels’ GPS coordinates from the automatic identification system (AIS) is challenging because the timestamps of the SAR images and the AIS records do not coincide, and a single AIS message can potentially match multiple vessels appearing in the image, and vice versa. To determine the likelihood that a vessel broadcasting AIS corresponded to a specific SAR detection, we followed a matching approach based on probability rasters of where a vessel is likely to be minutes before and after an AIS position was recorded. These rasters were developed from one year of global AIS data from the Global Fishing Watch pipeline, which uses Spire Global and Orbcomm sources of satellite data, comprising roughly 10 billion vessel positions, and were computed for six different vessel classes, considering six different speeds and 36 time intervals. From these rasters we obtain the likely position of a vessel that could match a SAR detection, given the vessel’s class, speed and time interval.

\n


Resources, code and other notes

\n

All code developed in this study for SAR detection, deep learning models, and analyses is open source and freely available at https://github.com/GlobalFishingWatch/paper-industrial-activity.

\n

Source data and citations

\n

All vessel data are freely available through the Global Fishing Watch data portal at https://globalfishingwatch.org. All data to reproduce our supporting scientific paper can be downloaded from https://doi.org/10.6084/m9.figshare.24309475 (statistical analyses and figures) and https://doi.org/10.6084/m9.figshare.24309469 (model training and evaluation).

\n

License

\n

Non-Commercial Use Only. The Site and the Services are provided for Non-Commercial use only in accordance with the CC BY-NC 4.0 license. If you would like to use the Site and/or the Services for commercial purposes, please contact us.", "schema": { "id": "id", "lat": "lat", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "Panama VMS (Public Non fishing vessels)", - "description": "Dataset for VMS Panama - Carriers (Public)", + "name": "Panama VMS (Public Fishing Vessels)", + "description": "Dataset for VMS Panama (Public)", "schema": { "id": "id", "selfReportedInfo": "selfReportedInfo", diff --git a/libs/i18n-labels/val/datasets.json b/libs/i18n-labels/val/datasets.json index 40844d2eb5..cd3fc84de7 100644 --- a/libs/i18n-labels/val/datasets.json +++ b/libs/i18n-labels/val/datasets.json @@ -2264,7 +2264,7 @@ }, "public-global-sar-presence": { "name": "crwdns69930:0crwdne69930:0", - "description": "crwdns70284:0crwdne70284:0", + "description": "crwdns70350:0crwdne70350:0", "schema": { "id": "crwdns66489:0crwdne66489:0", "lat": "crwdns66491:0crwdne66491:0", @@ -2903,8 +2903,8 @@ } }, "public-panama-vessel-identity-fishing": { - "name": "crwdns70346:0crwdne70346:0", - "description": "crwdns70348:0crwdne70348:0", + "name": "crwdns70352:0crwdne70352:0", + "description": "crwdns70354:0crwdne70354:0", "schema": { "id": "crwdns69176:0crwdne69176:0", "selfReportedInfo": "crwdns69178:0crwdne69178:0",