This guide traces through what happens when you load a new tile. At a high level the processing consists of 3 parts:
- Event loop responds to user interaction and updates the internal state of the map (current viewport, camera angle, etc.)
- Tile loading asynchronously fetches tiles, images, fonts, etc. needed by the current state of the map
- Render loop renders the current state of the map to the screen
Ideally the event loop and render loop run at 60 frames per second, and all of the heavy work of tile loading happens asynchronously inside a web worker.
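Before diving into the details, here is a rough sketch of how those three parts fit together. The names are hypothetical (this is not the library's actual code); it only illustrates the shape of the loop: DOM events mutate state and request a repaint, repaints are coalesced onto requestAnimationFrame, and tile loading is kicked off asynchronously so a frame never blocks on the network.

```typescript
type MapState = {center: [number, number]; zoom: number};

class MiniMap {
    state: MapState = {center: [0, 0], zoom: 0};
    private frameRequested = false;

    constructor(container: HTMLElement) {
        // Event loop: DOM events update internal state, then schedule a render.
        container.addEventListener('wheel', (e) => {
            this.state.zoom -= e.deltaY * 0.01;
            this.triggerRepaint();
        });
    }

    triggerRepaint() {
        if (this.frameRequested) return;   // coalesce many events into one frame
        this.frameRequested = true;
        requestAnimationFrame(() => {
            this.frameRequested = false;
            this.loadMissingTiles();       // kicks off async work, returns immediately
            this.render();                 // draw whatever is loaded right now
        });
    }

    loadMissingTiles() {
        // Tile loading: fetching and parsing happen asynchronously (mostly in a
        // web worker); when a tile arrives, another repaint is requested.
    }

    render() {
        // Render loop: draw the current state with whatever tiles are available.
    }
}
```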
```mermaid
sequenceDiagram
actor user
participant DOM
participant handler_manager
participant handler
participant camera
participant transform
participant map
user->>camera: map#setCenter, map#panTo
camera->>transform: update
camera->>map: fire move event
map->>map: _render()
user->>DOM: resize, pan,<br>click, scroll,<br>...
DOM->>handler_manager: DOM events
handler_manager->>handler: forward event
handler-->>handler_manager: HandlerResult
handler_manager->>transform: update
handler_manager->>map: fire move event
map->>map: _render()
```
- Transform holds the current viewport details (pitch, zoom, bearing, bounds, etc.). Two places in the code update transform directly:
- Camera (parent class of Map) in response to explicit calls to Camera#panTo, Camera#setCenter
- HandlerManager in response to DOM events. It forwards those events to interaction processors that live in src/ui/handler, which accumulate a merged HandlerResult. That result kicks off a render frame loop that decreases the inertia and nudges map.transform by that amount on each frame from HandlerManager#_updateMapTransform(). The loop continues until the inertia decreases to 0 (see the sketch after this list).
- Both camera and handler_manager are responsible for firing move, zoom, movestart, moveend, ... events on the map after they update transform. Each of these events (along with style changes and data load events) triggers a call to Map#_render() which renders a single frame of the map.
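A stripped-down sketch of that inertia loop is shown below. The names and the decay factor are hypothetical; the real logic lives in HandlerManager#_updateMapTransform() and the handlers under src/ui/handler.

```typescript
// Sketch of the render frame loop driven by a HandlerResult: each frame the
// remaining delta nudges the transform, a move event is fired (which triggers
// Map#_render()), and the delta decays until it is effectively zero.
type HandlerResult = {panDelta: [number, number]};

function runInertiaLoop(
    result: HandlerResult,
    nudgeTransform: (dx: number, dy: number) => void,
    fireMoveEvent: () => void
): void {
    let [dx, dy] = result.panDelta;
    const decay = 0.85; // assumed per-frame decay factor, not the library's actual value

    const frame = () => {
        nudgeTransform(dx, dy);   // update map.transform by this frame's amount
        fireMoveEvent();          // listeners (including Map#_render) react to the change
        dx *= decay;
        dy *= decay;
        if (Math.hypot(dx, dy) > 0.1) {
            requestAnimationFrame(frame); // keep going until the inertia dies out
        }
    };
    requestAnimationFrame(frame);
}
```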
```mermaid
%%{init: { 'sequence': {'messageAlign': 'left', 'boxTextMargin': 5} }}%%
sequenceDiagram
participant map
participant source_cache
participant source
participant ajax
participant glyph manager
box rgba(128,128,128,0.1) worker
participant worker
participant worker_source
participant worker_tile
participant bucket
participant worker_ajax
end
map->>source_cache: update(transform)
source_cache->>source_cache: compute covering<br> tiles
source_cache->>source: loadTile() for each<br>missing tile
alt raster_tile_source
source->>ajax: getImage
else image_source
source->>ajax: getImage (once)
else raster_dem_tile_source
source->>ajax: getImage()
source->>worker: loadDEMTile()
worker->>worker: add 1px buffer
worker-->>source: DEMData
else vector_tile_source/geojson_source
source->>worker: loadTile()
worker->>worker_source: loadVectorTile()
alt vector_tile_source
worker_source->>worker_ajax: getArrayBuffer()
worker_source->>worker_source: decode pbf
worker_source->>worker_source: parse vector tile
else geojson_source
worker_source->>worker_ajax: getJSON()
worker_source->>worker_source: geojson-vt parse
worker_source->>worker_source: getTile()
end
worker_source->>worker_tile: parse()
loop for each "layer family"
worker_tile->>worker_tile: calculate layout<br>properties
worker_tile->>worker_tile: createBucket
worker_tile->>bucket: populate()
bucket->>bucket: compute triangles<br>needed by GPU<br>for each feature we<br>have data for
worker_tile->>glyph manager: getGlyphs
glyph manager->>ajax: Fetch font<br>PBFs
glyph manager->>glyph manager: TinySDF
worker_tile->>glyph manager: getImages
glyph manager->>ajax: Fetch icon<br>images
glyph manager-->>worker_tile: glyph/Image dependencies
worker_tile->>worker_tile: wait for all requests to finish
worker_tile->>worker_tile: create GlyphAtlas
worker_tile->>worker_tile: create ImageAtlas
worker_tile->>bucket: addFeatures
worker_tile->>bucket: performSymbolLayout
bucket->>bucket: place characters
bucket->>bucket: compute collision<br/>boxes
bucket->>bucket: compute triangles<br/>needed by GPU
end
worker_tile-->>source: callback(bucket, featureIndex, collision boxes, GlyphAtlas, ImageAtlas)
source->>source: loadVectorData()<br/>decode response
end
source-->>source_cache: Tile
source_cache-->>source_cache: _backfillDEM()<br/>copy 1px buffer<br/>from neighboring tiles
source->>source: fire('data', {<br/>dataType: 'source'<br>})
source->>source_cache:<br>
source_cache->>map:<br>
map->map: fire('sourcedata')
map->map: render new frame
```
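The diagram crosses the main-thread/worker boundary several times. Stripped of the Actor abstraction the library uses for this (src/util/actor), the round trip looks roughly like the sketch below; the message shapes and file names are hypothetical.

```typescript
// --- main thread: send a request to the worker and remember its callback ---
type Callback = (err: Error | null, result?: unknown) => void;

const callbacks = new Map<number, Callback>();
let nextId = 0;
const worker = new Worker('worker.js'); // hypothetical worker bundle name

function send(type: string, data: unknown, callback: Callback): void {
    const id = nextId++;
    callbacks.set(id, callback);
    worker.postMessage({id, type, data});
}

worker.onmessage = ({data: {id, error, result}}) => {
    // resume whoever was waiting, e.g. the source's loadTile callback
    callbacks.get(id)?.(error ?? null, result);
    callbacks.delete(id);
};

// e.g. something shaped like VectorTileSource#loadTile would do:
send('loadTile', {url: 'https://example.com/tiles/4/3/7.pbf'}, (err, vectorData) => {
    // tile.loadVectorData(vectorData, ...) and fire 'data' events from here
});

// --- inside worker.js (sketch): handle the message and reply with the same id ---
// self.onmessage = async ({data: {id, type, data}}) => {
//     const result = await handleMessage(type, data); // fetch, decode pbf, build buckets
//     self.postMessage({id, result});
// };
```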
Map#_render() works in 2 different modes based on the value of Map._sourcesDirty. When Map._sourcesDirty === true, it starts by asking each source if it needs to load any new data:
- Call SourceCache#update(transform) on each map data source. This computes the ideal tiles that cover the current viewport and requests missing ones. When a tile is missing, it searches child/parent tiles to find the best alternative to show while the ideal tile is loading (a simplified sketch of this fallback appears after this list).
- Call Source#loadTile(tile, callback) on each source to load the missing tile. Each source implements this differently:
    - RasterTileSource#loadTile just kicks off a getImage request using src/util/image_request, which keeps a queue of pending requests and limits the number of in-progress requests.
    - RasterDEMTileSource#loadTile starts off the same way, fetching the image, but then it sends the bytes in a loadDEMTile message to a worker to process before returning the results. Getting pixels from the image response requires drawing it to a canvas and reading the pixels back. This can be expensive, so when the browser supports OffscreenCanvas, do that in the worker; otherwise do it here before sending.
        - [in web worker] RasterDEMTileWorkerSource#loadTile loads the raw RGB data into a DEMData instance. This copies edge pixels out to a 1px border to avoid edge artifacts and passes the result back to the main thread.
    - VectorTileSource#loadTile sends a loadTile or reloadTile message to a worker:
        - [in web worker] Worker#loadTile handles the message and passes it to VectorTileWorkerSource#loadTile
            - Calls VectorTileWorkerSource#loadVectorTile, which uses
                - ajax#getArrayBuffer() to fetch raw bytes
                - pbf to decode the protobuf, then
                - @mapbox/vector-tile#VectorTile to parse the vector tile.
                - The result goes into a new WorkerTile instance.
            - Calls WorkerTile#parse() and caches the result in the worker by tile ID:
                - For each vector tile source layer, for each currently visible style layer that depends on that source layer ("layer family"):
                    - Calculate layout properties (recalculateLayers)
                    - Call style.createBucket, which delegates to a bucket type in src/data/bucket/*; these are subclasses of src/data/bucket
                    - Call Bucket#populate() with the features from this source layer in the vector tile. This precomputes all the data that the main thread needs to load into the GPU to render each frame (i.e. buffers containing the vertices of all the triangles that compose the shape)
                    - Most layer types just store triangulated features on that first pass, but some layers have data dependencies, so they ask the main thread for:
                        - Font PBFs (getGlyphs), handled by GlyphManager on the main thread, which serves as a global cache for glyphs we've already retrieved. When one is missing it either uses tinysdf to render the character on a canvas, or makes a network request to load the font PBF file for the range that contains the missing glyph.
                        - Icons and patterns (getImages({type: 'icon' | 'pattern'})), handled by ImageManager on the main thread, which caches images we've already fetched and fetches them if missing
                - When all data dependencies are available (WorkerTile#maybePrepare()), create a new GlyphAtlas and ImageAtlas that pack the used font glyphs and icon/pattern images into a square atlas image that can be loaded into the GPU (using potpack). Then call StyleLayer#recalculate() on each layer that was waiting for a data dependency and:
                    - Call addFeatures on each bucket waiting for a pattern
                    - Call src/symbol/symbol_layout#performSymbolLayout() for each bucket waiting for symbols. This computes text layout properties for the zoom level, places each individual symbol based on character shapes and font layout parameters, and stores the triangulated symbol geometries. It also computes the collision boxes that will be used to decide which labels to show to avoid collisions.
                - Pass the buckets, featureIndex, collision boxes, glyphAtlasImage, and imageAtlas back to the main thread
    - GeojsonSource#loadTile() also sends a loadTile or reloadTile message to a worker. The handling is almost exactly the same as for a vector tile, except that GeojsonWorkerSource extends VectorTileWorkerSource and overrides loadVectorData so that, instead of making a network request and parsing the PBF, it loads the initial geojson data into geojson-vt and calls its getTile method to get vector tile data from the geojson for each tile the main thread needs.
    - ImageSource#loadTile() computes the most-zoomed-in tile that contains the entire bounds of the image being rendered and only returns success if the main thread is requesting that tile (the image was already requested when the layer was added to the map).
- When the vector source (geojson/vector tile) responses get back to the main thread, it calls Tile#loadVectorData with the result, which deserializes and stores the buckets for each style layer and the image/glyph atlases, and lazy-loads the RTL text plugin if this is the first tile to contain RTL text.
- Back up in SourceCache, now that it has the loaded tile:
    - SourceCache#_backfillDEM copies the edge pixels to and from all neighboring tiles so that there are no rendering artifacts when each tile computes the slope up to the very edge of the tile.
    - Fire a data {dataType: 'source'} event on the source, which bubbles up to SourceCache, Style, and Map, which translates it to a sourcedata event and also calls Map#_update(), which calls Map#triggerRepaint() and then Map#_render(), rendering a new frame just like when user interaction triggers a transform change.
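Here is the simplified parent-tile fallback sketch referenced in the SourceCache#update step above. The names are hypothetical and the real search also considers child tiles, zoom limits, and the set of retained tiles, but the coordinate math is the core of it: a tile's parent at zoom z-1 is found by halving x and y.

```typescript
type TileID = {z: number; x: number; y: number};

const key = ({z, x, y}: TileID) => `${z}/${x}/${y}`;

function findLoadedFallback(ideal: TileID, loaded: Set<string>): TileID | null {
    let {z, x, y} = ideal;
    while (z > 0) {
        z -= 1;
        x = Math.floor(x / 2); // the parent tile covers a 2x2 block of children
        y = Math.floor(y / 2);
        if (loaded.has(key({z, x, y}))) return {z, x, y};
    }
    return null;
}

// e.g. with only 3/2/1 loaded, an ideal tile 5/11/7 falls back to its grandparent:
// findLoadedFallback({z: 5, x: 11, y: 7}, new Set(['3/2/1']))  ->  {z: 3, x: 2, y: 1}
```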
```mermaid
sequenceDiagram
participant map
participant style
participant painter
participant layer
participant source_cache
participant GPU
actor user
map->>style: update(transform)
style->>layer: recalculate()
layer->>layer: recompute<br>paint properties
map->>source_cache: update(transform)
source_cache->>source_cache: fetch new tiles
map->>painter: render(style)
painter->>source_cache: prepare(context)
loop for each tile
source_cache->>GPU: upload vertices
source_cache->>GPU: upload image textures
end
loop for each layer
painter->>layer: renderLayer(pass=offscreen)
painter->>layer: renderLayer(pass=opaque)
painter->>layer: renderLayer(pass=translucent)
painter->>layer: renderLayer(pass=debug)
loop renderLayer() call for each tile
layer->>GPU: load program
layer->>GPU: drawElements()
GPU->>user: display pixels
end
end
map->>map: triggerRepaint()
```
When map._sourcesDirty === false, map#_render() just renders a new frame entirely within the main UI thread:
- Recompute "paint properties" based on the current zoom and current transition status by calling Style#update() with the new transform. This calls
recalculate()
on each style layer to compute the new paint properties. - Fetch new tiles by calling SourceCache#update(transform) (see above)
- Call Painter#render(style) with the current style
    - Calls SourceCache#prepare(context) on each source
        - Then for each tile in the source:
            - Call Tile#upload(context), which calls Bucket#upload(context) on the bucket for each layer in the tile, uploading all of the vertex attributes needed for rendering to the GPU.
            - Call Tile#prepare(imageManager), which uploads image textures (patterns, icons) for this tile to the GPU.
    - Make 4 passes over each layer, calling renderLayer() on the src/render/draw_* file for each kind of layer:
        - The offscreen pass uses the GPU to precompute and cache data to an offscreen framebuffer for custom, hillshading, and heatmap layers. Hillshading precomputes slope on the GPU, and heatmap accumulates point density into an offscreen texture before it is colorized.
        - The opaque pass renders fill and background layers that have no transparency, from top to bottom.
        - The translucent pass renders all other layers from bottom to top.
        - The debug pass renders debug collision boxes, tile boundaries, etc. on top.
    - Each renderLayer() call loops through each visible tile to paint. For each tile it binds textures, uses a vertex and fragment shader program defined in src/shaders, and calls Program#draw(), which sets the GPU configuration for the program, sets the uniforms the program needs, and calls gl.drawElements(), which actually renders the layer on the screen for that tile.
- Finally, trigger another repaint if there is any more rendering to do. Otherwise fire an idle event because the map is done loading.
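Putting the steps above together, a condensed sketch of one frame might look like the following. The interfaces are hypothetical; the real logic is spread across Map#_render(), Style#update(), SourceCache, and Painter#render().

```typescript
type Pass = 'offscreen' | 'opaque' | 'translucent' | 'debug';

interface Layer { recalculate(zoom: number): void; render(pass: Pass): void; }
interface SourceCache { update(): void; prepare(): void; }

function renderFrame(opts: {
    sourcesDirty: boolean;
    zoom: number;
    layers: Layer[];
    sourceCaches: SourceCache[];
    stillTransitioning: boolean;
    triggerRepaint: () => void;
    fire: (event: 'idle') => void;
}) {
    // 1. Recompute paint properties for the current zoom/transition state.
    for (const layer of opts.layers) layer.recalculate(opts.zoom);

    // 2. Ask each source for missing tiles only when something changed.
    if (opts.sourcesDirty) {
        for (const cache of opts.sourceCaches) cache.update();
    }

    // 3. Upload tile data to the GPU, then draw each pass.
    for (const cache of opts.sourceCaches) cache.prepare();
    const passes: Pass[] = ['offscreen', 'opaque', 'translucent', 'debug'];
    for (const pass of passes) {
        // opaque is drawn top-to-bottom and translucent bottom-to-top (not shown here)
        for (const layer of opts.layers) layer.render(pass);
    }

    // 4. Keep animating, or announce that the map has settled.
    if (opts.stillTransitioning) opts.triggerRepaint();
    else opts.fire('idle');
}
```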