/date | string | yes | Date and time in RFC 2822 format. |
+| /package/init-cfg | dict | no | Default package configuration in CONFIG DB format. Defaults to {} |
+| /package/debug-dump | string | no | A command to be executed during system dump. |
+| /service | object | yes | Service management related properties. |
+| /service/name | string | yes | Name of the service. There could be two packages, e.g. fpm-quagga and fpm-frr, with the same service name "bgp". In such cases each one has to declare the other service in "conflicts". |
+| /service/requires | list of strings | no | List of SONiC services the application requires. The option maps to systemd's unit "Requires=". |
+| /service/requisite | list of strings | no | List of SONiC services that are requisite for this package. The option maps to systemd's unit "Requisite=". |
+| /service/wanted-by | list of strings | no | List of SONiC services that want this package. The option maps to systemd's unit "WantedBy=". |
+| /service/after | list of strings | no | Boot order dependency. List of SONiC services the application is set to start after on system boot. |
+| /service/before | list of strings | no | Boot order dependency. List of SONiC services the application is set to start before on system boot. |
+| /service/delayed | boolean | no | Whether to generate a timer to delay the service on boot. Defaults to false. |
+| /service/dependent-of | list of strings | no | List of SONiC services this application is dependent of. Specifying a service X in this option will regenerate the /usr/local/bin/X.sh script and update the "DEPENDENT" list with this package's service. This option is warm-restart related: a warm restart of service X will not trigger a restart of this package's service. On the other hand, this package's service will be started, stopped and restarted together with service X. Example: for "dhcp-relay", "radv" and "teamd" this field will have the "swss" service in the list. |
+| /service/post-start-action | string | no | Path to an executable inside the Docker image filesystem to be executed after container start. A package may use this field in case a systemd service should not reach started state before some condition. E.g.: a database service should not reach started state before the redis process is ready. Since there is no control over when the redis process will start, a "post-start-action" script may execute "redis-cli ping" until the ping is successful. |
+| /service/pre-shutdown-action | string | no | Path to an executable inside the Docker image filesystem to be executed before the container stops. A use case is to execute a warm-shutdown preparation script. A script that sends SIGUSR1 to teamd to initiate warm shutdown is one such example. |
+| /service/host-service | boolean | no | Multi-ASIC field. Whether the service should run in the host namespace. Default is True. |
+| /service/asic-service | boolean | no | Multi-ASIC field. Whether the service should run per ASIC namespace. Default is False. |
+| /service/warm-shutdown/ | object | no | Warm reboot related properties. Used to generate the warm-reboot script. |
+| /service/warm-shutdown/after | list of strings | no | Warm shutdown order dependency. List of SONiC services the application is set to stop after on warm shutdown. Example: "bgp" may specify "radv" in this field in order to avoid radv announcing departure and causing hosts to lose their default gateway. *NOTE*: Putting "radv" here does not mean "radv" has to be installed, as there is no such dependency for the "bgp" package. |
+| /service/warm-shutdown/before | list of strings | no | Warm shutdown order dependency. List of SONiC services the application is set to stop before on warm shutdown. Example: the "teamd" service has to stop before "syncd" but after "swss" to be able to send the last LACP PDU through the CPU port right before the CPU port becomes unavailable. |
+| /service/fast-shutdown/ | object | no | Fast reboot related properties. Used to generate the fast-reboot script. |
+| /service/fast-shutdown/after | list of strings | no | Same as for warm-shutdown. |
+| /service/fast-shutdown/before | list of strings | no | Same as for warm-shutdown. |
+| /processes | object | no | Processes information. |
+| /processes/[name]/reconciles | boolean | no | Whether the process performs warm-boot reconciliation that the warmboot-finalizer service has to wait for. Defaults to False. |
+| /container | object | no | Container related properties. |
+| /container/privileged | boolean | no | Start the container in privileged mode. Later versions of the manifest might extend container properties to include docker capabilities instead of privileged mode. Defaults to False. |
+| /container/volumes | list of strings | no | List of mounts for a container. The same syntax as used for the '-v' parameter of "docker run". Example: "\:\:\". Defaults to []. |
+| /container/mounts | list of objects | no | List of mounts for a container. Defaults to []. |
+| /container/mounts/[id]/source | string | yes | Source for mount |
+| /container/mounts/[id]/target | string | yes | Target for mount |
+| /container/mounts/[id]/type | string | yes | Type for mount. See docker mount types. |
+| /container/tmpfs | list of strings | no | Tmpfs mounts. Defaults to [] |
+| /container/environment | dict | no | Environment variables for Docker container (key=value). Defaults to {}. |
+| /processes | list | no | A list defining processes running inside the container. |
+| /cli | object | no | CLI plugin information. *NOTE*: This will later be deprecated and replaced with a YANG module file path. |
+| /cli/mandatory | boolean | no | Whether CLI is a mandatory functionality for the package. Default: False. |
+| /cli/show-cli-plugin | string | no | A path to a plugin for sonic-utilities show CLI command. |
+| /cli/config-cli-plugin | string | no | A path to a plugin for sonic-utilities config CLI command. |
+| /cli/clear-cli-plugin | string | no | A path to a plugin for sonic-utilities sonic-clear CLI command. |
+
+
diff --git a/doc/sonic-application-extention/sonic-application-extention-hld.md b/doc/sonic-application-extension/sonic-application-extention-hld.md
similarity index 100%
rename from doc/sonic-application-extention/sonic-application-extention-hld.md
rename to doc/sonic-application-extension/sonic-application-extention-hld.md
diff --git a/doc/sonic-application-extention/sonic-versioning-strategy.md b/doc/sonic-application-extension/sonic-versioning-strategy.md
similarity index 100%
rename from doc/sonic-application-extention/sonic-versioning-strategy.md
rename to doc/sonic-application-extension/sonic-versioning-strategy.md
diff --git a/doc/sonic-build-system/build-enhancements.md b/doc/sonic-build-system/build-enhancements.md
new file mode 100644
index 0000000000..a86ae53750
--- /dev/null
+++ b/doc/sonic-build-system/build-enhancements.md
@@ -0,0 +1,561 @@
+
+
+# Build Improvements HLD
+
+#### Rev 0.2
+
+# Table of Contents
+
+- [List of Tables](#list-of-tables)
+- [Revision](#revision)
+- [Definition/Abbreviation](#definitionabbreviation)
+- [About This Manual](#about-this-manual)
+- [Introduction and Scope](#1-introduction-and-scope)
+ - [Current build infrastructure](#11-existingtools-limitation)
+ - [Benefits of this feature](#12-benefits-of-this-feature)
+- [Feature Requirements](#2-feature-requirements)
+ - [Functional Requirements](#21-functional-requirements)
+ - [Configuration and Management Requirements](#22-configuration-and-management-requirements)
+ - [Scalability Requirements](#23-scalability-requirements)
+ - [Warm Boot Requirements](#24-warm-boot-requirements)
+- [Feature Description](#3-feature-description)
+- [Feature Design](#4-feature-design)
+ - [Design Overview](#design-overview)
+ - [Multi user Build](#multi-user-build)
+ - [Version cache support](#version-cache-support)
+ - [Installer Image Build Optimization](#installer-image-build-optimization)
+- [Serviceability and Debug](#6-serviceability-and-debug)
+- [Warm reboot Support](#7-warm-reboot-support)
+- [Unit Test Cases ](#8-unit-test-cases)
+- [References ](#9-references)
+
+# List of Tables
+
+[Table 1: Abbreviations](#table-1-abbreviations)
+
+# Revision
+| Rev | Date | Author | Change Description |
+|:--:|:--------:|:-----------------:|:------------------------------------------------------------:|
+| 0.1 | | Kalimuthu Velappan | Initial version |
+
+
+# Definition/Abbreviation
+
+### Table 1: Abbreviations
+
+| **Term** | **Meaning** |
+| -------- | ----------------------------------------- |
+| DPKG | Debian Package |
+| DinD | Docker-in-Docker |
+| DooD | Docker-out-of-Docker |
+
+
+# About this Manual
+
+This document provides general information about the build improvements in SONiC.
+
+
+# Introduction and Scope
+
+This document describes the functionality and high-level design of the build improvements in SONiC.
+
+- The current SONiC build uses a container environment for generating the SONiC packages, docker container images and installer images with rootfs.
+- On every SONiC build, it downloads source code, binary packages, docker images and other tools and utilities from the external world and generates the build artifacts.
+- Inter-dependency between the targets can prevent build parallelism and add delay to the overall build time.
+- Nested docker containers slow down hardware resource access - CPU, memory, network and filesystem.
+
+
+This feature provides improvements in three essential areas.
+- Multi user build
+ - Parallel build using Native docker mode.
+ - OverlayFS to virtualize the build root.
+- Build time Optimization
+ - Parallel make jobs - Passing the dh '--parallel' flag to all the build targets.
+ - Binary image build optimization
+ - Use tmpfs and OverlayFS to speed up the build process.
+- Build caching
+ - Version cache - Package cache support for build components that are downloaded from the external world.
+ - Image cache support for installer image components.
+
+Reference:
+- Version caching feature is enhanced on top of DPKG caching and Versioning framework.
+Ref:
+ - https://github.com/Azure/SONiC/blob/master/doc/sonic-build-system/DPKG%20caching%20framework%20.ppt
+ - https://github.com/xumia/SONiC/blob/repd3/doc/sonic-build-system/SONiC-Reproduceable-Build.md
+
+# Feature Requirements
+ - This feature should support build improvements in the overall SONiC build.
+ - It enhances the build to run in a more parallel mode and optimizes the time-consuming build paths.
+
+## Functional Requirements
+
+The following requirements are addressed by the design presented in this document:
+
+- Multiuser mode support:
+ - Add a feature in the build infra to support the multiuser container build using native docker mode.
+ - Option to enable/disable the Native docker mode.
+ - Use a Jinja template to render the per-user SONiC Dockerfile (Dockerfile.j2).
+ - Use OverlayFS to virtualize the build root to resolve inter-target dependencies.
+
+- Build optimization:
+ - Build optimization for binary image generation.
+ - Pass the dh '--parallel' option to all the make targets.
+ - Add caching support for the binary image.
+ - Add support for build-time dependency with OverlayFS support.
+ - Use tmpfs and OverlayFS to speed up the per-target build process.
+
+- Caching Requirements:
+ - Sonic image is built by pulling binary and source components from various sources:
+   - Debian repos, python repos, docker repos, web repos, git modules and go module repos.
+ - Requires the flexibility to select different versions of a component.
+   - Sonic development is diverged into multiple development branches.
+   - Each development branch needs different versions of build components.
+   - Sonic moves to latest after every release.
+   - A release branch needs fixed versions of build components, as the prebuilt binary and source packages keep moving to the latest version.
+ - Requires caching/mirroring support.
+   - Component changes outside the SONiC repo cause frequent build failures.
+   - Unavailability of an external site causes dependency build failures.
+ - Flexibility to switch between fixed versions and latest versions.
+   - Different branches can freeze different sets of versions.
+   - Still, individual packages should be upgradable to selected versions.
+   - Version cache should be possible to enable/disable globally.
+
+
+## Configuration and Management Requirements
+
+NA
+
+## Scalability Requirements
+
+NA
+
+## Warm Boot Requirements
+
+NA
+
+
+# Feature Description
+
+This feature provides build improvements in SONIC.
+
+# Feature Design
+## Design Overview
+## Multi user Build
+### Native docker mode
+- Docker supports two modes of running a container.
+ - Docker-in-Docker(DinD) mode
+ - Native Docker or Docker-out-of-Docker(DooD) mode
+
+- Docker-in-Docker mode.
+ - Installing and running another Docker engine (daemon) inside a Docker container.
+ - Since Docker 0.6, a "privileged" option is available to run containers in a special mode with almost all capabilities of the host machine, including kernel features and device access.
+ - As a consequence, the Docker engine, as a privileged application, can run inside a Docker container itself.
+ - The Docker-in-Docker solution is not recommended, especially in containerized Jenkins systems, as potential problems include:
+   - The security profile of the inner Docker conflicts with that of the outer Docker.
+   - Incompatible file systems (e.g. AUFS inside a Docker container).
+ - Workarounds to address these problems:
+   - Container creation using dind docker solutions.
+   - To use AUFS in the inner Docker, promote /var/lib/docker to the inner docker.
+ - Apart from the security aspect, many performance penalties are involved, as the UnionFS/OverlayFS degrades performance when the number of lower layers grows.
+ - All child container resource usage is restricted within the parent container's usage.
+
+- Native docker mode.
+ - The DooD mode uses a socket file (/var/run/docker.sock) to communicate with the host dockerd daemon.
+ - It shares the socket file between the HOST and the container to run the build container.
+   - Eg: docker run -v /var/run/docker.sock:/var/run/docker.sock ...
+ - When a new docker container/builder/composer is invoked from a build container:
+   - It is started as a sibling of the build container.
+   - It runs in parallel with the build container.
+ - This mode provides better performance as it can utilize the full potential of the host machine.
+
+#### Build Container in SONiC:
+- The current SONiC build infrastructure generates all the SONiC build artifacts inside a docker container environment. When docker is isolated from the host CPU, the docker resource usage and filesystem access are restricted from their full capacity. Docker isolation is essential for application containers, but for build containers the more essential requirement is build performance rather than a stronger security model. Build performance is better when the build containers are run in native mode.
+- Sonic supports both modes of build container creation.
+- The native docker mode gives better performance, but it has a limitation:
+  - On a shared build server, SONiC docker creation by multiple users conflicts, as they share the same docker image name.
+- This feature addresses:
+  - SONiC docker container creation in parallel by multiple users.
+  - Since it runs as a sibling container, it provides better container performance.
+  - As it shares the host dockerd, it gives better performance, as multilevel UnionFS/OverlayFS is not needed.
+
+
+
+![ Native Docker Support ](images/sonic-native-docker-support.png)
+
+
+- Currently, the build dockers are created as user dockers (docker-base-stretch-, etc.) that are specific to each user.
+- But the SONiC dockers (docker-database, docker-swss, etc.) are created with a fixed docker name that is common to all users.
+
+ - docker-database:latest
+ - docker-swss:latest
+
+- When multiple builds are triggered on the same build server, this creates a parallel build issue, because all the build jobs try to create the same docker with the latest tag. This happens only when SONiC dockers are built using the native host dockerd for SONiC docker image creation.
+
+- This feature creates all SONiC dockers with a user tag.
+- While saving and loading the SONiC dockers, it renames them with the appropriate user tag.
+- Docker image load and save operations are protected with a global lock.
+- The user tag is created from a combination of the user name and the SHA ID of the docker control files (Dockerfile.j2, etc.).
+- A different user tag is generated for a different branch of the same user.
+
+- Docker image save sequence, protected with the lock, as below:
+ - docker_image_lock()
+ - docker tag docker-name-\:latest docker-name:latest
+ - docker save docker-name:latest > docker-name-\.gz
 - docker rmi docker-name:latest
+ - docker_image_unlock()
+
+- Docker image load sequence, protected with the lock, as below:
+ - docker_image_lock()
 - docker load -i docker-name-\.gz
+ - docker tag docker-name:latest docker-name-\:latest
 - docker rmi docker-name:latest
+ - docker_image_unlock()
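The lock-protected sequences above can be sketched in shell with flock; the lock helpers are generic, while the save function and its docker-name/user-tag arguments are illustrative placeholders, not names from the actual build system:

```shell
# Global lock via flock on a well-known lock file (path is an assumption).
LOCK_FILE=${LOCK_FILE:-/tmp/docker-image.lock}

docker_image_lock()   { exec 9>"$LOCK_FILE"; flock -x 9; }
docker_image_unlock() { flock -u 9; exec 9>&-; }

# Illustrative save path: tag the per-user image with the shared name,
# save it, then drop the shared tag so another user can take the lock.
save_user_docker() {   # save_user_docker <docker-name> <user-tag>
    docker_image_lock
    docker tag  "$1-$2:latest" "$1:latest"
    docker save "$1:latest" | gzip > "$1-$2.gz"
    docker rmi  "$1:latest"
    docker_image_unlock
}
```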
+
+- The user SONiC docker names are derived from the '_LOAD_DOCKERS' make variable; using a Jinja template, the FROM docker name is replaced with the correct user SONiC docker name when loading and saving the docker image.
+
+- The template processing covers only the common dockers and the Broadcom and VS platform dockers. For other vendor-specific dockers, the respective vendors need to add the support.
+
+### Target Specific Build Root
+
+- OverlayFS allows creating multiple virtual rootfs for target-specific builds.
+- Virtual build root - merging the container root (/) and the SONiC source (/sonic), mounted into a target-specific virtual build root using OverlayFS.
+- Use a tmpfs mount for better performance.
+- This avoids the target-specific uninstalls (UNINSTALLS) and improves parallel build performance.
+![Virtual Build Root](images/virtual-build-root.png)
 - \# mkdir -p BUILD
 - \# mount -t tmpfs -o size=1G tmpfs BUILD (optional - performance)
 - \# mkdir -p BUILD/work BUILD/sonic-buildimage
+ - \# mount -t overlay overlay -olowerdir=/,upperdir=/sonic/,workdir=BUILD/work BUILD/sonic-buildimage/
+ - \# chroot BUILD/sonic-buildimage/ /bin/bash
+ - bash# mount -t proc proc /proc
+ - bash# dpkg -i && make
+
+### Parallel Make
 - Propagate DEB_BUILD_OPTIONS='--parallel' to all its sub-targets.
 - Propagation of the parallel option to python pip install packages through ENV export.
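The exports above can be sketched as below; the exact variable set is an assumption, not the literal variables the SONiC makefiles use:

```shell
# Derive the job count from the host and export the parallel options so
# that debhelper-driven and pip-driven builds can pick them up.
NPROC=$(nproc)
export DEB_BUILD_OPTIONS="parallel=${NPROC}"
export MAKEFLAGS="-j${NPROC}"
```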
+
+## Version cache support
+
+### Version components
+
+- The SONiC build downloads lots of components from the external web, which include:
+ - Source code files
+ - Prebuilt debian packages
+ - Python PIP packages
+ - Git source code
+ - Docker images
+ - Go modules
+ - Other tools and utilities
+
+- These components are updated frequently, and the changes are dynamic in nature.
+- The versioning feature lets the SONiC build download/install a particular version of a package.
+- Versioning provides the ability to select a particular package version, but the package is still fetched from the external world.
+- When an external site is down, the selected package version is unavailable, or there is any other issue connecting to the external site or downloading the package, the SONiC build fails.
+- Due to this dynamic nature, every SONiC build might have to change its dependency chain.
+- Version files are stored in the files/build/versions folder in the below hierarchy.
+```
+files/build/versions/
+├── build
+│ ├── build-sonic-slave-buster
+│ │ ├── versions-deb-buster-amd64
+│ │ ├── versions-py2-buster-amd64
+│ │ └── versions-py3-buster-amd64
+├── default
+│ ├── versions-docker
+│ ├── versions-git
+│ └── versions-web
+├── dockers
+│ ├── docker-base-buster
+│ │ ├── versions-deb-buster-amd64
+│ │ ├── versions-py3-buster-amd64
+│ │ ├── versions-git
+│ │ ├── versions-web
+ ...
+```
+
+
+![Package Versioning](images/package-versioning.png)
+
+### Version Cache feature
+- The version cache feature allows the SONiC build system to cache all the source and binary packages and their dependencies in the local file system.
+- When the version cache feature is enabled, the local cache storage is checked first for the requested package; if it is available, it is loaded from the cache, else it is downloaded from the external web.
+
+![Version Caching](images/version-caching.png)
+
+### Build Version Design
+- Version control files are copied:
+ - To slave container for package build.
+ - To docker builder for sonic slave docker creation.
+ - To docker builder for sonic docker creation.
+ - To Rootfs for binary image generation.
+
+![ Build Version caching ](images/build-version-caching.png)
+
+- Based on the package version, the corresponding file is fetched from the cache if it exists.
+- Otherwise the file is downloaded from the web and the cache is updated with the newer version.
+- The version cache feature supports caching for the following build components.
+ - DPKG packages
+ - PIP packages
+ - Python packages
+ - Wget/Curl packages
+ - GO modules
+ - GIT modules
+ - Docker images
+
+#### Debian version cache
+
 - Debian packages are version controlled via a preference file that specifies each package and its corresponding version, as below.
+ - iproute==1.0.23
+
 - When a deb package gets installed, the package version is looked up in the version control file. If it matches, the package is installed with the version specified in the version control file.
 - During the package installation, the package is also saved into the cache path below.
   - /var/cache/apt/archives/
 - If the package is already available in the cache path, it is installed directly without downloading from the external site.
 - With version cache enabled, all cached packages are preloaded into the deb cache folder, so that any subsequent deb installation always uses the cached path.
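The preload step can be sketched as below; the paths and the package file name are illustrative stand-ins (a real build preloads into /var/cache/apt/archives/ before running apt-get/dpkg):

```shell
# Preload cached .deb files into apt's archive dir so installation
# does not hit the network. Paths and the package file are illustrative.
DEB_VCACHE=${DEB_VCACHE:-/tmp/vcache-deb}
APT_ARCHIVES=${APT_ARCHIVES:-/tmp/apt-archives}
mkdir -p "$DEB_VCACHE" "$APT_ARCHIVES"

# Stands in for a package previously saved by an installation run.
touch "$DEB_VCACHE/iproute2_1.0.23_amd64.deb"

cp "$DEB_VCACHE"/*.deb "$APT_ARCHIVES"/ 2>/dev/null || true
```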
+
+![ Debian Packages ](images/dpkg-version-caching.png)
+
+#### PIP version cache
 - PIP packages are version controlled via a constraint file that specifies each package and its corresponding version, as below.
+ - ipaddress==1.0.23
 - When a pip package gets installed, the package version is looked up in the version control file. If it matches, the package is installed with the version specified in the version control file.
 - During package installation, the package is also saved into the cache path, as below.
   - pip/http/a/4/6/b/7/a46b74c1407dd55ebf9eeb7eb2c73000028b7639a6ed9edc7981950c
 - If the package is already available in the pip cache path, it is installed directly without downloading from the external site.
 - With version cache enabled, all cached packages are preloaded into the pip cache folder, so that any subsequent pip installation always uses the cached path.
 - During pip installation, the cache path can be specified with the --cache-dir option, which stores the cache data in the specified directory, and the version constraint file is given via the --constraint option.
 - Pip vcache folders are created under the slave container name or the sonic container name, as appropriate.
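A minimal sketch of a constraint file and the corresponding invocation; the file contents, paths and package name are illustrative, and the pip command itself is shown as a comment rather than executed:

```shell
# Constraint file pins the version; --cache-dir points pip at the vcache.
mkdir -p /tmp/pip-vcache
cat > /tmp/constraints.txt <<'EOF'
ipaddress==1.0.23
EOF

# A real build would now run (illustrative, not executed here):
#   pip install --cache-dir /tmp/pip-vcache --constraint /tmp/constraints.txt ipaddress
```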
+
+![ Python Packages ](images/pip-version-caching.png)
+
+#### Python version cache
+ - Python packages are created via setup.py file.
+ - These packages and their dependencies listed in the setup.py are version controlled via SHA id of the package.
+ - During python package build, python uses setup.py to scan through the dependencies and prerequisties, and then downloads and install them into .eggs folder.
+ - If .eggs folders already exists, it will not reinstall the dependencies.
+ - With version cache enabled, it stores the .eggs files into vcache as a compressed tar file.
+ - Cache file name is formed using SHA value of setup.py.
+ - During package build, if .eggs file exist already, it loads the .eggs from vcache and proceeds with package build.
+
+![ Python Packages ](images/python-version-caching.png)
+
+#### Git clones
 - Git clone modules are version controlled via commit hash.
 - On a git clone attempt, the version control file (versions-git) is first checked for an entry for the attempted git clone (url):
   - If the entry is not present, the clone is downloaded from the external world, saved as a git bundle file into the vcache with the commit hash in its name, and the version control file is updated.
     Example: the cache file name is formed using the url and the commit hash:
     https://salsa.debian.org/debian/libteam.git-f8808df228b00873926b5e7b998ad8b61368d4c5.tgz
   - If the entry is present but the git bundle file is not available in the vcache, it is downloaded from the external world and saved into the vcache with the commit hash in its name.
   - If the entry is present and the git bundle file is available in the vcache, it gets loaded, unbundled and checked out at the specific commit.
 - If the git clone has any submodules, they are handled as well.
   - The submodules' git bundles are tarred along with the main bundle and stored in the vcache. On loading, this tar file is untarred first before unbundling and checking out each submodule's git bundle.
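The bundle round-trip above can be sketched as below; a throwaway repository stands in for the real upstream clone, and all paths and names are illustrative:

```shell
# Create a repo, bundle it with the commit hash in the file name, then
# restore a clone from the bundle and check out that exact commit.
rm -rf /tmp/vc-src /tmp/vc-clone
git init -q /tmp/vc-src
git -C /tmp/vc-src -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial"
COMMIT=$(git -C /tmp/vc-src rev-parse HEAD)

BUNDLE="/tmp/libexample.git-${COMMIT}.bundle"
git -C /tmp/vc-src bundle create "$BUNDLE" --all 2>/dev/null

git clone -q "$BUNDLE" /tmp/vc-clone
git -C /tmp/vc-clone checkout -q "$COMMIT"
```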
+
+
+
+![ GIT Modules ](images/git-module-version-caching.png)
+
+#### Docker Images
 - Docker images are version controlled via their SHA ID.
 - During docker image creation, the version control script gets executed.
 - The _PULL_DOCKER variable in the docker make rule indicates whether the docker needs to be downloaded from docker hub or not.
 - The version control script looks for a matching entry in the version control file.
   - If not present, it downloads the image, saves it into the vcache in gz format and updates the version control file. The cache filename is formed from the docker name combined with the SHA ID.
     Example: debian-stretch-sha256-7f2706b124ee835c3bcd7dc81d151d4f5eca3f4306c5af5c73848f5f89f10e0b.tgz
   - If present but not available in the cache, it downloads the image and saves it into the cache in gz format.
   - If present and the docker image is available in the cache, it preloads the docker image for container preparation.
+
+ ![ Docker Images ](images/docker-image-version-caching.png)
+
+
+#### Wget/Curl Packages
 - wget/curl packages are version controlled via the URL and the SHA ID of the package.
 - On a wget attempt, the version control file (versions-web) is first checked for an entry for the attempted url:
   - If the entry is not present, the package is downloaded from the external world, saved into the vcache with the SHA ID of the package in its name, and the version control file is updated.
     Example: the cache file name is formed using the url and the SHA ID:
     https://salsa.debian.org/debian/libteam.src.gz-f8808df228b00873926b5e7b998ad8b61368d4c5.tgz
   - If the entry is present but the package is not available in the vcache, it is downloaded from the external world and saved into the vcache.
   - If the entry is present and the package is also available in the vcache, it is copied from the vcache.
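The lookup can be sketched as a small helper; the cache layout, helper name and the wget fallback are assumptions, and only the cache-hit path is exercised below:

```shell
VCACHE_WEB=${VCACHE_WEB:-/tmp/vcache-web}
mkdir -p "$VCACHE_WEB"

# fetch_cached <url> <sha> <dest>: copy from the vcache when the
# url-derived cache file exists, otherwise download and populate it.
fetch_cached() {
    local cache="$VCACHE_WEB/$(basename "$1")-$2.tgz"
    if [ -f "$cache" ]; then
        cp "$cache" "$3"                       # cache hit
    else
        wget -q -O "$3" "$1" && cp "$3" "$cache"   # miss: download, then cache
    fi
}
```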
+
+![ Wget Packages ](images/web-version-caching.png)
+
+#### Go modules
+ - In SONiC, all the go modules are installed from go.mod file.
+ - HASH value is calculated from the following contents:
+ - go.mod
+ - Makefile
+ - Common Files
+ - ENV flags
 - All the go module files are cached as a directory structure instead of a compressed tar file, as this gives better performance when the number of files is large.
+ - Different directory hierarchy is created for each HASH value.
+ - If HASH matches, it uses rsync to sync the cached modules to GOPATH build directory.
+ - While storing/retrieving, the cache content is always protected with global lock.
+
+![ GO Modules ](images/go-module-version-caching.png)
+
+## Docker Build Version Caching
+
+- Each docker build is version controlled via
+ - Dockerfile.j2
+ - Makefile
 - Common files
+ - ENV flags
+- SHA value is calculated from version control files.
+- Cache file is created for each docker with the docker name and SHA value calculated.
+- Cache file contains the following:
+ - Debian packages
+ - pip packages
+ - wget packages
+ - git packages
+ - go modules
+- Version control script will place the cache file into appropriate location inside docker builder.
+- With version control enabled, the docker cache, if it already exists, gets loaded; otherwise it is created and updated.
+![ Docker Build Version Caching ](images/docker-build-version-caching.png)
+
+## Installer Image Build Optimization
+
+Installer image generation has six stages:
+
+ - Bootstrap generation
+ - ROOTFS installation
+ - SONiC packages installation
+ - SQUASHFS generation
+ - DockerFS generation
+ - Installer image generation
+
+
+
+
+### Image Preparation:
+- Split into two parts:
+ 1. Debian packages
+ - Bootstrap preparation
+ - General packages installation, such as curl, vim, sudo, python3, etc
+ 2. Sonic packages
 - Packages that are built and installed from the sonic repo.
 - Docker images that are built and installed from the sonic repo.
+
+- Step (1) can be generated as a base image and can be run in parallel with the other targets, before the build image step.
+ - Benefits:
+   - High hit rate, due to fewer dependencies.
+   - Reduces the cache size.
+   - Improves concurrency; when the cache is not hit, the step has few dependencies and can be run with any other steps.
+
+#### Bootstrap generation
 - Debian bootstrap package files are prepared using the debootstrap tool.
 - It downloads a set of bootstrap packages and generates the bootstrap filesystem.
 - Initially, it downloads all the packages, creates an image file from them and stores it into the version cache storage.
 - The image file is created with a specific filename and the HASH value.
 - The HASH value is calculated from the SHA value of the bootstrap control files, which include:
   - build_debian.sh
   - sonic_debian_extension.sh
   - Version files
   - Common makefiles and script utilities.
   - Env flags
 - On a subsequent build, if the calculated HASH matches an existing version cache filename, the bootstrap files are loaded from the cache.
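The HASH computation can be sketched as below, with dummy stand-ins for the control files; the exact hashing scheme and cache file naming are assumptions:

```shell
# Hash the concatenation of the control files; the result names the
# cache entry, e.g. bootstrap-<HASH>.tgz. File contents are dummies.
WORK=$(mktemp -d)
echo "debootstrap steps"   > "$WORK/build_debian.sh"
echo "extension steps"     > "$WORK/sonic_debian_extension.sh"
echo "iproute2==1.0.23"    > "$WORK/versions-deb"

HASH=$(cat "$WORK/build_debian.sh" \
           "$WORK/sonic_debian_extension.sh" \
           "$WORK/versions-deb" | sha1sum | awk '{print $1}')
echo "bootstrap-${HASH}.tgz"
```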
+
+
+#### Rootfs preparation
+![ Binary Image Generation ](images/binary-image-generation.png)
+
+- The rootfs filesystem is prepared on top of the bootstrap packages.
+- It is prepared by downloading the various open-source debian packages, tools and utilities that are needed for SONiC applications and installing them on top of the bootstrap fs.
+- The rootfs filesystem is created as an image file and cached as part of the version cache system.
+- The image file is created with the installer name and the HASH value.
+- The HASH value is calculated from the SHA value of the following files:
+ - build_debian.sh
+ - sonic_debian_extension.j2
+ - Common makefiles
+ - ENV flags
+- On a subsequent build, the rootfs is mounted from the image cache file if it exists in the version cache.
+- Version control is used to install the cached packages in one place.
+
+![ Binary Image Version Caching ](images/binary-image-version-caching.png)
+#### SONiC packages installation
+- Install all the sonic packages.
+- Host services, configuration and utilities are installed.
+
+#### SQUASHFS generation
+- SquashFS is a read-only filesystem created using the squashfs command.
+- It is a compressed version of the rootfs contents.
+
+#### dockerfs preparation
+- Dockerfs is created by importing all the SONiC docker images and tarring the /var/lib/docker folder.
+- The dockerfs directory is linked to a non-rootfs directory by mounting an external filesystem to the ROOTFS.
+- Parallel loading of dockers from the compressed gz file.
+
+#### Installer Image generation
+- Tar with pigz compression to get better compression speed as well as compression ratio.
+- Uses the config file to choose the different compression options.
+
+#### Parallel build option
+
+- The staged build provides a two-phase build:
+ - Phase 1 - Rootfs generation as part of other package generation.
+ - Phase 2 - Docker generation in parallel.
+
+# Make variables
+- The following make variables control the version caching feature.
+
+ - SONIC_VERSION_CONTROL_COMPONENTS= => Turn on/off the versioning
+ - SONIC_VERSION_CACHE_METHOD=cache=. => Turn on/off version caching
+ - SONIC_VERSION_CACHE_SOURCE= => Cache directory path
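As an illustration, a cached build could be invoked as below (the target name, variable values and cache path are illustrative, not prescriptive):

```
make SONIC_VERSION_CONTROL_COMPONENTS=all \
     SONIC_VERSION_CACHE_METHOD=cache \
     SONIC_VERSION_CACHE_SOURCE=/var/cache/sonic/vcache \
     target/sonic-broadcom.bin
```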
+
+# Version freeze
+- Versions are frozen weekly/periodically, with version migration to the latest.
+
+ - Build with SONIC_VERSION_CONTROL_COMPONENTS=none to generate the new set of package versions in the target.
+ - Run ‘make freeze’ to generate and merge the version changes into the source.
+ - Check-in the new version set with the source.
+
+
+# Cache cleanup
+
+- Recently used cache files are updated with a newer timestamp: the cache framework automatically touches used cache files to the current timestamp.
+- Touch marks a package as recently used, so files that are not recent can be cleaned up.
+ - `touch <cache-dir>/<package>.tgz`
+- Least-recently-used cache file cleanup command:
+
+```
+
+ find -name "*.tgz" ! -mtime -7 -exec rm {} \;
+
+ Where:
+ -mtime n => Files modified within the last n*24 hours.
+ -mtime -7 => Files modified within the last 7 days.
+ ! -mtime -7 => Files modified more than 7 days ago.
+
+```
+
+## Build Time Compression
+
+### PoC Build (Buster)
+- Build Config:
+ - Release: Buster
+ - Filesystem: Local
+ - CPU core: 40 Core
+ - DPKG_CACHE: Enabled
+ - VERSION_CACHE: Enabled
+- Build Time:
+ - 5 minutes (before: >40 minutes)
+
+### Build Time Measurement
+| **Feature** | **Normal Build** | **Build Enhancement** |
+| --------------------------------- | ------------- | -------------------------- |
+| DPKG_CACHE=N<br>VERSION_CACHE=N | \ | \ |
+| DPKG_CACHE=Y<br>VERSION_CACHE=Y | \ | \ |
+| DPKG_CACHE=N<br>VERSION_CACHE=Y | \ | \ |
+
+# TODO:
+- Migration to the Bullseye release
+
+
+## References
+
+- Ref:
+ - https://github.com/Azure/SONiC/blob/master/doc/sonic-build-system/DPKG%20caching%20framework%20.ppt
+ - https://github.com/xumia/SONiC/blob/repd3/doc/sonic-build-system/SONiC-Reproduceable-Build.md
diff --git a/doc/sonic-build-system/images/binary-image-generation.png b/doc/sonic-build-system/images/binary-image-generation.png
new file mode 100644
index 0000000000..14de137acc
Binary files /dev/null and b/doc/sonic-build-system/images/binary-image-generation.png differ
diff --git a/doc/sonic-build-system/images/binary-image-version-caching.png b/doc/sonic-build-system/images/binary-image-version-caching.png
new file mode 100644
index 0000000000..255560c250
Binary files /dev/null and b/doc/sonic-build-system/images/binary-image-version-caching.png differ
diff --git a/doc/sonic-build-system/images/build-enhancements.md b/doc/sonic-build-system/images/build-enhancements.md
new file mode 100644
index 0000000000..26fd832068
--- /dev/null
+++ b/doc/sonic-build-system/images/build-enhancements.md
@@ -0,0 +1,463 @@
+
+
+# Build Improvements HLD
+
+#### Rev 0.2
+
+# Table of Contents
+
+- [List of Tables](#list-of-tables)
+- [Revision](#revision)
+- [Definition/Abbreviation](#definitionabbreviation)
+- [About This Manual](#about-this-manual)
+- [Introduction and Scope](#1-introduction-and-scope)
+ - [Current build infrastructure](#11-existingtools-limitation)
+ - [Benefits of this feature](#12-benefits-of-this-feature)
+- [Feature Requirements](#2-feature-requirements)
+ - [Functional Requirements](#21-functional-requirements)
+ - [Configuration and Management Requirements](#22-configuration-and-management-requirements)
+ - [Scalability Requirements](#23-scalability-requirements)
+ - [Warm Boot Requirements](#24-warm-boot-requirements)
+- [Feature Description](#3-feature-description)
+- [Feature Design](#4-feature-design)
+ - [Design Overview](#design-overview)
+ - [Build Container in SONiC](#build-container-in-sonic)
+ - [Version cache support](#version-cache-support)
+ - [Installer Optimization](#installer-optimization)
+- [Serviceability and Debug](#6-serviceability-and-debug)
+- [Warm reboot Support](#7-warm-reboot-support)
+- [Unit Test Cases ](#8-unit-test-cases)
+- [References ](#9-references)
+
+# List of Tables
+
+[Table 1: Abbreviations](#table-1-abbreviations)
+
+# Revision
+| Rev | Date | Author | Change Description |
+|:--:|:--------:|:-----------------:|:------------------------------------------------------------:|
+| 0.1 | | Kalimuthu Velappan | Initial version |
+
+
+# Definition/Abbreviation
+
+### Table 1: Abbreviations
+
+| **Term** | **Meaning** |
+| -------- | ----------------------------------------- |
+| DPKG | Debian Package |
+| DinD | Docker-in-Docker |
+| DooD | Docker-out-of-Docker |
+
+
+# About this Manual
+
+This document provides general information about the build improvements in SONiC.
+
+
+# Introduction and Scope
+
+This document describes the Functionality and High level design of the build improvement in SONiC.
+
+- The current SONiC environment uses a container environment for generating the sonic packages, docker container images and installer images with rootfs.
+- On every sonic build, it downloads source code, binary packages, docker images and other tools and utilities from the external world and generates the build artifacts.
+
+This feature provides improvements in three essential areas:
+ - Build container creation using native docker mode.
+ - Package cache support for build components downloaded from the external world.
+ - Image cache support for installer image components.
+
+The version cache feature is supported on top of the existing versioning feature.
+ - Ref: [SONiC Reproducible Build](https://github.com/xumia/SONiC/blob/repd3/doc/sonic-build-system/SONiC-Reproduceable-Build.md)
+# Feature Requirements
+
+## Functional Requirements
+
+Following requirements are addressed by the design presented in this document:
+
+- Multiuser mode support:
+ - Add a feature in the build infra to support the multiuser container build using native docker mode.
+
+- Build optimization:
+ - Build optimization for binary image generation.
+ - Add caching support for the binary image.
+ - Add support for build-time dependencies over OverlayFS.
+
+- Caching Requirements:
+  - The Sonic image is built by pulling binary and source components from various sources:
+    - Debian repos, python repos, docker repos, http(s) repos and go module repos.
+    - Requires flexibility to select different versions of a component.
+  - Sonic development diverges into multiple development branches.
+    - Each development branch needs different versions of the build components.
+    - Sonic moves to the latest versions after every release.
+    - Release branches need fixed versions of the build components, as the prebuilt binary and source packages keep moving to the latest version.
+  - Requires caching/mirroring support.
+    - Component changes outside the SONIC repo cause frequent build failures.
+    - Unavailability of an external site causes dependency build failures.
+  - Flexibility to switch between fixed and latest versions.
+    - Different branches can freeze different sets of versions.
+    - Still, individual packages should be upgradable to selected versions.
+    - Version caching should be possible to enable/disable globally.
+  - Unavailability of external sites should not cause dependency build failures.
+
+
+
+
+
+## Configuration and Management Requirements
+
+NA
+
+## Scalability Requirements
+
+NA
+
+## Warm Boot Requirements
+
+NA
+
+
+# Feature Description
+
+This feature provides build improvements in SONIC.
+
+# Feature Design
+## Design Overview
+- Docker supports two types of mode to run a container.
+ - Docker-in-Docker(DinD) mode
+ - Native Docker or Docker-out-of-Docker(DooD) mode
+
+- Docker-In-Docker mode.
+ - Installing and running another Docker engine (daemon) inside a Docker container.
+ - Since Docker 0.6, a "privileged" option allows running containers in a special mode with almost all the capabilities of the host machine, including kernel features and device access.
+ - As a consequence, the Docker engine, as a privileged application, can run inside a Docker container itself.
+ - The Docker-in-Docker solution is not recommended, especially in containerized Jenkins systems, as potential problems include:
+ - The security profile of the inner Docker conflicts with that of the outer Docker.
+ - Incompatible file systems (e.g. AUFS inside a Docker container).
+ - Workarounds to address these problems:
+ - Container creation using dind docker solutions.
+ - To use AUFS in the inner Docker, promote /var/lib/docker to the inner docker.
+ - Apart from the security aspect, significant performance penalties are involved, as UnionFS/OverlayFS degrades performance when the number of lower layers grows.
+ - All child container resource usage is restricted within the parent container's limits.
+
+- Native docker mode.
+ - The DooD mode uses a socket file (/var/run/docker.sock) to communicate with the host dockerd daemon.
+ - It uses the socket file shared between the HOST and the container to run the build container.
+ - E.g.: docker run -v /var/run/docker.sock:/var/run/docker.sock ...
+ - When a new docker container/builder/composer is invoked from a build container:
+ - It is started as a sibling of the build container.
+ - It runs in parallel with the build container.
+ - This mode provides better performance as it can utilize the full potential of the host machine.
+
+### Build Container in SONiC:
+- The current SONiC build infrastructure generates all the SONiC build artifacts inside a docker container environment. When docker is isolated from the host CPU, docker resource usage and filesystem access are restricted from their full capacity. Docker isolation is essential for application containers, but for build containers the more essential requirement is build performance rather than a stronger security model. Build performance is better when the build containers run in native mode.
+- Sonic supports both modes of build container creation.
+- The native docker mode gives better performance, but it has a limitation:
+ - On shared build servers, sonic docker creation by multiple users conflicts, as the builds share the same docker image name.
+- This feature addresses:
+ - Sonic docker container creation in parallel from multiple users.
+ - Since the build container runs as a sibling container, it does not degrade the parent container's performance.
+ - As it shares the host dockerd, it gives better performance, as multilevel UnionFS/OverlayFS is not needed.
+
+#### Build Container in SONiC:
+
+
+![ Native Docker Support ](images/sonic-native-docker-support.png)
+
+
+- Currently, the build dockers are created as user dockers (docker-base-stretch-`<user>`, etc.) that are specific to each user. But the sonic dockers (docker-database, docker-swss, etc.) are created with a fixed docker name that is common to all users.
+
+ - docker-database:latest
+ - docker-swss:latest
+
+- When multiple builds are triggered on the same build server, parallel build issues arise because all the build jobs try to create the same docker with the latest tag. This happens only when sonic dockers are built using the native host dockerd for sonic docker image creation.
+
+- This feature creates all sonic dockers as user sonic dockers and then, while saving and loading the user sonic dockers, it renames the user sonic dockers to the correct sonic docker names with the latest tag.
+
+- The user sonic docker names are derived from the '_LOAD_DOCKERS' make variable and, using a Jinja template, the FROM docker name is replaced with the correct user sonic docker name when loading and saving the docker image.
+
+- The template processing covers only the common dockers and the Broadcom and VS platform dockers. For other vendor-specific dockers, the respective vendors need to add support.
+
+
+## Version cache support
+
+### Version components
+
+- Sonic build downloads many components from the external web, including:
+ - Source code files
+ - Prebuilt debian packages
+ - Python PIP packages
+ - Git source code
+ - Docker images
+ - Go modules
+ - Other tools and utilities
+
+- These components are updated frequently and the changes are dynamic in nature.
+- The versioning feature lets the sonic build pin a particular version of a package to be downloaded/installed.
+- Versioning can select a particular package version, but the package is still fetched from the external world.
+- When the external site is down, the selected package version is unavailable, or any other issue occurs while connecting or downloading, the sonic build fails.
+- Due to this dynamic nature, every sonic build might have to change its dependency chain.
+- Version files are stored in the files/build/versions folder with the below hierarchy.
+```
+files/build/versions/
+├── build
+│ ├── build-sonic-slave-buster
+│ │ ├── versions-deb-buster-amd64
+│ │ ├── versions-py2-buster-amd64
+│ │ └── versions-py3-buster-amd64
+├── default
+│ ├── versions-docker
+│ ├── versions-git
+│ └── versions-web
+├── dockers
+│ ├── docker-base-buster
+│ │ ├── versions-deb-buster-amd64
+│ │ ├── versions-py3-buster-amd64
+│ │ ├── versions-git
+│ │ ├── versions-web
+ ...
+```
+
+![Package Versioning](images/package-versoning.png)
+
+### Version Cache feature
+- The version cache feature allows the sonic build system to cache all source and binary packages and their dependencies in the local file system. When the version cache feature is enabled, the local cache storage is checked first for the requested package; if it is available, it is loaded from the cache, else it is downloaded from the external web.
+
+![Version Caching](images/version-caching.png)
+
+### Build Version Design
+- Version control files are copied:
+ - To the slave container for package builds.
+ - To the docker builder for sonic slave docker creation.
+ - To the docker builder for sonic docker creation.
+ - To the rootfs for binary image generation.
+
+![ Build Version caching ](images/build-version-caching.png)
+
+- Based on the package version, the corresponding file is fetched from the cache if it exists.
+- Otherwise the file is downloaded from the web and the cache is updated with the newer version.
+- Version cache feature supports caching for following build components.
+ - DPKG packages
+ - PIP packages
+ - Python packages
+ - Wget/Curl packages
+ - GO modules
+ - GIT modules
+ - Docker images
+
+#### Debian version cache
+
+ - Debian packages are version controlled via a preference file that specifies each package and its version, e.g.:
+ - iproute==1.0.23
+ - When a deb package gets installed, the package version is looked up in the version control file. If it matches, the package is installed with the version specified in the version control file.
+ - During package installation, the package is also saved into the cache path below:
+ - /var/cache/apt/archives/
+ - If the package is already available in the cache path, it is installed directly without downloading from the external site.
+ - With version cache enabled, all cached packages are preloaded into the deb cache folder, so that any subsequent deb installation always uses the cached path.
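The `package==version` entries map onto apt pin rules; a hedged sketch of what one generated preference entry might look like (package name, version and priority are illustrative):

```
Package: iproute2
Pin: version 1.0.23
Pin-Priority: 1001
```

A priority above 1000 forces the pinned version even if it would be a downgrade, which matches the "fixed version" intent described above.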
+
+![ Debian Packages ](images/dpkg-version-caching.png)
+
+#### PIP version cache
+ - PIP packages are version controlled via a constraint file that specifies each package and its version, e.g.:
+ - ipaddress==1.0.23
+ - When a pip package gets installed, the package version is looked up in the version control file. If it matches, the package is installed with the version specified in the version control file.
+ - During package installation, the package is also saved into a cache path such as:
+ - pip/http/a/4/6/b/7/a46b74c1407dd55ebf9eeb7eb2c73000028b7639a6ed9edc7981950c
+ - If the package is already available in the pip cache path, it is installed directly without downloading from the external site.
+ - With version cache enabled, all cached packages are preloaded into the pip cache folder, so that any subsequent pip installation always uses the cached path.
+ - During pip installation, the cache path can be specified with the --cache-dir option, which stores the cache data in the specified directory, and the version constraint file is given with the --constraint option.
+ - Pip vcache folders are created under the slave container name or sonic container name as appropriate.
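The two options mentioned above combine into an invocation like the following (paths and package name are illustrative):

```
pip install \
    --cache-dir /vcache/sonic-slave-buster/pip \
    --constraint files/build/versions/default/versions-py3 \
    ipaddress
```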
+
+![ Python Packages ](images/pip-version-caching.png)
+
+#### Python version cache
+ - Python packages are created via a setup.py file.
+ - These packages and the dependencies listed in setup.py are version controlled via the SHA id of the package.
+ - During a python package build, python uses setup.py to scan through the dependencies and prerequisites, then downloads and installs them into the .eggs folder.
+ - If the .eggs folder already exists, the dependencies are not reinstalled.
+ - With version cache enabled, the .eggs files are stored in the vcache as a compressed tar file.
+ - The cache file name is formed using the SHA value of setup.py.
+ - During a package build, if the .eggs cache file already exists, it is loaded from the vcache and the package build proceeds.
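The .eggs caching described above can be sketched as below; the helper names and cache layout are illustrative:

```shell
#!/bin/sh
# Cache/restore the .eggs directory, keyed by the SHA of setup.py.
eggs_cache_file() {
    # $1 = setup.py path, $2 = vcache dir
    echo "$2/eggs-$(sha1sum "$1" | awk '{print $1}').tgz"
}

save_eggs() {   # $1 = setup.py, $2 = vcache dir; run in the package dir
    tar -czf "$(eggs_cache_file "$1" "$2")" .eggs
}

load_eggs() {   # $1 = setup.py, $2 = vcache dir; run in the package dir
    f=$(eggs_cache_file "$1" "$2")
    [ -f "$f" ] && tar -xzf "$f"
}
```

Because the key is the SHA of setup.py, any change to the dependency list invalidates the cached .eggs automatically.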
+
+![ Python Packages ](images/python-version-caching.png)
+
+#### Git clones
+ - Git clone modules are version controlled via commit hash.
+ - On a git clone attempt, the version control file (versions-git) is first checked to see if an entry for the attempted git clone (url) is present.
+ - If the entry is not present, the repository is downloaded from the external world, saved as a git bundle file into the vcache with the commit hash in its name, and the version control file is updated.
+ Example: the cache file name is formed using the url and the commit hash:
+ https://salsa.debian.org/debian/libteam.git-f8808df228b00873926b5e7b998ad8b61368d4c5.tgz
+ - If the entry is present but the git bundle file is not available in the vcache, it is downloaded from the external world and saved into the vcache with the commit hash in its name.
+ - If the entry is present and the git bundle file is available in the vcache, it is loaded, unbundled and checked out at the specific commit.
+ - If the git clone has any submodules, they are also handled:
+ - The submodules' git bundles are tarred along with the main bundle and stored in the vcache. On loading, this tar file is untarred first before unbundling and checking out each submodule's git bundle.
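A sketch of the bundle save/restore flow; the cache naming follows the url-plus-commit scheme described above, and the helper names are illustrative:

```shell
#!/bin/sh
# Save a clone as a bundle in the vcache, keyed by repo name + commit hash.
save_git_bundle() {   # $1 = repo dir, $2 = commit, $3 = vcache dir
    name=$(basename "$(git -C "$1" remote get-url origin 2>/dev/null || echo local.git)")
    git -C "$1" bundle create "$3/$name-$2.bundle" --all
}

# Restore: clone from the bundle and check out the pinned commit.
load_git_bundle() {   # $1 = bundle file, $2 = commit, $3 = dest dir
    git clone "$1" "$3" 2>/dev/null
    git -C "$3" checkout -q "$2"
}
```

A bundle is a self-contained pack of refs and objects, so the restore step needs no network access at all.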
+
+
+
+![ GIT Modules ](images/git-module-version-caching.png)
+
+#### Docker Images
+ - Docker images are version controlled via their SHA id.
+ - During docker image creation, the version control script gets executed.
+ - The _PULL_DOCKER variable in the docker make rule indicates whether the docker needs to be downloaded from docker hub.
+ - The version control script looks for a matching entry in the version control file.
+ - If not present, the image is downloaded, saved into the vcache in gz format, and the version control file is updated. The cache filename is formed from the docker name combined with the SHA id.
+ Example: debian-stretch-sha256-7f2706b124ee835c3bcd7dc81d151d4f5eca3f4306c5af5c73848f5f89f10e0b.tgz
+
+ - If present but not available in the cache, the image is downloaded and saved into the cache in gz format.
+ - If present and the docker image is available in the cache, the docker image is preloaded for container preparation.
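The save and load steps correspond roughly to the following commands (image name, cache path variable and SHA placeholder are illustrative):

```
# Save: compress the pulled image into the version cache
docker save debian:buster | gzip -c > $VCACHE/debian-buster-<sha256>.tgz

# Load: preload the cached image instead of pulling it
gzip -dc $VCACHE/debian-buster-<sha256>.tgz | docker load
```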
+
+ ![ Docker Images ](images/docker-image-version-caching.png)
+
+
+#### Wget/Curl Packages
+ - wget/curl packages are controlled via the URL and SHA id of the package.
+ - On a wget attempt, the version control file (versions-web) is first checked to see if an entry for the attempted url is present.
+ - If the entry is not present, the package is downloaded from the external world, saved into the vcache with the SHA id of the package in its name, and the version control file is updated.
+ Example: the cache file name is formed using the url and the SHA id:
+ https://salsa.debian.org/debian/libteam.src.gz-f8808df228b00873926b5e7b998ad8b61368d4c5.tgz
+ - If the entry is present but the package is not available in the vcache, it is downloaded from the external world and saved into the vcache.
+ - If the entry is present and the package is also available in the vcache, it is copied from the vcache.
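The lookup flow can be sketched as a small shell helper; FETCH_CMD is an assumed hook standing in for the real wget/curl invocation:

```shell
#!/bin/sh
# Fetch a file, preferring the version cache; fall back to a real download.
# FETCH_CMD stands in for the actual downloader (e.g. "wget -O").
FETCH_CMD=${FETCH_CMD:-"wget -O"}

fetch_cached() {   # $1 = url, $2 = sha, $3 = vcache dir, $4 = dest file
    cache="$3/$(basename "$1")-$2.tgz"
    if [ -f "$cache" ]; then
        cp "$cache" "$4"                           # cache hit: no network access
    else
        $FETCH_CMD "$4" "$1" && cp "$4" "$cache"   # miss: download and cache
    fi
}
```

On a cache hit nothing touches the network, which is exactly why an external-site outage no longer breaks the build.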
+
+![ Wget Packages ](images/web-version-caching.png)
+
+#### Go modules
+ - In SONiC, all the go modules are installed from go.mod file.
+ - HASH value is calculated from the following contents:
+ - go.mod
+ - Makefile
+ - Common Files
+ - ENV flags
+ - It caches all the go module files as a directory structure instead of a compressed tar file, as this gives better performance when the number of files is large.
+ - Different directory hierarchy is created for each HASH value.
+ - If HASH matches, it uses rsync to sync the cached modules to GOPATH build directory.
+ - While storing/retrieving, the cache content is always protected with global lock.
+
+![ GO Modules ](images/go-module-version-caching.png)
+
+## Docker Build Version Caching
+
+- Each docker build is version controlled via
+ - Dockerfile.j2
+ - Makefile
+ - Commonfiles
+ - ENV flags
+- SHA value is calculated from version control files.
+- Cache file is created for each docker with the docker name and SHA value calculated.
+- Cache file contains the following:
+ - Debian packages
+ - pip packages
+ - wget packages
+ - git packages
+ - go modules
+- Version control script will place the cache file into appropriate location inside docker builder.
+- With version control enabled, the docker cache, if it already exists, is loaded; otherwise the cache is created and updated.
+![ Docker Build Version Caching ](images/docker-build-version-caching.png)
+
+## Installer Optimization
+
+Installer image generation has six stages:
+ - bootstrap generation
+ - ROOTFS installation
+ - SONiC packages installation
+ - SquashFS generation
+ - DockerFS generation
+ - Installer image generation
+
+#### Bootstrap generation
+ - Debian bootstrap package files are prepared using the debootstrap tool.
+ - It downloads a set of bootstrap packages and generates the bootstrap filesystem.
+ - Initially, it downloads all the packages, creates an image file from them and stores it in the version cache storage.
+ - The image file is created with a specific filename and the HASH value.
+ - HASH value is calculated from SHA value of bootstrap control files which includes:
+ - build_debian.sh
+ - sonic_debian_extension.sh
+ - Version files
+ - Common makefiles and script utilities.
+ - Env Flags
+ - On subsequent builds, if the calculated HASH matches an existing version cache filename, the bootstrap files are loaded from the cache.
+
+
+#### Rootfs preparation
+- The rootfs file system is prepared on top of the bootstrap packages.
+- It is prepared by downloading the various open-source Debian packages, tools and utilities needed by SONiC applications and installing them on top of the bootstrap filesystem.
+- The rootfs file system is created as an image file and cached as part of the version cache system.
+- The image file is created with the installer name and a HASH value.
+- The HASH value is calculated from the SHA of the following files:
+ - build_debian.sh
+ - sonic_build_extension.j2
+ - Common makefiles
+ - ENV flags
+- On subsequent builds, the rootfs is mounted from the image cache file if it exists in the version cache.
+- It uses version control to install the cached packages in one place.
+
+![ Binary Image Version Caching ](images/binary-image-version-caching.png)
+#### SONiC packages installation
+- Install all the sonic packages.
+- Host services, configuration and utilities are installed.
+
+#### SquashFS generation
+- SquashFS is a read-only filesystem created using the mksquashfs command.
+- It is a compressed version of rootfs contents.
+
+#### dockerfs preparation
+- Dockerfs is created by importing all the sonic docker images and tarring the /var/log/docker folder.
+- The dockerfs directory is linked to a non-rootfs directory by mounting an external filesystem into the ROOTFS.
+- Dockers are loaded in parallel from the compressed gz file.
+
+#### Installer Image generation
+- Tar with pigz compression gives better compression speed as well as a better compression ratio.
+- Uses the config file to choose between the different compression options.
+
+#### Parallel build option
+
+- The staged build splits image generation into two phases:
+ - Phase 1 - Rootfs generation as part of other package generation.
+ - Phase 2 - Docker generation in parallel.
+
+# Version freeze
+- Versions are frozen weekly/periodically, with version migration to the latest.
+
+ - Build with SONIC_VERSION_CONTROL_COMPONENTS=none to generate the new set of package versions in the target.
+ - Run ‘make freeze’ to generate and merge the version changes into the source.
+ - Check-in the new version set in the source.
+
+# Make variables
+- The following make variables control the version caching feature.
+
+ - SONIC_VERSION_CONTROL_COMPONENTS= => Turn on/off versioning
+ - SONIC_VERSION_CACHE_METHOD=cache => Turn on/off version caching
+ - SONIC_VERSION_CACHE_SOURCE= => Cache directory path
+
+# Cache cleanup
+
+- Recently used cache files are updated with a newer timestamp: the cache framework automatically touches used cache files to the current timestamp.
+- Touch marks a package as recently used, so files that are not recent can be cleaned up.
+ - `touch <cache-dir>/<package>.tgz`
+- Least-recently-used cache file cleanup command:
+
+```
+
+ find -name "*.tgz" ! -mtime -7 -exec rm {} \;
+
+ Where:
+ -mtime n => Files modified within the last n*24 hours.
+ -mtime -7 => Files modified within the last 7 days.
+ ! -mtime -7 => Files modified more than 7 days ago.
+
+```
+
+## Build Time Compression
+
+| **Feature** | **Normal Build** | **Build Enhancement** |
+| --------------------------------- | ------------- | -------------------------- |
+| DPKG_CACHE=N<br>VERSION_CACHE=N | \ | \ |
+| DPKG_CACHE=Y<br>VERSION_CACHE=Y | \ | \ |
+| DPKG_CACHE=N<br>VERSION_CACHE=Y | \ | \ |
+
+## References
+https://github.com/xumia/SONiC/blob/repd3/doc/sonic-build-system/SONiC-Reproduceable-Build.md
diff --git a/doc/sonic-build-system/images/build-version-caching.png b/doc/sonic-build-system/images/build-version-caching.png
new file mode 100644
index 0000000000..fc270f6609
Binary files /dev/null and b/doc/sonic-build-system/images/build-version-caching.png differ
diff --git a/doc/sonic-build-system/images/docker-build-version-caching.png b/doc/sonic-build-system/images/docker-build-version-caching.png
new file mode 100644
index 0000000000..bd03214a00
Binary files /dev/null and b/doc/sonic-build-system/images/docker-build-version-caching.png differ
diff --git a/doc/sonic-build-system/images/docker-image-version-caching.png b/doc/sonic-build-system/images/docker-image-version-caching.png
new file mode 100644
index 0000000000..c8f439f4f1
Binary files /dev/null and b/doc/sonic-build-system/images/docker-image-version-caching.png differ
diff --git a/doc/sonic-build-system/images/dpkg-version-caching.png b/doc/sonic-build-system/images/dpkg-version-caching.png
new file mode 100644
index 0000000000..72e50f67e1
Binary files /dev/null and b/doc/sonic-build-system/images/dpkg-version-caching.png differ
diff --git a/doc/sonic-build-system/images/git-module-version-caching.png b/doc/sonic-build-system/images/git-module-version-caching.png
new file mode 100644
index 0000000000..f1377b33db
Binary files /dev/null and b/doc/sonic-build-system/images/git-module-version-caching.png differ
diff --git a/doc/sonic-build-system/images/go-module-version-caching.png b/doc/sonic-build-system/images/go-module-version-caching.png
new file mode 100644
index 0000000000..2e167ad1c5
Binary files /dev/null and b/doc/sonic-build-system/images/go-module-version-caching.png differ
diff --git a/doc/sonic-build-system/images/package-versioning.png b/doc/sonic-build-system/images/package-versioning.png
new file mode 100644
index 0000000000..57bddda576
Binary files /dev/null and b/doc/sonic-build-system/images/package-versioning.png differ
diff --git a/doc/sonic-build-system/images/package-versoning.png b/doc/sonic-build-system/images/package-versoning.png
new file mode 100644
index 0000000000..4a4d2fb7ef
Binary files /dev/null and b/doc/sonic-build-system/images/package-versoning.png differ
diff --git a/doc/sonic-build-system/images/pip-version-caching.png b/doc/sonic-build-system/images/pip-version-caching.png
new file mode 100644
index 0000000000..08e7caddc7
Binary files /dev/null and b/doc/sonic-build-system/images/pip-version-caching.png differ
diff --git a/doc/sonic-build-system/images/python-version-caching.png b/doc/sonic-build-system/images/python-version-caching.png
new file mode 100644
index 0000000000..8dfec4b78a
Binary files /dev/null and b/doc/sonic-build-system/images/python-version-caching.png differ
diff --git a/doc/sonic-build-system/images/sonic-native-docker-support.png b/doc/sonic-build-system/images/sonic-native-docker-support.png
new file mode 100644
index 0000000000..8f3c463e07
Binary files /dev/null and b/doc/sonic-build-system/images/sonic-native-docker-support.png differ
diff --git a/doc/sonic-build-system/images/version-caching.png b/doc/sonic-build-system/images/version-caching.png
new file mode 100644
index 0000000000..86593bccca
Binary files /dev/null and b/doc/sonic-build-system/images/version-caching.png differ
diff --git a/doc/sonic-build-system/images/virtual-build-root.png b/doc/sonic-build-system/images/virtual-build-root.png
new file mode 100644
index 0000000000..64584cd28f
Binary files /dev/null and b/doc/sonic-build-system/images/virtual-build-root.png differ
diff --git a/doc/sonic-build-system/images/web-version-caching.png b/doc/sonic-build-system/images/web-version-caching.png
new file mode 100644
index 0000000000..561857bcbc
Binary files /dev/null and b/doc/sonic-build-system/images/web-version-caching.png differ
diff --git a/doc/sonic-build-system/img/sai-sonic-build-system.drawio.png b/doc/sonic-build-system/img/sai-sonic-build-system.drawio.png
new file mode 100755
index 0000000000..93fce0634b
Binary files /dev/null and b/doc/sonic-build-system/img/sai-sonic-build-system.drawio.png differ
diff --git a/doc/sonic-build-system/img/sonic-sairedis-check.drawio.png b/doc/sonic-build-system/img/sonic-sairedis-check.drawio.png
new file mode 100755
index 0000000000..a9e1cbad72
Binary files /dev/null and b/doc/sonic-build-system/img/sonic-sairedis-check.drawio.png differ
diff --git a/doc/sonic-build-system/saiversioncheck.md b/doc/sonic-build-system/saiversioncheck.md
new file mode 100644
index 0000000000..67f077b018
--- /dev/null
+++ b/doc/sonic-build-system/saiversioncheck.md
@@ -0,0 +1,84 @@
+# SAI API version check
+
+## Motivation
+
+SONiC is not designed to be backward compatible with older vendor SAI implementations.
+The SAI headers that SONiC's syncd daemon is compiled against are taken from the OCP SAI repository, while
+the actual libsai.so is taken from the sonic-buildimage vendor's directory. This leads to a situation
+where sometimes SAI in the sonic-sairedis repository is updated but the vendor SAI in sonic-buildimage is not.
+
+This may lead to:
+ - Compilation failure because of ABI changes (syncd cannot be successfully linked with libsai.so)
+ - Attributes ID mismatch, as we add new attributes in a non-backward compatible manner. The result is syncd termination due to invalid usage of attributes or hidden incorrect behavior.
+ - Enum values mismatch, as we add new values to enums in a non-backward compatible manner.
+ - Etc.
+
+
+## SONiC buildsystem overview
+
+This is an illustration how the build system works:
+
+
+
+
+
+Sonic-sairedis contains the syncd source code. Syncd is compiled against the SAI headers from the sonic-sairedis repository and then linked against the vendor libsai.so from the sonic-buildimage repository.
+In case someone updates sonic-sairedis with new SAI headers and tries to update the submodule in sonic-buildimage, the PR checkers that perform the sonic build should fail.
+The one who wants to update the SAI version needs to make sure all SAI vendor implementations are updated in the same PR, so as not to break the image.
+
+It is also worth mentioning that some vendors just provide the binary libsai.so, unlike Nvidia, where the SAI headers are provided by the Mellanox-SAI repository.
+
+## Proposal
+
+SAI already has a SAI_API_VERSION define in its headers (saiversion.h):
+
+```c
+#define SAI_MAJOR 1
+#define SAI_MINOR 9
+#define SAI_REVISION 1
+
+#define SAI_VERSION(major, minor, revision) (10000 * (major) + 100 * (minor) + (revision))
+
+#define SAI_API_VERSION SAI_VERSION(SAI_MAJOR, SAI_MINOR, SAI_REVISION)
+```
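The macro packs the three version components into a single integer (for example, 1.9.1 encodes to 10901), which turns version comparison into a plain integer comparison. A quick sketch of the encoding, in Python for illustration:

```python
def sai_version(major, minor, revision):
    # Mirrors the SAI_VERSION macro: 10000 * major + 100 * minor + revision
    return 10000 * major + 100 * minor + revision

# SAI 1.9.1 encodes to 10901
assert sai_version(1, 9, 1) == 10901
```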
+
+Currently, given just the libsai.so file, it is not possible to know which SAI headers it was compiled against, as these defines live in the headers.
+We need an API in libsai.so to query the SAI API version that the libsai.so implementation is aligned to.
+
+The proposal is to add such an API:
+
+```c
+/**
+ * @brief Retrieve a SAI API version this implementation is aligned to
+ *
+ * @param[out] version Version number
+ *
+ * @return #SAI_STATUS_SUCCESS on success, failure status code on error
+ */
+sai_status_t sai_query_api_version(
+ _Out_ sai_api_version_t *version);
+```
+
+The implementation is simple:
+
+```c
+sai_status_t sai_query_api_version(
+ _Out_ sai_api_version_t *version)
+{
+ *version = SAI_API_VERSION;
+ return SAI_STATUS_SUCCESS;
+}
+```
+
+This SAI_API_VERSION is the one derived from the headers that were used by the vendor SAI (the headers on the left in Figure 1).
+
+Using this new API we can implement a configure-time check in sonic-sairedis with the autotools AC_TRY_RUN macro:
+
+
+
+
+
+The check will compare the vendor SAI API version (on the left in Figure 1) with the sairedis SAI API version (on the right in Figure 2) and fail if they do not match.
+If SAI versioning follows semantic versioning rules, the test program can compare only the MAJOR and MINOR versions, relaxing the constraint on the PATCH (revision) component.
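The relaxed major/minor comparison can be sketched as follows (Python for illustration; the actual configure-time test would be a small C program run by AC_TRY_RUN, and the helper name here is hypothetical):

```python
def versions_compatible(vendor_version, headers_version):
    """Compare two SAI_VERSION-encoded integers, ignoring the revision digits.

    SAI_VERSION = 10000 * major + 100 * minor + revision, so dividing by 100
    keeps only the MAJOR and MINOR components.
    """
    return (vendor_version // 100) == (headers_version // 100)

# 1.9.1 vs 1.9.3: compatible (only the revision differs)
assert versions_compatible(10901, 10903)
# 1.9.x vs 1.10.x: incompatible (minor differs)
assert not versions_compatible(10901, 11000)
```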
+
+## Questions
diff --git a/doc/sonic-multi-architecture/sonic_arm_support.md b/doc/sonic-multi-architecture/sonic_arm_support.md
new file mode 100644
index 0000000000..b6ce22318e
--- /dev/null
+++ b/doc/sonic-multi-architecture/sonic_arm_support.md
@@ -0,0 +1,231 @@
+# SONIC ARM Architecture support
+
+[![Marvell Technologies](https://www.marvell.com/content/dam/marvell/en/rebrand/marvell-logo3.svg)](https://www.marvell.com/)
+
+# Description
+
+ - This document describes enhancements in the SONIC build script to support the ARM32 and ARM64 architectures
+
+Support for ARM architecture needs changes in the following modules
+
+ - sonic-slave
+ - dockers
+ - rules
+ - Makefile
+ - Buildscript
+ - Repo list
+ - Onie Build
+
+
+
+### User Input
+
+Similar to configuring the platform in make, the target architecture should be user driven.
+
+* [SONIC_ARCH] - make configure PLATFORM=[ASIC_VENDOR] PLATFORM_ARCH=[armhf]
+* Default is X86_64
+
+### Dockers
+Since all the modules and code are compiled inside a docker environment, the docker image should be based on multiarch/[distribution]-[arm_arch]
+
+The dockers below use the debian distribution, which will now be based on the CPU architecture specific distribution.
+```sh
+dockers/docker-base
+dockers/docker-base-stretch
+dockers/docker-ptf
+```
+
+### Developer Notes
+The following variables are used in the make files:
+* PLATFORM_ARCH : specifies the target architecture; if not set, amd64 is chosen
+* CONFIGURED_ARCH : amd64 should never be hardcoded in Makefiles; $(CONFIGURED_ARCH) has to be used instead
+```sh
+# Example: amd64 in the target below is replaced with $(CONFIGURED_ARCH)
+LINUX_IMAGE = linux-image-$(KVERSION)_$(KERNEL_VERSION)-$(KERNEL_SUBVERSION)_amd64.deb
+LINUX_IMAGE = linux-image-$(KVERSION)_$(KERNEL_VERSION)-$(KERNEL_SUBVERSION)_$(CONFIGURED_ARCH).deb
+```
+
+
+### SONIC Slave Docker
+
+The sonic-slave docker provides the build environment for the rest of the dockers. It should be able to run binaries of the target architecture on the host CPU architecture.
+
+To do such cross compilation, we can make use of binfmt_misc to run target-architecture binaries through a static qemu binary on the host CPU architecture.
+
+```sh
+sonic-slave-arm64
+sonic-slave-armhf
+```
+
+The qemu static binaries need to be installed, and the multiarch/qemu-user-static:register docker needs to be run to enable the binfmt handlers.
+
+### Miscellaneous
+
+Architecture specific packages need to be installed or ignored.
+For example, ixgbe and grub are specific to the x86 architecture and need to be excluded.
+
+
+### Platform
+
+The same platform or board can have variants in CPU vendor. To address this, the platform can be made ARCH specific, and customized changes can be added in the platform specific make infra.
+
+```sh
+platform/marvell-armhf/docker-syncd-mrvl-rpc.mk
+platform/marvell-armhf/docker-syncd-mrvl-rpc/99-syncd.conf
+platform/marvell-armhf/docker-syncd-mrvl-rpc/Dockerfile.j2
+platform/marvell-armhf/docker-syncd-mrvl-rpc/ptf_nn_agent.conf
+platform/marvell-armhf/docker-syncd-mrvl.mk
+platform/marvell-armhf/docker-syncd-mrvl/Dockerfile.j2
+platform/marvell-armhf/docker-syncd-mrvl/start.sh
+platform/marvell-armhf/docker-syncd-mrvl/supervisord.conf
+platform/marvell-armhf/docker-syncd-mrvl/syncd.sh
+platform/marvell-armhf/libsaithrift-dev.mk
+platform/marvell-armhf/linux-kernel-armhf.mk
+platform/marvell-armhf/one-image.mk
+platform/marvell-armhf/platform.conf
+platform/marvell-armhf/rules.mk
+platform/marvell-armhf/sai.mk
+platform/marvell-armhf/sai/Makefile
+```
+
+#### Rule/makefile
+
+Hardcoded "amd64" needs to be replaced with a Makefile variable that holds the target architecture:
+* amd64
+* armhf
+* arm64
+
+```sh
+rules/bash.mk
+rules/docker-base-stretch.mk
+rules/docker-base.mk
+rules/docker-ptf.mk
+rules/docker-snmp-sv2.mk
+rules/frr.mk
+rules/gobgp.mk
+rules/hiredis.mk
+rules/iproute2.mk
+rules/isc-dhcp.mk
+rules/libnl3.mk
+rules/libteam.mk
+rules/libyang.mk
+rules/linux-kernel.mk
+rules/lldpd.mk
+rules/lm-sensors.mk
+rules/mpdecimal.mk
+rules/python3.mk
+rules/quagga.mk
+rules/radvd.mk
+rules/redis.mk
+rules/sairedis.mk
+rules/smartmontools.mk
+rules/snmpd.mk
+rules/socat.mk
+rules/swig.mk
+rules/swss-common.mk
+rules/swss.mk
+rules/tacacs.mk
+rules/telemetry.mk
+rules/thrift.mk
+slave.mk
+src/bash/Makefile
+src/hiredis/Makefile
+src/iproute2/Makefile
+src/isc-dhcp/Makefile
+src/libnl3/Makefile
+src/libteam/Makefile
+src/lm-sensors/Makefile
+src/mpdecimal/Makefile
+src/python3/Makefile
+src/radvd/Makefile
+src/redis/Makefile
+src/smartmontools/Makefile
+src/snmpd/Makefile
+src/socat/Makefile
+src/tacacs/nss/Makefile
+src/tacacs/pam/Makefile
+src/thrift/Makefile
+
+```
+
+### Repo list
+The repo sources lists below need to be updated, as the azure debian repo doesn't have arm packages
+
+
+```sh
+files/apt/sources.list-armhf
+files/build_templates/sonic_debian_extension.j2
+
+```
+
+#### Onie Image
+
+The Onie image configuration and build script should be updated for the U-Boot specific environment for ARM.
+Update the target platform in the Onie image platform configuration:
+ - onie-image.conf for AMD64
+ - onie-image-armhf.conf for ARMHF
+ - onie-image-arm64.conf for ARM64
+The Onie platform config file will be chosen based on the target platform:
+ - platform/<platform>/platform.conf
+   platform.conf will be used by the onie installer script to install the onie image
+Onie installer scripts:
+ - installer/x86_64/install.sh
+ - installer/arm64/install.sh
+ - installer/armhf/install.sh
+
+SONIC image installation is driven by these onie installer scripts, which:
+ - Update the boot loader with the image boot details
+ - Partition the primary disk if it is not formatted/partitioned
+ - Extract the sonic image onto the mounted disk under the /host directory
+
+The primary storage device may vary across platforms: unlike x86 platforms, which mainly use variants of SATA disks,
+ARM platforms can also use NAND/NOR flash or SD/MMC cards.
+The platform dependent partitioning scheme is moved to platform/<platform>/platform.conf, which takes care of
+selecting the primary storage medium, partitioning, formatting, and mounting.
+The mount path is provided to the generic SONIC installer script, which performs the common functionality of extracting the image and copying files.
+
+x86 uses grub as its bootloader, whereas ARM can use U-Boot or proprietary bootloaders.
+The bootloader configuration for the boot image details is also updated in platform.conf.
+
+#### Sonic Installer
+
+SONIC-to-SONIC upgrade uses python scripts to access the bootloader configuration and update the boot image details, to support
+image upgrade, image deletion, and changing the boot order.
+For ARM, the U-Boot firmware utilities are used to access the boot configuration, as grub is for x86.
+ - sonic_installer/main.py
+
+### Kernel ARM support
+
+The submodule sonic-linux-kernel Makefile and patches need to be updated to compile for the respective ARM architecture. As the kernel .config will be generated using the debian build infra, the dpkg environment variables need to be properly updated to select the architecture.
+
+ - src/sonic-linux-kernel
+
+### Custom Kernel (Expert Mode)
+
+Based on the architecture, the linux kernel may vary and may need to be changed to a custom kernel rather than the SONIC default kernel version.
+This can be addressed in platform specific makefiles.
+
+ - platform/marvell-armhf/linux-kernel-armhf.mk
+
+
+### Usage for ARM Architecture
+To build an Arm 32-bit (ARMHF) platform:
+
+    # Execute make configure once to configure ASIC and ARCH
+    make configure PLATFORM=[ASIC_VENDOR] SONIC_ARCH=armhf
+    # example:
+    make configure PLATFORM=marvell-armhf SONIC_ARCH=armhf
+
+To build an Arm 64-bit platform:
+
+    # Execute make configure once to configure ASIC and ARCH
+    make configure PLATFORM=[ASIC_VENDOR] SONIC_ARCH=arm64
+    # example:
+    make configure PLATFORM=marvell-arm64 SONIC_ARCH=arm64
+
+----
+Author
+======
+Antony Rheneus [arheneus@marvell.com]
+Copyright Marvell Technologies
+
diff --git a/doc/srv6/images/Srv6ConfigDBFrr.png b/doc/srv6/images/Srv6ConfigDBFrr.png
new file mode 100644
index 0000000000..40b9b5b656
Binary files /dev/null and b/doc/srv6/images/Srv6ConfigDBFrr.png differ
diff --git a/doc/srv6/images/Srv6Example.png b/doc/srv6/images/Srv6Example.png
new file mode 100644
index 0000000000..78bdafccd2
Binary files /dev/null and b/doc/srv6/images/Srv6Example.png differ
diff --git a/doc/srv6/images/drawing-configdb-frr3.png b/doc/srv6/images/drawing-configdb-frr3.png
new file mode 100644
index 0000000000..2a1cb9359f
Binary files /dev/null and b/doc/srv6/images/drawing-configdb-frr3.png differ
diff --git a/doc/srv6/images/srv6db.png b/doc/srv6/images/srv6db.png
new file mode 100644
index 0000000000..f86ae95e0d
Binary files /dev/null and b/doc/srv6/images/srv6db.png differ
diff --git a/doc/srv6/images/srv6orch.png b/doc/srv6/images/srv6orch.png
new file mode 100644
index 0000000000..1535c9a63f
Binary files /dev/null and b/doc/srv6/images/srv6orch.png differ
diff --git a/doc/srv6/srv6_hld.md b/doc/srv6/srv6_hld.md
new file mode 100644
index 0000000000..6cc62a9b84
--- /dev/null
+++ b/doc/srv6/srv6_hld.md
@@ -0,0 +1,598 @@
+# Segment Routing over IPv6 (SRv6) HLD
+
+# Table of Contents
+
+- [List of Tables](#list-of-tables)
+- [Revision](#revision)
+- [Definition/Abbreviation](#definitionabbreviation)
+- [About This Manual](#about-this-manual)
+- [1 Introduction and Scope](#1-introduction-and-scope)
+- [2 Feature Requirements](#2-feature-requirements)
+- [2.1 Functional Requirements](#21-functional-requirements)
+- [2.2 Configuration and Management Requirements](#22-configuration-and-management-requirements)
+- [2.3 Warm Boot Requirements](#23-warm-boot-requirements)
+- [3 Feature Design](#3-feature-design)
+- [3.1 ConfigDB Changes](#31-configdb-changes)
+- [3.2 AppDB Changes](#32-appdb-changes)
+- [3.3 Orchestration Agent Changes](#33-orchestration-agent-changes)
+- [3.4 SAI](#34-sai)
+- [3.5 YANG Model](#35-yang-model)
+- [4 Unit Test](#4-unit-test)
+- [5 References](#5-references)
+
+# Revision
+
+| Rev | Date | Author | Change Description |
+| :--: | :-------: | :------------------------: | :---------------------: |
+| 0.1 | 6/5/2021 | Heidi Ou, Kumaresh Perumal | Initial version |
+| 0.2 | 8/24/2021 | Dong Zhang | More explanation |
+| 0.3 | 10/15/2021| Kumaresh Perumal | Minor updates |
+| 0.4 | 10/26/2021| Kumaresh Perumal | Update MY_SID table. |
+
+
+# Definition/Abbreviation
+
+### Table 1: Abbreviations
+
+| ****Term**** | ****Meaning**** |
+| -------- | ----------------------------------------- |
+| BFD | Bidirectional Forwarding Detection |
+| BGP | Border Gateway Protocol |
+| BSID | Binding SID |
+| G-SID | Generalized Segment Identifier |
+| SID | Segment Identifier |
+| SRH | Segment Routing Header |
+| SRv6 | Segment Routing IPv6 |
+| TE | Traffic Engineering |
+| uSID | Micro Segment |
+| VNI | VXLAN Network Identifier |
+| VRF | Virtual Routing and Forwarding |
+
+# About this Manual
+
+This document provides general information about the Segment Routing over IPv6 (SRv6) feature implementation in SONiC. It is based on IETF RFC 8754 and RFC 8986.
+
+# 1 Introduction and Scope
+
+This document describes the Functionality and High level design of the SRv6 feature.
+
+SRv6 has been widely adopted as an IPv6 based SDN solution, which provides programmability, TE capabilities, and deployment simplicity to network administrators. With current support from a rich ecosystem, including major ASIC manufacturers, networking vendors, and open source communities, the deployment of SRv6 is accelerating. We want to add SRv6 into SONiC to benefit users in the DC as well as beyond the DC.
+
+The following are some use cases for SRv6 deployment:
+
+- v4/6VPN, EVPN over best-effort
+- Traffic steering over TE policy
+
+In an SRv6 domain, a TE policy associated with a SID list can be configured on headend nodes to steer traffic with SRH encapsulation. When traffic reaches egress nodes, the packets are processed based on locally defined functions, for example SID list decapsulation and FIB lookup in a particular VRF.
+
+# 2 Feature Requirements
+
+## 2.1 Functional Requirements
+
+This section describes the SONiC requirements for the SRv6 feature, delivered in phases:
+
+At a high level the following should be supported:
+
+Phase #1
+
+ Should be able to perform the role of an SRv6 domain headend node and endpoint node; more specifically:
+- Support END, Endpoint function - The SRv6 instantiation of a prefix SID
+- Support END.DT46, Endpoint with decapsulation and IP table lookup - IP L3VPN use (equivalent of a per-VRF VPN label)
+- Support H.Encaps.Red, H.Encaps with Reduced Encapsulation
+- Support traffic steering on SID list
+
+Later phases:
+- Support H.Encaps, SR Headend Behavior with Encapsulation in an SR Policy
+- Support END.B6.Encaps, Endpoint bound to an SRv6 encapsulation Policy - SRv6 instantiation of a Binding SID
+- Support END.B6.Encaps.Red, END.B6.Encaps with reduced SRH insertion - SRv6 instantiation of a Binding SID
+- Support END.X, Endpoint function with Layer-3 cross-connect - The SRv6 instantiation of an Adj SID
+- Support uSID/G-SID
+- Other programming functions
+- Support HMAC option
+- Support sBFD for SRv6
+- Support anycast SID
+
+This document will focus on Phase #1, while keeping the design extensible for future development.
+
+## 2.2 Configuration and Management Requirements
+
+1. User should be able to enable SRv6 globally
+
+2. User should be able to configure SID list for encapsulation
+
+3. User should be able to configure SRv6 steering policy
+
+4. User should be able to configure endpoint action and corresponding argument for matched local SID
+
+## 2.3 Warm Boot Requirements
+
+Warm reboot is intended to be supported for planned system, swss, and BGP warm reboots.
+
+
+
+# 3 Feature Design
+
+![draw-configdb](images/Srv6ConfigDBFrr.png)
+
+Before FRR support is ready, we will use static configuration to set SIDs and apply policy for TE. This enables basic SRv6 operation, populates SRv6 state into the ASIC, and allows SRv6 data plane forwarding. More complicated SRv6 policy can be enabled when SRv6 is fully supported in FRR and passed from FRR to fpmsyncd.
+
+For Phase #1, the controller will update SRv6 related tables in APPL_DB directly, using Translib and the SONiC management framework. Sonic-swss python scripts are also used to update the SRv6 APPL_DB tables.
+
+## 3.1 ConfigDB Changes
+
+**SRV6_SID_LIST_TABLE**
+
+Description: New table that stores SRv6 SID list configuration.
+
+Schema:
+
+```
+; New table
+; holds SRv6 SID list
+
+key = SRV6_SID_LIST|segment_name
+ ; SID segment name
+; field = value
+path = SID, ; List of SIDs
+
+For example:
+ "SRV6_SID_LIST": {
+ "seg1": {
+ "path": [
+ "BABA:1001:0:10::",
+ "BABA:1001:0:20:F1::"
+ ]
+ },
+ "seg2": {
+ "path": [
+ "BABA:1001:0:30::",
+ "BABA:1001:0:40:F1::"
+ ]
+ }
+ }
+```
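Scripts that populate this table only need to render one hash per segment name, with list-typed fields such as `path` stored comma-separated. A minimal sketch of that rendering (the helper name is illustrative, not an existing SONiC API; the comma-separated encoding is an assumption):

```python
def sid_list_entry(segment_name, sids):
    """Render an SRV6_SID_LIST entry as a (key, field_dict) pair."""
    key = "SRV6_SID_LIST|" + segment_name
    # List-typed fields are stored as a single comma-separated string
    return key, {"path": ",".join(sids)}

key, fields = sid_list_entry("seg1", ["BABA:1001:0:10::", "BABA:1001:0:20:F1::"])
assert key == "SRV6_SID_LIST|seg1"
assert fields["path"] == "BABA:1001:0:10::,BABA:1001:0:20:F1::"
```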
+
+**SRV6_MY_SID_TABLE**
+
+Description: New table to hold local SID to behavior mapping
+
+Schema:
+
+```
+; New table
+; holds local SID to behavior mapping, allow 1:1 or n:1 mapping
+
+key = SRV6_MY_SID_TABLE|ipv6address
+; field = value
+block_len = blen ; bit length of block portion in address, default 40
+node_len = nlen ; bit length of node ID portion in address, default 24
+func_len = flen ; bit length of function portion in address, default 16
+arg_len = alen ; bit length of argument portion in address
+action = behavior ; behaviors defined for local SID
+vrf = VRF_TABLE.key ; VRF name for END.DT46, can be empty
+adj = address, ; Optional, list of adjacencies for END.X
+policy = SRV6_POLICY.key ; Optional, policy name for END.B6.ENCAP
+source = address, ; Optional, list of src addrs for encap for END.B6.ENCAP
+
+For example:
+ "SRV6_MY_SID_TABLE" : {
+ "BABA:1001:0:20:F1::" : {
+ "action": "end.dt46",
+ "vrf": "VRF-1001"
+ },
+ "BABA:1001:0:40:F1::" : {
+ "action": "end.dt46",
+ "vrf": "VRF-1001"
+ },
+ "BABA:1001:0:20:F2::" : {
+ "action": "end.x",
+ "adj": [
+                "BABA:2001:0:10::1",
+                "BABA:2001:0:10::2"
+            ]
+ },
+ "BABA:1001:0:20:F3::" : {
+ "action": "end.b6.encap",
+            "policy": "policy1",
+ "source": "A::1"
+ }
+ }
+```
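The four length fields partition the 128-bit IPv6 address into block, node, function, and argument portions, so their sum must not exceed 128. A small sketch of that sanity check (illustrative only, using the defaults from the schema above):

```python
def my_sid_lengths_valid(block_len=40, node_len=24, func_len=16, arg_len=0):
    # block | node | function | argument must fit in a 128-bit IPv6 address;
    # any remaining bits are unused.
    return block_len + node_len + func_len + arg_len <= 128

assert my_sid_lengths_valid()                    # defaults: 40+24+16 = 80 bits
assert not my_sid_lengths_valid(64, 40, 16, 16)  # 136 bits > 128
```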
+
+**SRV6_POLICY_TABLE**
+
+Description: New table that stores SRv6 policies.
+
+Schema:
+
+```
+; New table
+; holds SRv6 policy
+
+key = SRV6_POLICY|policy_name
+
+; field = value
+segment = SRv6_SID_LIST.key, ; List of segment names
+
+For example:
+ "SRV6_POLICY": {
+ "policy1": {
+ "segment": ["seg1", "seg2"]
+ },
+ "policy2": {
+ "segment": ["seg1"]
+ }
+ }
+```
+
+**SRV6_STEER_MAP**
+
+Description: New table that stores prefix to policy mapping.
+
+Schema:
+
+```
+; New table
+; holds prefix to SRv6 SID list encapsulation mapping
+
+key = SRV6_STEER|VRF_NAME:prefix
+ ; Prefix to be steered
+; field = value
+policy = SRV6_POLICY.key ; Policy to steer the prefix
+source = address ; Source addresses for encapsulation
+
+For example:
+ "SRV6_STEER": {
+ "Vrf-red|11.11.11.0/24": {
+ "policy": "policy1",
+ "source": "A::1"
+ },
+ "Vrf-blue|2001:a::0/64": {
+ "policy": "policy2",
+ "source": "A::1"
+ }
+ }
+```
+
+## 3.2 AppDB changes
+
+**New SRV6_SID_LIST_TABLE**
+
+Description: New table to hold SRv6 SID list.
+
+Schema:
+
+```
+; New table
+; holds SRv6 SID list
+
+key = SRV6_SID_LIST_TABLE:segment_name
+
+; field = value
+path = SID, ; List of SIDs
+```
+
+**New SRV6_MY_SID_TABLE**
+
+Description: New table to hold local SID to behavior mapping
+
+Schema:
+
+```
+; New table
+; holds local SID to behavior mapping
+
+key = SRV6_MY_SID_TABLE:block_len:node_len:func_len:arg_len:ipv6address
+
+; field = value
+action = behavior ; behaviors defined for local SID
+vrf = VRF_TABLE.key ; VRF name for END.DT46, can be empty
+adj = address, ; List of adjacencies for END.X, can be empty
+segment = SRv6_SID_LIST.key, ; List of segment names for END.B6.ENCAP, can be empty
+source = address, ; List of src addrs for encap for END.B6.ENCAP
+```
+
+**Modify ROUTE_TABLE**
+
+Description: Existing Route Table is extended to add SID list.
+
+Schema:
+
+```
+;Stores a list of routes
+;Status: Mandatory
+
+key = ROUTE_TABLE:VRF_NAME:prefix ;
+nexthop = prefix, ; IP addresses separated ',' (empty indicates no gateway). May indicate VXLAN endpoint if vni_label is non zero.
+intf = ifindex? PORT_TABLE.key ; zero or more separated by ',' (zero indicates no interface)
+vni_label = VRF.vni ; zero or more separated by ',' (empty value for non-vxlan next-hops). May carry MPLS label in future.
+router_mac = mac_address ; zero or more remote router MAC address separated by ',' (empty value for non-vxlan next-hops)
+blackhole = BIT ; Set to 1 if this route is a blackhole (or null0)
+segment = SRV6_SID_LIST.key ; New optional field. List of segment names, separated by ','
+seg_src = address ; New optional field. Source addrs for sid encap
+```
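Since the two new fields are optional, extending an existing ROUTE_TABLE entry is purely additive. A sketch of what a controller-side update might look like (hypothetical helper, not an existing API):

```python
def add_srv6_fields(route_entry, segments, seg_src):
    """Return a copy of an APPL_DB ROUTE_TABLE entry with the new optional
    SRv6 fields added; all pre-existing fields are left untouched."""
    entry = dict(route_entry)
    entry["segment"] = ",".join(segments)  # segment names, comma-separated
    entry["seg_src"] = seg_src             # source address for SID encap
    return entry

route = {"nexthop": "109.109.109.109", "ifname": "Vlan1001"}
updated = add_srv6_fields(route, ["seg1", "seg2"], "A::1")
assert updated["segment"] == "seg1,seg2"
assert "segment" not in route  # original entry is not mutated
```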
+
+**Two cases:**
+
+**CASE A :** route entry with the same key(VRF_NAME:prefix ) already exists in APPL_DB ROUTE_TABLE
+
+**CASE B:** route entry with the same key(VRF_NAME:prefix ) DOES NOT exist in APPL_DB ROUTE_TABLE
+
+For both cases, we don't care about the fields **nexthop**, **intf**, **vni_label**, **router_mac**, and **blackhole**, since the SRv6 related fields will be added, which include **segment**. The segment field is actually a list of SID lists; it tells that the packets will get an SRv6 encap header, and the SID list will be used for nexthop lookup in Srv6Orch.
+
+
+
+The controller only needs the information below to update the APPL_DB ROUTE_TABLE, whether the entry already exists or not.
+
+**key**: the key in ROUTE_TABLE is the same as the one in SRV6_STEER_MAP
+
+**segment**: from the SRV6_STEER_MAP entry, the policy field points to the entry in SRV6_POLICY_TABLE, which holds the segment field. Srv6Orch will use the segment to find the SID list and the SIDs for nexthop lookup.
+
+**seg_src**: from the SRV6_STEER_MAP entry, the source field indicates what will be used here
+
+ EXAMPLE : how to modify ROUTE_TABLE
+ current CONFIG_DB:
+ "SRV6_SID_LIST": {
+ "seg1": {
+ "path": [
+ "BABA:1001:0:10::",
+ "BABA:1001:0:20:F1::"
+ ]
+ },
+ "seg2": {
+ "path": [
+ "BABA:1001:0:30::",
+ "BABA:1001:0:40:F1::"
+ ]
+ }
+ }
+
+ "SRV6_STEER": {
+ "Vrf-red|11.11.11.0/24": {
+ "policy": "policy1",
+ "source": "A::1"
+ },
+ "Vrf-blue|2001:a::0/64": {
+ "policy": "policy2",
+ "source": "A::1"
+ }
+ }
+
+ "SRV6_POLICY": {
+ "policy1": {
+ "segment": "seg1, seg2"
+ },
+ "policy2": {
+ "segment": "seg1"
+ }
+ }
+
+ current APPL_DB:
+ "ROUTE_TABLE": {
+ "Vrf-red:11.11.11.0/24": {
+ "nexthop" : "109.109.109.109",
+ "ifname" : "Vlan1001",
+ "vni_label" : "1001",
+ "router_mac" : "c6:97:75:ed:06:72"
+ }
+ }
+
+ future APPL_DB:
+ "ROUTE_TABLE": {
+ "Vrf-red:11.11.11.0/24": {
+ "nexthop" : "109.109.109.109",
+ "ifname" : "Vlan1001",
+ "vni_label" : "1001",
+ "router_mac" : "c6:97:75:ed:06:72",
+
+ "segment": "seg1,seg2",
+ "seg_src": "A::1"
+ }
+ }
+
+An SRV6_STEER_MAP generated route entry has higher priority than a matching entry in ROUTE_TABLE. The controller will update the ROUTE_TABLE entry and modify it in the APPL_DB ROUTE_TABLE if one exists.
+
+Srv6Orch will mark which route entries are SRv6 modified and have higher priority for SID and nexthop lookup. FRR or other modules cannot modify these high priority routes; they can only be modified via Srv6Orch.
+
+**Resolve SID NextHop Via Controller or Others:**
+
+If the SID subnet (in the example below, 2000::31 on E31) is directly connected to E11, the nexthop can be found. If not, we should have a controller to indicate the nexthop information on E11 for subnet 2000::31, since FRR is not involved in Phase #1; a static route should be installed via the controller in the APPL_DB ROUTE_TABLE. Alternatively, the network itself may run a basic IPv6 protocol with all the basic IPv6 information fully exchanged; it depends on how the architecture is designed.
+
+Besides adding/modifying routes, the controller can delete routes. When the controller deletes a route, the higher priority flag is removed and the route is deleted. FRR or other modules can then modify the route the same way as today, when the SRv6 high priority flag does not exist.
+
+**An Example as below:**
+![draw-configdb](images/Srv6Example.png)
+
+
+## 3.3 Orchestration Agent Changes
+
+A new orchagent (Srv6Orch) is created to manage all SRv6 related objects. Srv6Orch listens to APPL_DB for updates and creates/updates SAI objects in ASIC_DB.
+
+![draw-configdb](images/srv6db.png)
+
+![draw-configdb](images/srv6orch.png)
+
+**SRV6Orchagent**
+
+This orchagent is responsible for creating SRv6 related objects in ASIC_DB with the information from APPL_DB.
+
+
+
+Srv6Orch listens to all updates of SRV6_SID_LIST_TABLE to create a SAI_SEGMENTROUTE_SIDLIST object with the list of IPv6 prefixes. It also creates an SRv6 nexthop with the existing SIDLIST object handle. Any update to the segment's IPv6 prefixes will be pushed to ASIC_DB.
+
+
+
+When a route entry is added to ROUTE_TABLE, RouteOrch calls Srv6Orch to get the SRv6 nexthop with all associated segment prefixes. If a route entry references a list of ECMP segments, the orchagent creates an ECMP group with the already created nexthop members and passes the ECMP object handle to RouteOrch. When all route entries referencing an ECMP group are deleted, the ECMP group object is deleted.
+
+
+
+The orchagent listens to SRV6_MY_SID_TABLE in APPL_DB to create SAI objects in ASIC_DB. For SRV6_MY_SID_TABLE's END.X action, the orchagent queries the existing IP nexthop and nexthop group database, uses the existing object handle, and updates ASIC_DB. When the IP nexthop doesn't exist, the SRV6_MY_SID_TABLE object is programmed with a Drop action and NeighOrch is notified to resolve the IP nexthop. When that nexthop is resolved, Srv6Orch updates the SRV6_MY_SID_TABLE entry with a valid IP nexthop handle and a Forward action. The orchagent creates a new ECMP group when nexthops exist for all the nexthop addresses in the END.X action and no matching group exists in the DB. For SRV6_MY_SID_TABLE's END.DT46 action, the orchagent passes the VRF handle associated with the VRF name to ASIC_DB. For SRV6_MY_SID_TABLE's END.B6.Encaps action, the orchagent uses the existing nexthop/nexthop group for the list of segments or creates a new nexthop group.
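The Drop-until-resolved sequence for END.X described above can be modeled as a small state machine (the names are illustrative, not the actual SWSS classes):

```python
class MySidEntry:
    """Toy model of END.X programming: an entry is programmed with a Drop
    action until its IP next hop resolves, then updated to Forward with the
    resolved handle."""

    def __init__(self, sid, nexthop_handle=None):
        self.sid = sid
        self.nexthop_handle = nexthop_handle
        self.action = "forward" if nexthop_handle else "drop"

    def on_nexthop_resolved(self, handle):
        # Called when NeighOrch reports the next hop as resolved
        self.nexthop_handle = handle
        self.action = "forward"

entry = MySidEntry("BABA:1001:0:20:F2::")
assert entry.action == "drop"        # next hop not yet resolved
entry.on_nexthop_resolved(0x1234)
assert entry.action == "forward"
```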
+
+
+
+**NextHopKey Changes**
+
+
+
+RouteOrch uses NextHopKey to create SAI next hop objects. To support SRv6 segments in the next hop, the key is modified to include a segment string and a source address string used for SRv6 source encapsulation.
+
+
+```
+struct NextHopKey {
+    IpAddress ip_address;
+    ...
+    string segment;
+    string srv6_source;
+    ...
+};
+```
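The consequence of the extended key is that two next hops with the same IP address but different segments (or sources) are distinct objects. A small model of that behavior, with a hashable tuple standing in for the C++ struct (illustrative only):

```python
from collections import namedtuple

# segment and srv6_source participate in equality and hashing, so next hops
# that differ only in their SID list map to distinct keys.
NextHopKey = namedtuple("NextHopKey", ["ip_address", "segment", "srv6_source"])

a = NextHopKey("2001:db8::1", "seg1", "A::1")
b = NextHopKey("2001:db8::1", "seg2", "A::1")
assert a != b
assert len({a, b}) == 2
```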
+
+
+
+## 3.4 SAI
+
+ https://github.com/opencomputeproject/SAI/compare/master...ashutosh-agrawal:srv6
+
+SR Source behavior:
+
+1) Create a SID list object with 3 segments
+
+ sidlist_entry_attrs[0].id = SAI_SEGMENTROUTE_SIDLIST_ATTR_TYPE
+
+    sidlist_entry_attrs[0].value.s32 = SAI_SEGMENTROUTE_SIDLIST_TYPE_ENCAPS_RED
+
+ sidlist_entry_attrs[1].id = SAI_SEGMENTROUTE_SIDLIST_ATTR_SEGMENT_LIST
+
+ sidlist_entry_attrs[1].value.objlist.count = 3;
+
+ CONVERT_STR_TO_IPV6(sidlist_entry_attrs[1].value.objlist.list[0], "2001:db8:85a3::8a2e:370:7334");
+
+ CONVERT_STR_TO_IPV6(sidlist_entry_attrs[1].value.objlist.list[1], "2001:db8:85a3::8a2e:370:2345");
+
+ CONVERT_STR_TO_IPV6(sidlist_entry_attrs[1].value.objlist.list[2], "2001:db8:85a3::8a2e:370:3456");
+
+ saistatus = sai_v6sr_api->create_segmentroute_sidlist(&sidlist_id, switch_id, 2, sidlist_entry_attrs);
+
+
+
+2) Create Nexthop with the sidlist object
+
+ nexthop_entry_attrs[0].id = SAI_NEXTHOP_ATTR_TYPE
+
+ nexthop_entry_attrs[0].value = SAI_NEXT_HOP_TYPE_SEGMENTROUTE_SIDLIST
+
+ nexthop_entry_attrs[1].id = SAI_NEXTHOP_ATTR_TUNNEL_ID
+
+ nexthop_entry_attrs[1].value.oid = tunnel_id
+
+ nexthop_entry_attrs[2].id = SAI_NEXT_HOP_ATTR_SEGMENTROUTE_SIDLIST_ID
+
+ nexthop_entry_attrs[2].value.oid = sidlist_id
+
+ saistatus = sai_nexthop_api->create_nexthop(&nexthop_id, switch_id, 3, nexthop_entry_attrs)
+
+
+
+3) Create route entry with SRv6 Nexthop
+
+ route_entry.switch_id = 0
+
+ route_entry.vr_id = vr_id_1 // created elsewhere
+
+ route_entry.destination.addr_family = SAI_IP_ADDR_FAMILY_IPV4
+
+ route_entry.destination.addr.ip4 = "198.51.100.0"
+
+ route_entry.destination.addr.mask = "255.255.255.0"
+
+ route_entry_attrs[0].id = SAI_ROUTE_ENTRY_ATTR_NEXT_HOP_ID;
+
+ route_entry_attrs[0].value.oid = nexthop_id;
+
+    saistatus = sairoute_api->create_route(&route_entry, 1, route_entry_attrs)
+
+
+
+SR Transit/Endpoint behavior
+
+
+
+my_sid_entry.switch_id = 0
+
+my_sid_entry.vr_id = vr_id_1 // underlay VRF
+
+my_sid_entry.locator_len = 64
+
+my_sid_entry.function_len = 8
+
+CONVERT_STR_TO_IPV6(my_sid_entry.sid, "2001:db8:0:1::1000:0:0:0");
+
+
+
+my_sid_attr[0].id = SAI_MY_SID_ENTRY_ATTR_ENDPOINT_BEHAVIOR
+
+my_sid_attr[0].value = SAI_MY_SID_ENTRY_ENDPOINT_TYPE_DT46
+
+my_sid_attr[1].id = SAI_MY_SID_ENTRY_ATTR_VRF
+
+my_sid_attr[1].value.oid = vr_id_1001 // overlay vrf, created elsewhere
+
+saistatus = saiv6sr_api->create_my_sid_entry(&my_sid_entry, 2, my_sid_attr)
+
+
+## 3.5 YANG Model
+```
+module: sonic-srv6
+ +--rw sonic-srv6
+ +--rw SRV6_SID_LIST
+ | +--rw SRV6_SID_LIST_LIST* [name]
+ | +--rw name string
+ | +--rw path* inet:ipv6-address
+ +--rw SRV6_MY_SID
+ | +--rw SRV6_MY_SID_LIST* [ip-address]
+ | +--rw ip-address inet:ipv6-address
+ | +--rw block_len? uint16
+ | +--rw node_len? uint16
+ | +--rw func_len? uint16
+ | +--rw arg_len? uint16
+ | +--rw action? enumeration
+ | +--rw vrf? -> /vrf:sonic-vrf/VRF/VRF_LIST/name
+ | +--rw adj* inet:ipv6-address
+ | +--rw policy? -> /sonic-srv6/SRV6_POLICY/SRV6_POLICY_LIST/name
+ | +--rw source? inet:ipv6-address
+ +--rw SRV6_POLICY
+ | +--rw SRV6_POLICY_LIST* [name]
+ | +--rw name string
+ | +--rw segment* -> /sonic-srv6/SRV6_SID_LIST/SRV6_SID_LIST_LIST/name
+ +--rw SRV6_STEER
+ +--rw SRV6_STEER_LIST* [vrf-name ip-prefix]
+ +--rw vrf-name -> /vrf:sonic-vrf/VRF/VRF_LIST/name
+ +--rw ip-prefix union
+ +--rw policy? -> /sonic-srv6/SRV6_POLICY/SRV6_POLICY_LIST/name
+ +--rw source? inet:ipv6-address
+```
+
+## 4 Unit Test
+
+TBD
+
+## 5 References
+
+- [SAI IPv6 Segment Routing Proposal for SAI 1.2.0](https://github.com/opencomputeproject/SAI/blob/1066c815ddd7b63cb9dbf4d76e06ee742bc0af9b/doc/SAI-Proposal-IPv6_Segment_Routing-1.md)
+
+- [RFC 8754](https://tools.ietf.org/html/rfc8754)
+- [RFC 8986](https://www.rfc-editor.org/rfc/rfc8986.html)
+- [draft-filsfils-spring-segment-routing-policy](https://tools.ietf.org/html/draft-filsfils-spring-segment-routing-policy-06)
+
+- [draft-ali-spring-bfd-sr-policy-06](https://tools.ietf.org/html/draft-ali-spring-bfd-sr-policy-06)
+
+- [draft-filsfils-spring-net-pgm-extension-srv6-usid](https://tools.ietf.org/html/draft-filsfils-spring-net-pgm-extension-srv6-usid-08)
+
+- [draft-cl-spring-generalized-srv6-for-cmpr](https://tools.ietf.org/html/draft-cl-spring-generalized-srv6-for-cmpr-02)
+
+
diff --git a/doc/subport/sonic-sub-port-intf-hld.md b/doc/subport/sonic-sub-port-intf-hld.md
index 5f33638552..1e0ad3f4bd 100644
--- a/doc/subport/sonic-sub-port-intf-hld.md
+++ b/doc/subport/sonic-sub-port-intf-hld.md
@@ -9,9 +9,10 @@
* [1 Requirements](#1-requirements)
* [2 Schema design](#2-schema-design)
* [2.1 Configuration](#21-configuration)
- * [2.1.1 config_db.json](#211-config-db-json)
- * [2.1.2 CONFIG_DB](#212-config-db)
- * [2.1.3 CONFIG_DB schemas](#213-config-db-schemas)
+ * [2.1.1 Naming Convention for sub-interfaces](#211-naming-convention-for-sub-interfaces)
+ * [2.1.2 config_db.json](#212-config-db-json)
+ * [2.1.3 CONFIG_DB](#213-config-db)
+ * [2.1.4 CONFIG_DB schemas](#214-config-db-schemas)
* [2.2 APPL_DB](#22-appl-db)
* [2.3 STATE_DB](#23-state-db)
* [2.4 SAI](#24-sai)
@@ -27,6 +28,7 @@
* [3.1 Sub port interface creation](#31-sub-port-interface-creation)
* [3.2 Sub port interface runtime admin status change](#32-sub-port-interface-runtime-admin-status-change)
* [3.3 Sub port interface removal](#33-sub-port-interface-removal)
+ * [3.4 Sub port MTU Configuration](#34-sub-port-mtu-configuration)
* [4 CLI](#4-cli)
* [4.1 Config commands](#41-config-commands)
* [4.1.1 Config a sub port interface](#411-config-a-sub-port-interface)
@@ -44,19 +46,23 @@
* [6.3.2 Remove all IP addresses from a sub port interface](#632-remove-all-ip-addresses-from-a-sub-port-interface)
* [6.3.3 Remove a sub port interface](#633-remove-a-sub-port-interface)
* [7 Scalability](#7-scalability)
- * [8 Port channel renaming](#8-port-channel-renaming)
+ * [8 Upgrade and downgrade considerations](#8-upgrade-and-downgrade-considerations)
* [9 Appendix](#9-appendix)
* [9.1 Difference between a sub port interface and a vlan interface](#91-difference-between-a-sub-port-interface-and-a-vlan-interface)
- * [10 Open questions](#10-open-questions)
- * [11 Acknowledgment](#11-acknowledgment)
- * [12 References](#12-references)
+ * [10 API Library](#10-api-library)
+ * [10.1 SWSS CPP Library](#101-swss-cpp-library)
+ * [10.2 Python Library](#102-python-library)
+ * [11 Open questions](#11-open-questions)
+ * [12 Acknowledgment](#12-acknowledgment)
+ * [13 References](#13-references)
# Revision history
-| Rev | Date | Author | Change Description |
-|:---:|:-----------:|:------------------:|-----------------------------------|
-| 0.1 | 07/01/2019 | Wenda Ni | Initial version |
+| Rev | Date | Author | Change Description |
+|:---:|:-----------:|:------------------:|---------------------------------------------------------|
+| 0.1 | 07/01/2019 | Wenda Ni | Initial version |
+| 0.2 | 12/17/2020 | Broadcom | Subinterface naming convention changes and enhancements |
# Scope
A sub port interface is a logical interface that can be created on a physical port or a port channel.
@@ -96,8 +102,14 @@ A sub port interface shall support the following features:
* VRF
* RIF counters
* QoS setting inherited from parent physical port or port channel
-* mtu inherited from parent physical port or port channel
+* MTU:
+  The MTU of the subinterface is inherited from the parent interface (physical or portchannel) by default.
+  If a subinterface MTU is configured, it is applied as follows:
+  - If the subinterface MTU <= the parent port MTU, the configured subinterface MTU will be applied.
+  - If the subinterface MTU > the parent port MTU, the parent port MTU will be applied.
* Per sub port interface admin status config
+  - The kernel allows subinterface netdev admin UP only if the parent interface netdev is admin UP.
+    Hence subinterface admin UP is performed only after the parent interface is admin UP.
# 2 Schema design
@@ -105,19 +117,48 @@ We introduce a new table "VLAN_SUB_INTERFACE" in the CONFIG_DB to host the attri
For APPL_DB and STATE_DB, we do not introduce new tables for sub port interfaces, but reuse existing tables to host sub port interface keys.
## 2.1 Configuration
-### 2.1.1 config_db.json
+### 2.1.1 Naming Convention for sub-interfaces:
+
+Since the Linux kernel restricts netdevice names to 15 characters, physical sub-interfaces (where the interface number exceeds 99) and port channel sub-interfaces cannot follow the same nomenclature as physical interfaces.
+Hence a short name convention needs to be supported for subinterfaces.
+
+All DB entries & kernel netdevices corresponding to a sub-interface will be created based on the user configuration.
+- If the user configures subinterfaces in short name format, all DB entries & kernel netdevices will be created in short name format.
+- If the user configures subinterfaces in the existing long name format, all DB entries & netdevices will be created in the existing long name format.
+
+The short naming convention for sub-interfaces follows the Ethxxx.yyyy and Poxxx.yyyy formats.
+The long naming convention for sub-interfaces follows the Ethernetxx.yyyy format.
+Physical subinterfaces whose interface number exceeds 2 digits, and PortChannel subinterfaces in long name format, were not supported earlier and will NOT be supported due to the name length restriction.
+
+Intfmgrd & IntfsOrch, which manage sub-interfaces, should be aware of this mapping to get parent interface properties.
+
+SWSS CPP library & Click Python API library will be provided to perform short name to long name conversion and vice versa.
+Please refer to the API library section for details.
+
+All click config CLIs for sub-interfaces will be enhanced to accept both long name & short name format for subinterfaces.
+
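The short↔long mapping described above can be sketched as follows. This is a minimal illustration in Python, not the actual SWSS/Click library code; the prefix pairs Eth↔Ethernet and Po↔PortChannel are assumptions based on the formats named above.

```python
import re

# Hypothetical prefix mapping between short and long name formats.
_SHORT_TO_LONG = {"Eth": "Ethernet", "Po": "PortChannel"}
_LONG_TO_SHORT = {v: k for k, v in _SHORT_TO_LONG.items()}

def short_to_long(name: str) -> str:
    """Convert e.g. 'Eth64.10' -> 'Ethernet64.10'."""
    m = re.fullmatch(r"(Eth|Po)(\d+)\.(\d{1,8})", name)
    if m is None:
        raise ValueError(f"not a short-format sub-interface name: {name}")
    return f"{_SHORT_TO_LONG[m.group(1)]}{m.group(2)}.{m.group(3)}"

def long_to_short(name: str) -> str:
    """Convert e.g. 'Ethernet0.100' -> 'Eth0.100'."""
    m = re.fullmatch(r"(Ethernet|PortChannel)(\d+)\.(\d{1,8})", name)
    if m is None:
        raise ValueError(f"not a long-format sub-interface name: {name}")
    return f"{_LONG_TO_SHORT[m.group(1)]}{m.group(2)}.{m.group(3)}"

print(short_to_long("Eth64.10"))       # Ethernet64.10
print(long_to_short("Ethernet0.100"))  # Eth0.100
```

The real conversion is provided by the SWSS CPP library and the Python library described in section 10.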
+### 2.1.2 config_db.json
```
"VLAN_SUB_INTERFACE": {
- "{{ port_name }}.{{ vlan_id }}": {
- "admin_status" : "{{ adminstatus }}"
+ "{{ port_name }}.{{ subinterface_id }}": {
+ "vlan" : <1-4094>,
+ "admin_status" : "{{ adminstatus }}",
+ "vrf_name" :
},
- "{{ port_name }}.{{ vlan_id }}|{{ ip_prefix }}": {}
+ "{{ port_name }}.{{ subinterface_id }}|{{ ip_prefix }}": {}
},
```
A key in the VLAN_SUB_INTERFACE table is the name of a sub port, which consists of two sections delimited by a "." (symbol dot).
-The section before the dot is the name of the parent physical port or port channel. The section after the dot is the dot1q encapsulation vlan id.
+The section before the dot is the name of the parent physical port or port channel. The section after the dot is a number which uniquely identifies the sub-interface on the parent interface.
+In long name format, the sub-interface id value represents the vlan id.
+In short name format, the sub-interface id value uniquely identifies the subinterface under the parent interface. It can be in the range 1-99999999 (the subinterface id cannot exceed 8 digits).
+
+The vlan field is applicable only to short name format subinterfaces.
+The vlan field identifies the vlan with which the sub-interface is associated using 802.1Q trunking.
+Note that the subinterface_id and vlan_id of a subinterface can differ in short name format.
-mtu of a sub port interface is inherited from its parent physical port or port channel, and is not configurable in the current design.
+In the Click CLI, the user will be able to configure the vlan id associated with a sub-interface in short name format.
+In the existing long name format, the sub-interface id is used as the vlan id.
admin_status of a sub port interface can be either up or down.
In the case field "admin_status" is absent in the config_db.json file, a sub port interface is set admin status up by default at its creation.
@@ -125,34 +166,48 @@ In the case field "admin_status" is absent in the config_db.json file, a sub por
Example configuration:
```
"VLAN_SUB_INTERFACE": {
- "Ethernet64.10": {
+ "Ethernet0.100": {
"admin_status" : "up"
},
- "Ethernet64.10|192.168.0.1/21": {},
- "Ethernet64.10|fc00::/7": {}
+ "Ethernet0.100|192.0.0.1/21": {},
+ "Ethernet0.100|fc0a::/112": {}
+ "Eth64.10": {
+        "vlan" : 100,
+ "admin_status" : "up"
+ },
+ "Eth64.10|192.168.0.1/21": {},
+ "Eth64.10|fc00::/7": {}
},
```
-### 2.1.2 CONFIG_DB
+### 2.1.3 CONFIG_DB
```
-VLAN_SUB_INTERFACE|{{ port_name }}.{{ vlan_id }}
+VLAN_SUB_INTERFACE|{{ port_name }}.{{ subinterface_id }}
+ "vlan" : "{{ vlan-id }}"
"admin_status" : "{{ adminstatus }}"
-VLAN_SUB_INTERFACE|{{ port_name }}.{{ vlan_id }}|{{ ip_prefix }}
+VLAN_SUB_INTERFACE|{{ port_name }}.{{ subinterface_id }}|{{ ip_prefix }}
"NULL" : "NULL"
```
-### 2.1.3 CONFIG_DB schemas
+### 2.1.4 CONFIG_DB schemas
```
; Defines for sub port interface configuration attributes
key = VLAN_SUB_INTERFACE|subif_name ; subif_name is the name of the sub port interface
; subif_name annotations
-subif_name = port_name "." vlan_id ; port_name is the name of parent physical port or port channel
- ; vlanid is DIGIT 1-4094
+subif_name = port_name "." subinterface_id ; port_name is the name of parent physical port or port channel
+ ; In short name format subinterface_id is DIGIT 1-99999999
+ ; In long name format subinterface_id is vlan id.
; field = value
admin_status = up / down ; admin status of the sub port interface
+
+; field = value
+vlan = <1-4094> ; Vlan id in range <1-4094>
+
+; field = value
+vrf_name = ; Name of the Vrf
```
```
@@ -183,24 +238,40 @@ ls32 = ( h16 ":" h16 ) / IPv4address
Example:
```
-VLAN_SUB_INTERFACE|Ethernet64.10
+VLAN_SUB_INTERFACE|Ethernet0.100
"admin_status" : "up"
-VLAN_SUB_INTERFACE|Ethernet64.10|192.168.0.1/21
+VLAN_SUB_INTERFACE|Ethernet0.100|192.0.0.1/21
"NULL" : "NULL"
-VLAN_SUB_INTERFACE|Ethernet64.10|fc00::/7
+VLAN_SUB_INTERFACE|Ethernet0.100|fc0a::/112
+ "NULL" : "NULL"
+
+VLAN_SUB_INTERFACE|Eth64.10
+ "vlan" : 100,
+ "admin_status" : "up"
+
+VLAN_SUB_INTERFACE|Eth64.10|192.168.0.1/21
+ "NULL" : "NULL"
+
+VLAN_SUB_INTERFACE|Eth64.10|fc00::/7
"NULL" : "NULL"
```
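A key can be validated against the schema above with a small sketch like the following (illustrative only; `IFNAMSIZ` is the kernel's interface-name buffer size, which leaves 15 usable characters):

```python
import re

IFNAMSIZ = 16  # Linux IFNAMSIZ; usable netdevice name length is 15 characters

def valid_subif_name(name: str) -> bool:
    """Check a VLAN_SUB_INTERFACE key: <port_name>.<subinterface_id>,
    id in 1-99999999 (at most 8 digits), within the kernel name limit."""
    if len(name) > IFNAMSIZ - 1:
        return False
    m = re.fullmatch(r"(\w+?)\.(\d{1,8})", name)
    if m is None:
        return False
    return 1 <= int(m.group(2)) <= 99999999

print(valid_subif_name("Eth64.10"))          # True
print(valid_subif_name("Ethernet128.1024"))  # False: 16 characters exceed the limit
```

The second example shows why the short name format is required for high-numbered ports: `Ethernet128.1024` is one character over the kernel limit, while `Eth128.1024` fits.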
## 2.2 APPL_DB
```
-INTF_TABLE:{{ port_name }}.{{ vlan_id }}
+INTF_TABLE:{{ port_name }}.{{ subinterface_id }}
+ "vlan" : "{{ vlan id }}"
"admin_status" : "{{ adminstatus }}"
; field = value
admin_status = up / down ; admin status of the sub port interface
+; field = value
+vlan = <1-4094> ; Vlan id in range <1-4094>
+
+; field = value
+vrf_name = ; Name of the Vrf
INTF_TABLE:{{ port_name }}.{{ vlan_id }}:{{ ip_prefix }}
"scope" : "{{ visibility_scope }}"
@@ -213,14 +284,26 @@ family = IPv4 / IPv6 ; address family
Example:
```
-INTF_TABLE:Ethernet64.10
+INTF_TABLE:Ethernet0.100
+ "admin_status" : "up"
+
+INTF_TABLE:Ethernet0.100:192.0.0.1/24
+ "scope" : "global"
+ "family": "IPv4"
+
+INTF_TABLE:Ethernet0.100:fc0a::/112
+ "scope" : "global"
+ "family": "IPv6"
+
+INTF_TABLE:Eth64.10
+ "vlan" : 100
"admin_status" : "up"
-INTF_TABLE:Ethernet64.10:192.168.0.1/24
+INTF_TABLE:Eth64.10:192.168.0.1/24
"scope" : "global"
"family": "IPv4"
-INTF_TABLE:Ethernet64.10:fc00::/7
+INTF_TABLE:Eth64.10:fc00::/7
"scope" : "global"
"family": "IPv6"
```
@@ -229,29 +312,42 @@ INTF_TABLE:Ethernet64.10:fc00::/7
Following the current schema, sub port interface state of a physical port is set to the PORT_TABLE, while sub port interface state of a port channel is set to the LAG_TABLE.
```
-PORT_TABLE|{{ port_name }}.{{ vlan_id }}
+PORT_TABLE|{{ port_name }}.{{ subinterface_id }}
"state" : "ok"
```
```
-LAG_TABLE|{{ port_name }}.{{ vlan_id }}
+LAG_TABLE|{{ port_name }}.{{ subinterface_id }}
"state" : "ok"
```
```
-INTERFACE_TABLE|{{ port_name }}.{{ vlan_id }}|{{ ip_prefix }}
+INTERFACE_TABLE|{{ port_name }}.{{ subinterface_id }}|{{ ip_prefix }}
"state" : "ok"
```
Example:
```
-PORT_TABLE|Ethernet64.10
+PORT_TABLE|Ethernet0.100
+ "state" : "ok"
+```
+```
+INTERFACE_TABLE|Ethernet0.100|192.0.0.1/21
+ "state" : "ok"
+```
+```
+INTERFACE_TABLE|Ethernet0.100|fc0a::/112
+ "state" : "ok"
+```
+
+```
+PORT_TABLE|Eth64.10
"state" : "ok"
```
```
-INTERFACE_TABLE|Ethernet64.10|192.168.0.1/21
+INTERFACE_TABLE|Eth64.10|192.168.0.1/21
"state" : "ok"
```
```
-INTERFACE_TABLE|Ethernet64.10|fc00::/7
+INTERFACE_TABLE|Eth64.10|fc00::/7
"state" : "ok"
```
@@ -335,23 +431,28 @@ sai_status_t status = remove_router_interface(rif_id);
Inside SONiC, we use iproute2 package to manage host sub port interfaces.
Specifically, we use `ip link add link name type vlan id ` to create a host sub port interface.
-This command implies the dependancy that a parent host interface must be created before the creation of a host sub port interface.
+This command implies the dependency that a parent host interface must be created before the creation of a host sub port interface.
Example:
```
-ip link add link Ethernet64 name Ethernet64.10 type vlan id 10
-ip link set Ethernet64.10 mtu 9100
-ip link set Ethernet64.10 up
+ip link add link Ethernet0 name Ethernet0.100 type vlan id 100
+ip link set Ethernet0.100 mtu 9100
+ip link set Ethernet0.100 up
+ip link add link Ethernet64 name Eth64.10 type vlan id 100
+ip link set Eth64.10 mtu 9100
+ip link set Eth64.10 up
```
```
-ip link del Ethernet64.10
+ip link del Ethernet0.100
+ip link del Eth64.10
```
We use `ip address` and `ip -6 address` to add and remove ip addresses on a host sub port interface.
Example:
```
-ip address add 192.168.0.1/24 dev Ethernet64.10
+ip address add 192.0.0.1/24 dev Ethernet0.100
+ip address add 192.168.0.1/24 dev Eth64.10
```
Please note that the use of iproute2 package is internal to SONiC, specifically IntfMgrd.
@@ -369,14 +470,42 @@ Internally, a sub port interface is represented as a Port object to be perceived
# 3 Event flow diagrams
## 3.1 Sub port interface creation
-![](sub_intf_creation_flow.png)
+![](sub_intf_creation_flow_version_2.png)
+
+* The vlan field added to config_db carries the vlan id associated with the subinterface.
+* A sub-interface will be created and treated as ready only if the vlan corresponding to the subinterface is configured.
## 3.2 Sub port interface runtime admin status change
-![](sub_intf_set_admin_status_flow.png)
+![](sub_intf_set_admin_status_flow_version_2.png)
+
+Admin status of the subinterface is tied to its parent interface admin status:
+* Kernel does not allow subinterface netdev UP until its parent netdev is UP.
+* IntfMgrd looks up the admin status of parent interface from STATE_DB|PORT_TABLE.
+ - OP: admin UP of subinterface: If Parent interface is admin UP, subinterface admin UP is performed.
+ - OP: admin down of subinterface: No dependency on parent interface admin status. Subinterface admin down performed.
+* IntfMgrd also subscribes to STATE_DB|PORT_TABLE and APPL_DB|LAG_TABLE for parent interface admin status change to update associated subinterface admin status.
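The gating rule above can be summarized with a small decision helper (a hypothetical illustration, not actual IntfMgrd code):

```python
def subif_admin_action(requested: str, parent_admin: str) -> str:
    """Decide the kernel admin state to apply to a sub-interface netdev.
    Admin DOWN is unconditional; admin UP requires the parent to be UP,
    otherwise the operation is deferred until the parent comes UP."""
    if requested == "down":
        return "down"       # no dependency on the parent interface
    if parent_admin == "up":
        return "up"         # parent is UP, safe to bring the subinterface UP
    return "deferred"       # apply once the parent admin status changes to UP

print(subif_admin_action("up", "down"))  # deferred
print(subif_admin_action("up", "up"))    # up
print(subif_admin_action("down", "up"))  # down
```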
## 3.3 Sub port interface removal
![](sub_intf_removal_flow.png)
+## 3.4 Sub port MTU Configuration
+![](sub_intf_set_mtu_flow_version_2.png)
+
+The subinterface MTU depends on the MTU configured on the parent interface.
+
+* The kernel does not allow a subinterface netdev MTU to exceed its parent netdev MTU.
+* By default, the kernel inherits the subinterface netdev MTU from the parent netdev.
+* If the parent netdev MTU is updated to a value lower than any of its subinterface netdev MTUs, the kernel lowers those subinterface netdev MTUs to the parent netdev MTU. However, the kernel does NOT restore the previous subinterface MTU when the parent netdev MTU is later configured above the subinterface MTU.
+
+To resolve the above dependency:
+
+* Whenever MTU is updated on subinterface
+ - If configured MTU <= Parent MTU, update subinterface MTU.
+ - If configured MTU > Parent interface MTU, do not update subinterface MTU and cache the configured MTU.
+* IntfMgrd subscribes to STATE_DB|PORT_TABLE & APPL_DB|LAG_TABLE.
+ - If Parent interface MTU is changed to < subinterface MTU, APPL_DB|INTF_TABLE for subinterface is updated to parent interface MTU.
+ - If Parent interface MTU is changed to > subinterface MTU, update subinterface MTU to user configured subinterface MTU.
+
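The caching behaviour described above can be modelled as follows. This is an illustrative sketch of the rules, not actual IntfMgrd code; the class and attribute names are hypothetical.

```python
from typing import Optional

class SubIntfMtu:
    """Model of the MTU rules: the effective MTU is capped at the parent
    MTU, and the user-configured value is cached so it can be restored
    when the parent MTU grows again."""

    def __init__(self, parent_mtu: int, configured_mtu: Optional[int] = None):
        self.parent_mtu = parent_mtu
        self.configured_mtu = configured_mtu  # cached user intent, if any

    @property
    def effective_mtu(self) -> int:
        if self.configured_mtu is None:
            return self.parent_mtu                       # inherited by default
        return min(self.configured_mtu, self.parent_mtu)  # clamp to parent

    def on_parent_mtu_change(self, new_parent_mtu: int) -> int:
        """Re-evaluate the effective MTU when the parent MTU changes."""
        self.parent_mtu = new_parent_mtu
        return self.effective_mtu

sub = SubIntfMtu(parent_mtu=9100, configured_mtu=9000)
print(sub.effective_mtu)               # 9000 (<= parent, applied as configured)
print(sub.on_parent_mtu_change(1500))  # 1500 (clamped down to the parent MTU)
print(sub.on_parent_mtu_change(9100))  # 9000 (cached user value restored)
```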
# 4 CLIs
## 4.1 Config commands
### 4.1.1 Config a sub port interface
@@ -409,7 +538,7 @@ Commands:
del Remove a sub port interface
```
```
-Usage: config subinterface add
+Usage: config subinterface add [vlan <1-4094>]
```
```
Usage: config subinterface del
@@ -477,7 +606,8 @@ Example:
```
Sub port interface Speed MTU Vlan Admin Type
------------------ ------- ----- ------ ------- -------------------
- Ethernet64.10 100G 9100 10 up dot1q-encapsulation
+ Eth64.10 100G 9100 100 up dot1q-encapsulation
+ Ethernet0.100 100G 9100 100 up dot1q-encapsulation
```
No operational status is defined on RIF (sub port interface being a type of RIF) in SAI spec.
@@ -551,10 +681,8 @@ We enforce a minimum scalability requirement on the number of sub port interface
| Number of sub port interfaces per physical port or port channel | 250 |
| Number of sub port interfaces per switch | 750 |
-# 8 Port channel renaming
-Linux has the limitation of 15 characters on an interface name.
-For sub port interface use cases on port channels, we need to redesign the current naming convention for port channels (PortChannelXXXX, 15 characters) to take shorter names (such as, PoXXXX, 6 characters).
-Even when the parent port is a physical port, sub port interface use cases, such as Ethernet128.1024, still exceed the 15-character limit on an interface name.
+# 8 Upgrade and Downgrade considerations
+Since subinterfaces are supported in the existing long name CONFIG_DB format, upgrade and downgrade will be seamless with no impact on subinterface functionality.
# 9 Appendix
## 9.1 Difference between a sub port interface and a vlan interface
@@ -564,7 +692,48 @@ Vlan interface is a router interface (RIF type vlan Vlan#) facing a .1Q bridge.
![](vlan_intf_rif.png "Fig. 3: Vlan interface")
__Fig. 3: Vlan interface__
-# 10 Open questions:
+# 10 API Library
+All DB entries & kernel netdevs corresponding to a subinterface can be created in either the short name or the existing long name format.
+Intfmgrd & IntfsOrch, which manage sub-interfaces, should be able to fetch parent interface properties for a given subinterface.
+
+## 10.1 SWSS CPP Library
+In CPP, applications can use the subintf class provided by the sonic-swss library to fetch attributes of a subinterface.
+
+The subintf class provides the following methods:
+
+1. isValid()
+This method returns true if the subinterface is valid.
+A subinterface is considered valid if it follows the Ethxxx.yyyy, Poxxx.yyyy, or Ethernetxx.yyyy format.
+
+2. subIntfIdx()
+Returns a subinterface index as an integer type.
+
+3. longName()
+Returns subinterface name in longname format.
+
+4. shortName()
+Returns subinterface name in shortname format.
+
+5. parentIntfLongName()
+Returns parent interface name in longname format.
+
+6. parentIntfShortName()
+Returns parent interface in shortname format.
+
+
+## 10.2 Python Library
+In Python, applications can use the interface library in utilities_common to convert between longname and shortname.
+
+1. intf_get_longname()
+Returns interface in longname format.
+It returns the longname format for either a subinterface or a parent interface, depending on the argument passed.
+
+2. intf_get_shortname()
+Returns interface in shortname format.
+It returns the shortname format for either a subinterface or a parent interface, depending on the argument passed.
+
+
+# 11 Open questions:
1. Miss policy to be defined in SAI specification
When a 802.1q tagged packet is received on a physical port or a port channel, it will go to the sub port interface that matches the VLAN id inside the packet.
@@ -573,10 +742,10 @@ __Fig. 3: Vlan interface__
As shown in Fig. 1, there is a possibility that a physical port or a port channel may not have a RIF type port created.
In this case, if an untagged packet is received on the physical port or port channel, what is the policy on handling the untagged packet?
-# 11 Acknowledgment
+# 12 Acknowledgment
Wenda would like to thank his colleagues with Microsoft SONiC team, Shuotian, Prince, Pavel, and Qi in particular, Itai with Mellanox for all discussions that shape the design proposal, and community members for comments and feedbacks that improve the design.
-# 12 References
+# 13 References
[1] SAI_Proposal_Bridge_port_v0.9.docx https://github.com/opencomputeproject/SAI/blob/master/doc/bridge/SAI_Proposal_Bridge_port_v0.9.docx
[2] Remove the need to create an object id for vlan in creating a sub port router interface https://github.com/opencomputeproject/SAI/pull/998
diff --git a/doc/subport/sub_intf_creation_flow_version_2.png b/doc/subport/sub_intf_creation_flow_version_2.png
new file mode 100644
index 0000000000..8b9f252379
Binary files /dev/null and b/doc/subport/sub_intf_creation_flow_version_2.png differ
diff --git a/doc/subport/sub_intf_set_admin_status_flow_version_2.png b/doc/subport/sub_intf_set_admin_status_flow_version_2.png
new file mode 100644
index 0000000000..989d284608
Binary files /dev/null and b/doc/subport/sub_intf_set_admin_status_flow_version_2.png differ
diff --git a/doc/subport/sub_intf_set_mtu_flow_version_2.png b/doc/subport/sub_intf_set_mtu_flow_version_2.png
new file mode 100644
index 0000000000..d9e79f01b4
Binary files /dev/null and b/doc/subport/sub_intf_set_mtu_flow_version_2.png differ
diff --git a/doc/system_health_monitoring/system-health-HLD.md b/doc/system_health_monitoring/system-health-HLD.md
index 9b534af838..25b11be2f6 100644
--- a/doc/system_health_monitoring/system-health-HLD.md
+++ b/doc/system_health_monitoring/system-health-HLD.md
@@ -5,69 +5,64 @@
| Rev | Date | Author | Change Description |
|:---:|:-----------:|:------------------:|-----------------------------------|
| 0.1 | | Kebo Liu | Initial version |
-
+| 0.2 |             | Junchao Chen       | Check service status without Monit|
## 1. Overview of the system health monitor
-System health monitor is intended to monitor both critical services and peripheral device status and leverage system log, system status LED to and CLI command output to indicate the system status.
-
-In current SONiC implementation, already have Monit which is monitoring the critical services status and also have a set of daemons(psud, thermaltcld, etc.) inside PMON collecting the peripheral devices status.
-
-System health monitoring service will not monitor the critical services or devices directly, it will reuse the result of Monit and PMON daemons to summary the current status and decide the color of the system health LED.
-
-### 1.1 Services under Monit monitoring
-
-For the Monit, now below services and file system is under monitoring:
-
- admin@sonic# monit summary -B
- Monit 5.20.0 uptime: 1h 6m
- Service Name Status Type
- sonic Running System
- rsyslog Running Process
- telemetry Running Process
- dialout_client Running Process
- syncd Running Process
- orchagent Running Process
- portsyncd Running Process
- neighsyncd Running Process
- vrfmgrd Running Process
- vlanmgrd Running Process
- intfmgrd Running Process
- portmgrd Running Process
- buffermgrd Running Process
- nbrmgrd Running Process
- vxlanmgrd Running Process
- snmpd Running Process
- snmp_subagent Running Process
- sflowmgrd Running Process
- lldpd_monitor Running Process
- lldp_syncd Running Process
- lldpmgrd Running Process
- redis_server Running Process
- zebra Running Process
- fpmsyncd Running Process
- bgpd Running Process
- staticd Running Process
- bgpcfgd Running Process
- root-overlay Accessible Filesystem
- var-log Accessible Filesystem
-
-
-By default any above services or file systems is not in good status will be considered as fault condition.
-
-### 1.2 Peripheral devices status which could impact the system health status
+The system health monitor is intended to monitor both critical services/processes and peripheral device status, and to leverage the system log, system status LED, and CLI command output to indicate the system status.
+
+In the current SONiC implementation, the Monit service can monitor the file system as well as customized script status, so the system health monitor can rely on the Monit service for these items. There is also a set of daemons inside PMON, such as psud and thermalctld, that collect peripheral device status.
+
+The system health monitor needs to monitor the critical service/process status itself and combine it with the results of the Monit service and PMON daemons to summarize the current status and decide the color of the system health LED.
+
+### 1.1 Monitor critical services/processes
+
+#### 1.1.1 Monitor critical services
+
+1. Read FEATURE table in CONFIG_DB, any service whose "STATE" field was configured with "enabled" or "always_enabled" is expected to run in the system
+2. Get running services via docker tool (Use python docker library to get running containers)
+3. Compare the result of #1 with the result of #2; any difference will be considered a fault condition
+
+#### 1.1.2 Monitor critical processes
+
+1. Read FEATURE table in CONFIG_DB, any service whose "STATE" field was configured with "enabled" or "always_enabled" is expected to run in the system
+2. Get the critical processes of each running service by reading the file /etc/supervisor/critical_processes (Use `docker inspect --format "{{.GraphDriver.Data.MergedDir}}"` to get the base directory for a container)
+3. For each container, use "supervisorctl status" to get its critical process status; any critical process not in "RUNNING" status will be considered a fault condition.
+
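The comparison in step 3 can be sketched as a pure function (illustrative only; the real monitor reads the FEATURE table from CONFIG_DB and queries the docker daemon, and the lowercase "state" field name here is an assumption):

```python
def check_critical_services(feature_table: dict, running_containers: set) -> set:
    """Return services that are expected to run (FEATURE table state is
    "enabled" or "always_enabled") but have no running container.
    A non-empty result is a fault condition."""
    expected = {name for name, cfg in feature_table.items()
                if cfg.get("state") in ("enabled", "always_enabled")}
    return expected - running_containers

feature = {
    "bgp":       {"state": "enabled"},
    "telemetry": {"state": "disabled"},
    "swss":      {"state": "always_enabled"},
}
print(check_critical_services(feature, {"bgp"}))  # {'swss'}
```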
+### 1.2 Services under Monit monitoring
+
+Currently, Monit monitors the following programs and file systems:
+
+```
+admin@sonic:~$ sudo monit summary -B
+Monit 5.20.0 uptime: 22h 56m
+ Service Name Status Type
+ sonic Running System
+ rsyslog Running Process
+ root-overlay Accessible Filesystem
+ var-log Accessible Filesystem
+ routeCheck Status ok Program
+ diskCheck Status ok Program
+ container_checker Status ok Program
+ vnetRouteCheck Status ok Program
+ container_memory_telemetry Status ok Program
+```
+
+By default, any service that is not in its expected status will be considered a fault condition.
+
+### 1.3 Peripheral devices status which could impact the system health status
- Any fan is missing/broken
-- Fan speed is below minimal range
+- Fan speed is lower than minimal value
- PSU power voltage is out of range
-- PSU temperature is too hot
+- PSU temperature is higher than threshold
- PSU is in bad status
-- ASIC temperature is too hot
+- ASIC temperature is higher than threshold
-### 1.3 Customization of monitored critical services and devices
+### 1.4 Customization of monitored critical services and devices
-#### 1.3.1 Ignore some of monitored critical services and devices
+#### 1.4.1 Ignore some of monitored critical services and devices
The list of monitored critical services and devices can be customized by a configuration file, the user can rule out some services or device sensors status from the monitor list. System health monitor will load this configuration file at next run and ignore the services or devices during the routine check.
```json
{
@@ -91,12 +86,12 @@ The filter string is case sensitive. Currently, it support following filters:
- .temperature: ignore temperature check for a specific PSU
- .voltage: ignore voltage check for a specific PSU
-The default filter is to filter nothing. Unknown filters will be silently ignored. The "serivces_to_ignore" and "devices_to_ignore" section must be an string array or it will use default filter.
+The default filter is to filter nothing. Unknown filters will be silently ignored. The "services_to_ignore" and "devices_to_ignore" sections must be string arrays, otherwise the default filter is used.
This configuration file will be platform specific and shall be added to the platform folder(/usr/share/sonic/device/{platform_name}/system_health_monitoring_config.json).
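The fallback behaviour for malformed sections can be sketched as follows (an illustrative parser, not the actual service code):

```python
import json

def load_ignore_lists(config_text: str):
    """Parse the platform ignore-list config. A section that is not a
    list of strings falls back to the default filter (ignore nothing)."""
    cfg = json.loads(config_text)

    def section(key: str) -> set:
        val = cfg.get(key, [])
        if isinstance(val, list) and all(isinstance(x, str) for x in val):
            return set(val)
        return set()  # default filter: ignore nothing

    return section("services_to_ignore"), section("devices_to_ignore")

services, devices = load_ignore_lists(
    '{"services_to_ignore": ["telemetry"], "devices_to_ignore": "bad"}')
print(services)  # {'telemetry'}
print(devices)   # set() -- malformed section falls back to the default
```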
-#### 1.3.2 Extend the monitoring with adding user specific program to Monit
-Monit support to check program(scripts) exit status, if user want to monitor something that beyond critical serives or some special device not included in the above list, they can provide a specific scripts and add it to Monit check list, then the result can also be collected by the system health monitor. It requires 2 steps to add an external checker.
+#### 1.4.2 Extend the monitoring by adding a user-specific program to Monit
+Monit supports checking program (script) exit status. If users want to monitor something beyond critical services, or some special device not included in the above list, they can provide specific scripts and add them to the Monit checking list. The result can then also be collected by the system health monitor. It requires two steps to add an external checker.
1. Prepare program whose command line output must qualify:
@@ -130,9 +125,9 @@ The configuration shall be:
}
```
-### 1.4 system status LED color definition
+### 1.5 system status LED color definition
-default system status LED color definition is like
+The default system status LED color definition is as follows:
| Color | Status | Description |
|:----------------:|:-------------:|:-----------------------:|
@@ -153,27 +148,30 @@ Considering that different vendors platform may have different LED color capabil
}
```
+The field "booting" is deprecated because there is no booting stage anymore. For backward compatibility, users can still configure this field, but it won't take effect.
+
## 2. System health monitor service business logic
-System health monitor daemon will running on the host, periodically(every 60s) check the "monit summary" command output and PSU, fan, thermal status which stored in the state DB, if anything wrong with the services monitored by monit or peripheral devices, system status LED will be set to fault status. When fault condition relieved, system status will be set to normal status.
+The system health monitor daemon will run on the host and periodically (every 60 seconds) check critical service/process status, the output of the command "monit summary", and the PSU, fan, and thermal status stored in the state DB. If anything is abnormal, the system status LED will be set to fault status. When the fault condition is relieved, the system status will be set back to normal.
-Before the switch boot up finish, the system health monitoring service shall be able to know the switch is in boot up status(see open question 1).
+The system health service shall start after database.service and updategraph.service. The Monit service has a default 300-second start delay; the system health service shall not wait for the Monit service, as Monit only monitors part of the system. However, the system health service shall treat the system as "Not OK" until the Monit service starts to work.
-If monit service is not avalaible, will consider system in fault condition.
-FAN/PSU/ASIC data not available will also considered as fault conditon.
+An empty FEATURE table will be considered a fault condition.
+A service whose critical_processes file cannot be parsed will be considered a fault condition. An empty or absent critical_processes file is not a fault condition and shall be skipped.
+If the Monit service is not running or is in a dead state, the system will be considered in fault condition.
+If FAN/PSU/ASIC data is not available, this will be considered a fault condition.
Incomplete data in the DB will also be considered as fault condition, e.g., PSU voltage data is there but threshold data not available.
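The "incomplete data" rule can be illustrated with a small check for one device record (field names here are illustrative, not the actual STATE DB schema):

```python
def check_psu_voltage(psu: dict) -> list:
    """Return fault strings for one PSU record. Missing data, and
    incomplete data (value present but thresholds absent), are both
    treated as fault conditions, per the rules above."""
    faults = []
    if "voltage" not in psu:
        faults.append("PSU voltage data is not available")
    elif "voltage_min_th" not in psu or "voltage_max_th" not in psu:
        faults.append("PSU voltage threshold data is not available")
    elif not psu["voltage_min_th"] <= psu["voltage"] <= psu["voltage_max_th"]:
        faults.append("PSU voltage is out of range")
    return faults

print(check_psu_voltage({"voltage": 12.1}))  # threshold data missing -> fault
print(check_psu_voltage(
    {"voltage": 12.1, "voltage_min_th": 11, "voltage_max_th": 13}))  # []
```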
Monit, thermalctld, and psud will raise syslog messages when a fault condition is encountered, so the system health monitor will only generate some general syslog messages in these situations to avoid redundancy. For example, "system health status change to fault" can be printed when a fault condition is met, and "system health status change to normal" when it has recovered.
-this service will be started after system boot up(after database.service and updategraph.service).
## 3. System health data in redis database
System health service will populate system health data to STATE db. A new table "SYSTEM_HEALTH_INFO" will be created to STATE db.
; Defines information for a system health
- key = SYSTEM_HEALTH_INFO ; health information for the switch
+ key = SYSTEM_HEALTH_INFO ; health information for the switch
; field = value
summary = STRING ; summary status for the switch
= STRING ; an entry for a service or device
@@ -244,7 +242,7 @@ Add a new "show system-health" command line to the system
system-health Show system health status
...
-"show system-health" CLI has three sub command, "summary" and "detail" and "monitor-list". With command "summary" will give brief outpt of system health status while "detail" will be more verbose.
+"show system-health" CLI has three sub-commands: "summary", "detail", and "monitor-list". The "summary" command gives a brief output of system health status, while "detail" is more verbose.
"monitor-list" command will list all the services and devices under monitoring.
admin@sonic# show system-health ?
@@ -281,7 +279,7 @@ When something is wrong
for the "detail" sub command output, it will give out all the services and devices status which is under monitoring, and also the ignored service/device list will also be displayed.
-"moniter-list" will give a name list of services and devices exclude the ones in the ignore list.
+"monitor-list" will give a list of the names of services and devices, excluding the ones in the ignore list.
When the CLI been called, it will directly analyze the "monit summary" output and the state DB entries to present a summary about the system health status. The status analyze logic of the CLI shall be aligned/shared with the logic in the system health service.
@@ -300,20 +298,8 @@ Fault condition and CLI output string table
| FAN data is not available in the DB|FAN data is not available|
| ASIC data is not available in the DB|ASIC data is not available|
-See open question 2 for adding configuration CLIs.
-
## 6. System health monitor test plan
1. If some critical service missed, check the CLI output, the LED color and error shall be as expected.
2. Simulate PSU/FAN/ASIC and related sensor failure via mock sysfs and check the CLI output, the LED color and error shall be as expected.
-3. Change the monitor service/device list then check whether the system health monitor service works as expected; also check whether the result of "show system-health monitor-list" aligned.
-
-## 7. Open Questions
-
-1. How to determine the SONiC system is in boot up stage? The current design is to compare the system up time with a "boot_timeout" value. The system up time is got from "cat /proc/uptime". The default "boot_timeout" is 300 seconds and can be configured by configuration. System health service will not do any check until SONiC system finish booting.
-
-```json
-{
- "boot_timeout": 300
-}
-```
+3. Change the monitor service/device list, then check whether the system health monitor service works as expected; also check whether the result of "show system-health monitor-list" is aligned.
diff --git a/doc/vxlan/EVPN/EVPN_VXLAN_HLD.md b/doc/vxlan/EVPN/EVPN_VXLAN_HLD.md
index 64db15f155..3074b03a28 100644
--- a/doc/vxlan/EVPN/EVPN_VXLAN_HLD.md
+++ b/doc/vxlan/EVPN/EVPN_VXLAN_HLD.md
@@ -2,7 +2,7 @@
# EVPN VXLAN HLD
-#### Rev 0.9
+#### Rev 1.0
# Table of Contents
@@ -28,7 +28,11 @@
- [COUNTER_DB](#counter_db-changes)
- [4.3 Modules Design and Flows](#43-modules-design-and-flows)
- [4.3.1 Tunnel Creation](#431-tunnel-auto-discovery-and-creation)
+ - [4.3.1.1 P2P Tunnel Creation](#4311-p2p-tunnel-creation)
+ - [4.3.1.2 P2MP Tunnel Creation](#4312-p2mp-tunnel-creation)
- [4.3.2 Tunnel Deletion](#432-tunnel-deletion)
+ - [4.3.2.1 P2P Tunnel Deletion](#4321-p2p-tunnel-deletion)
+ - [4.3.2.2 P2MP Tunnel Deletion](#4322-p2mp-tunnel-deletion)
- [4.3.3 Mapper Handling](#433-per-tunnel-mapper-handling)
- [4.3.4 VXLAN State DB Changes](#434-vxlan-state-db-changes)
- [4.3.5 Tunnel ECMP](#435-support-for-tunnel-ecmp)
@@ -69,6 +73,7 @@
| 0.7 | | Rajesh Sankaran | Click and SONiC CLI added |
| 0.8 | | Hasan Naqvi | Linux kernel section and fdbsyncd testcases added |
| 0.9 | | Nikhil Kelhapure | Warm Reboot Section added |
+| 1.0 | | Sudharsan D.G | Using P2MP Tunnel for Layer2 functionality |
# Definition/Abbreviation
@@ -87,7 +92,8 @@
| VRF | Virtual Routing and Forwarding |
| VTEP | VXLAN Tunnel End point |
| VXLAN | Virtual Extended LAN |
-
+| P2P | Point to Point Tunnel |
+| P2MP | Point to MultiPoint Tunnel |
# About this Manual
This document provides general information about the EVPN VXLAN feature implementation based on RFC 7432 and 8365 in SONiC.
@@ -623,6 +629,9 @@ In the current implementation, Tunnel Creation handling in the VxlanMgr and Vxla
The VTEP is represented by a VxlanTunnel Object created as above with the DIP as 0.0.0.0 and
SAI object type as TUNNEL. This SAI object is P2MP.
+Some vendors support P2P tunnels to handle Layer 2 extension and FDB learning, while others use the existing P2MP tunnel for Layer 2 scenarios. The difference between the two approaches is how remote endpoint flooding is done. In the P2P tunnel based approach, for every endpoint discovered from an IMET route a P2P tunnel object is created in the hardware, and the bridge port created over this tunnel object is added as a VLAN member to the VLAN. In the P2MP tunnel based approach, when an IMET route is received, the remote endpoint together with the local P2MP tunnel bridge port is added as an L2MC group member of the L2MC group associated with the VLAN. To handle both scenarios, the evpn_remote_vni orch, which currently handles remote VNIs, is split into two: evpn_remote_vni_p2p for the flow involving P2P tunnel creation, and evpn_remote_vni_p2mp for the flow using the existing P2MP tunnel. The decision of which orch to use depends on the SAI enum capability query for the attribute SAI_TUNNEL_ATTR_PEER_MODE: if the vendor reports SAI_TUNNEL_PEER_MODE_P2P, the evpn_remote_vni_p2p orch is used; otherwise evpn_remote_vni_p2mp is used. These enhancements abstract the two modes that can be used to program SAI. For an external user there is no change from a usability perspective, since the schema is unchanged.
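The capability-driven orch selection described above can be sketched as follows. This is a minimal illustration; `select_remote_vni_orch` and the way the capability list is obtained are hypothetical stand-ins for the actual orchagent code and SAI enum query, only the attribute/enum names come from the text:

```python
def select_remote_vni_orch(supported_peer_modes):
    """Pick the remote-VNI orch flavor based on the SAI enum capability
    query result for SAI_TUNNEL_ATTR_PEER_MODE (a list of enum names)."""
    if "SAI_TUNNEL_PEER_MODE_P2P" in supported_peer_modes:
        # Vendor creates one P2P tunnel per remote VTEP discovered via IMET.
        return "evpn_remote_vni_p2p"
    # Otherwise reuse the local P2MP tunnel and flood via an L2MC group.
    return "evpn_remote_vni_p2mp"
```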
+
+#### 4.3.1.1 P2P Tunnel creation
In this feature enhancement, the following events result in remote VTEP discovery and trigger tunnel creation. These tunnels are referred to as dynamic tunnels and are P2P.
- IMET route rx
@@ -643,10 +652,15 @@ For every dynamic tunnel discovered, the following processing occurs.
The creation sequence assuming only IMET rx is depicted in the diagram below.
![Tunnel Creation](images/tunnelcreate.PNG "Figure : Tunnel Creation")
-__Figure 5: EVPN Tunnel Creation__
+__Figure 5.1: EVPN P2P Tunnel Creation__
-### 4.3.2 Tunnel Deletion
+#### 4.3.1.2 P2MP Tunnel Creation
+In the current implementation the P2MP tunnel creation flow already exists, with the exception that a bridge port is not created for the P2MP tunnel. To support using the P2MP tunnel for L2 purposes, a bridge port is created for the P2MP tunnel object.
+![P2MP Tunnel Creation](images/p2mptunnelcreate.jpg "Figure : P2MP Tunnel Creation")
+__Figure 5.2: EVPN P2MP Tunnel Creation__
+### 4.3.2 Tunnel Deletion
+#### 4.3.2.1 P2P Tunnel Deletion
EVPN Tunnel Deletion happens when the refcnt goes down to zero. So depending on the last route being deleted (IMET, MAC or IP prefix) the tunnel is deleted.
sai_tunnel_api remove calls are incompletely handled in the current implementation.
@@ -656,6 +670,9 @@ The following will be added as part of tunnel deletion.
- sai_tunnel_remove_map, sai_tunnel_remove_tunnel_termination, sai_tunnel_remove_tunnel when the tunnel is to be removed on account of the last entry being removed.
- VxlanTunnel object will be deleted.
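The refcount-driven teardown described above can be modeled with a small sketch. The class and method names are illustrative, not actual orchagent symbols; only the behavior (tunnel removed when the last referencing IMET/MAC/IP-prefix route goes away) comes from the text:

```python
class DynamicTunnel:
    """Toy model of a dynamic P2P EVPN tunnel's route refcount."""

    def __init__(self, dip):
        self.dip = dip
        self.refcnt = 0

    def ref(self):
        # Called when an IMET, MAC, or IP prefix route references the tunnel.
        self.refcnt += 1

    def unref(self):
        # Called when a referencing route is deleted. Returns True when the
        # last reference is gone, i.e. the caller should issue the SAI
        # remove_map / remove_tunnel_termination / remove_tunnel calls.
        self.refcnt -= 1
        return self.refcnt == 0
```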
+#### 4.3.2.2 P2MP Tunnel Deletion
+In the case of P2MP tunnels, the flow is the same as the existing one: the tunnel is deleted after the last vxlan-vni map or vrf-vni map is deleted. Additionally, before the tunnel deletion, the bridge port created for it is deleted.
+
### 4.3.3 Per Tunnel Mapper handling
The SAI Tunnel interface requires encap and decap mapper id to be specified along with every sai tunnel create call.
@@ -698,6 +715,7 @@ It is proposed to handle these variances in the SAI implementation.
### 4.3.6 IMET route handling
+#### 4.3.6.1 P2P Tunnel Vlan extension
The IMET route is used in EVPN to specify how BUM traffic is to be handled. This feature enhancement supports only ingress replication as the method to originate BUM traffic.
The VLAN, Remote IP and VNI to be used is encoded in the IMET route.
@@ -707,7 +725,15 @@ The VLAN, Remote IP and VNI to be used is encoded in the IMET route.
The IMET rx processing sequence is depicted in the diagram below.
![Vlan extension](images/vlanextend.PNG "Figure : VLAN Extension")
-__Figure 6: IMET route processing VLAN extension__
+__Figure 6.1: IMET route processing P2P Tunnel VLAN extension__
+
+#### 4.3.6.2 P2MP Tunnel Vlan extension
+
+Similar to the P2P tunnel scenario, the feature supports only ingress replication. However, the remote endpoints are added to the VLAN as follows. In SONiC, a VLAN is currently created with SAI_VLAN_FLOOD_CONTROL_TYPE_ALL (the default). To support flooding on P2MP-based tunnels, the VLAN's flood control type is set to SAI_VLAN_FLOOD_CONTROL_TYPE_COMBINED, which floods to local ports as well as to an additional multicast group. When IMET routes are received, the remote endpoints are added to the VLAN by creating an L2MC group, setting it on the VLAN created in combined mode, and adding one L2MC group member per remote endpoint, as shown in the flow below.
+
+![P2MP Vlan extension](images/p2mpvlanextension.jpg "Figure : P2MP VLAN Extension")
+__Figure 6.2: IMET route processing P2MP Tunnel VLAN extension__
+
##### FRR processing
When remote IMET route is received, fdbsyncd will install entry in REMOTE_VNI_TABLE in APP_DB:
@@ -1078,10 +1104,20 @@ Linux kernel version 4.9.x used in SONiC requires backport of a few patches to s
| Vrf-1 | 104 |
+-------+-------+
Total count : 1
-
+
4. show vxlan tunnel
+ +-----------------------+---------------+------------------+------------------+---------------------------------+
+ | vxlan tunnel name | source ip | destination ip | tunnel map name | tunnel map mapping(vni -> vlan) |
+ +=======================+===============+==================+==================+=================================+
+ | Vtep1 | 4.4.4.4 | | map_50_Vlan5 | 50 -> 5 |
+ +-----------------------+---------------+------------------+------------------+---------------------------------+
+ | Vtep1 | 4.4.4.4 | | map_100_Vlan10 | 100 -> 10 |
+ +-----------------------+---------------+------------------+------------------+---------------------------------+
+
+5. show vxlan remotevtep
- lists all the discovered tunnels.
- SIP, DIP, Creation Source, OperStatus are the columns.
+ - Since P2P tunnels are not created in the hardware in the flow where the P2MP tunnel itself is used for flooding via the L2MC group, this table will not be populated on such platforms.
+---------+---------+-------------------+--------------+
| SIP | DIP | Creation Source | OperStatus |
@@ -1092,7 +1128,7 @@ Linux kernel version 4.9.x used in SONiC requires backport of a few patches to s
+---------+---------+-------------------+--------------+
Total count : 2
-5. show vxlan remote_mac
+6. show vxlan remote_mac
- lists all the MACs learnt from the specified remote ip or all the remotes for all vlans. (APP DB view)
- VLAN, MAC, RemoteVTEP, VNI, Type are the columns.
@@ -1125,7 +1161,7 @@ Linux kernel version 4.9.x used in SONiC requires backport of a few patches to s
Total count : 2
-6. show vxlan remote_vni
+7. show vxlan remote_vni
- lists all the VLANs learnt from the specified remote ip or all the remotes. (APP DB view)
- VLAN, RemoteVTEP, VNI are the columns
@@ -1147,7 +1183,35 @@ Linux kernel version 4.9.x used in SONiC requires backport of a few patches to s
+---------+--------------+-------+
Total count : 1
-
+8. show vxlan counters(P2MP Tunnel)
+ +--------+---------+----------+--------+---------+----------+--------+
+ | Tunnel | RX_PKTS | RX_BYTES | RX_PPS | TX_PKTS | TX_BYTES | TX_PPS |
+ +========+=========+==========+========+=========+==========+========+
+ | Vtep1 | 1234 | 1512034 | 10/s | 2234 | 2235235 | 23/s |
+ +--------+---------+----------+--------+---------+----------+--------+
+
+9. show vxlan counters(P2P Tunnels)
+ +--------------+---------+----------+--------+---------+----------+--------+
+ | Tunnel | RX_PKTS | RX_BYTES | RX_PPS | TX_PKTS | TX_BYTES | TX_PPS |
+ +==============+=========+==========+========+=========+==========+========+
+ | EVPN_2.2.2.2 | 1234 | 1512034 | 10/s | 2234 | 2235235 | 23/s |
+ +--------------+---------+----------+--------+---------+----------+--------+
+ | EVPN_3.2.3.2 | 2344 | 162034 | 15/s | 200 | 55235 | 2/s |
+ +--------------+---------+----------+--------+---------+----------+--------+
+ | EVPN_2.2.2.2 | 9853 | 9953260 | 27/s | 8293 | 7435211 | 18/s |
+ +--------------+---------+----------+--------+---------+----------+--------+
+
+
+10. show vxlan counters EVPN_5.1.6.8 (Per P2P Tunnel)
+ EVPN_5.1.6.8
+ ---------
+
+ RX:
+ 13 packets
+ N/A bytes
+ TX:
+ 1,164 packets
+ N/A bytes
```
### 5.2 KLISH CLI
@@ -1385,18 +1449,26 @@ To support warm boot, all the sai_objects must be uniquely identifiable based on
- Verify that there is a SAI_OBJECT_TYPE_BRIDGE_PORT pointing to the above created P2P tunnel.
- Verify that there is a SAI_OBJECT_TYPE_VLAN_MEMBER entry for the vlan corresponding to the VNI created and pointing to the above bridge port.
7. Add more REMOTE_VNI table entries to different Remote IP.
- - Verify that additional SAI_OBJECT_TYPE_TUNNEL, BRIDGEPORT and VLAN_MEMBER objects are created.
+ - Verify that additional SAI_OBJECT_TYPE_TUNNEL, BRIDGEPORT and VLAN_MEMBER objects are created in case of platforms that create dynamic P2P tunnels on type 3 routes.
+ - Verify that the VLAN flood type is set to SAI_VLAN_FLOOD_CONTROL_TYPE_COMBINED. Verify that an L2MC group is created, that a SAI_OBJECT_TYPE_L2MC_GROUP_MEMBER with the endpoint IP and P2MP bridge port is created, and that the group is set as the VLAN's unknown-unicast and broadcast flood group, in the case of platforms that use the P2MP tunnel on type 3 routes.
8. Add more REMOTE_VNI table entries to the same Remote IP.
- - Verify that additional SAI_OBJECT_TYPE_VLAN_MEMBER entries are created pointing to the already created BRIDGEPORT object per remote ip.
-9. Remove the additional entries created above and verify that the created VLAN_MEMBER entries are deleted.
-10. Remove the last REMOTE_VNI entry for a DIP and verify that the created VLAN_MEMBER, TUNNEL, BRIDGEPORT ports are deleted.
+ - Verify that additional SAI_OBJECT_TYPE_VLAN_MEMBER entries are created pointing to the already created BRIDGEPORT object per remote ip in case of platforms that create dynamic P2P tunnels on type 3 routes.
+ - Verify that additional SAI_OBJECT_TYPE_L2MC_GROUP_MEMBER entries are created per remote ip with P2MP bridge port in case of platforms that use P2MP tunnel on type 3 routes.
+9. Remove the additional entries created above
+ - Verify that the created VLAN_MEMBER entries are deleted in case of platforms that create VLAN_MEMBER.
+ - Verify that L2MC_GROUP_MEMBER entries are deleted in case of platforms creating SAI_OBJECT_TYPE_L2MC_GROUP_MEMBER per end point IP.
+10. Remove the last REMOTE_VNI entry for a DIP
+ - Verify that the created VLAN_MEMBER, TUNNEL, BRIDGEPORT ports are deleted for platforms that use P2P Tunnels.
+ - Verify that L2MC_GROUP_MEMBERS are removed, the L2MC_GROUP is deleted, the VLAN's flood groups are set to the null object, and the VLAN's flood type is updated to SAI_VLAN_FLOOD_CONTROL_TYPE_ALL, in the case of platforms that use the P2MP tunnel.
### 8.2 FdbOrch
1. Create a VXLAN_REMOTE_VNI entry to a remote destination IP.
2. Add VXLAN_REMOTE_MAC entry to the above remote IP and VLAN.
- - Verify ASIC DB table fdb entry is created with remote_ip and bridgeport information.
+ - Verify ASIC DB table fdb entry is created with remote_ip and bridgeport information.
+ - In case of platforms that use P2P tunnel, verify that P2P tunnel's bridgeport is used.
+ - In case of platforms that use P2MP tunnel, verify that P2MP tunnel's bridge port is used.
3. Remove the above MAC entry and verify that the corresponding ASIC DB entry is removed.
4. Repeat above steps for remote static MACs.
5. Add MAC in the ASIC DB and verify that the STATE_DB MAC_TABLE is updated.
diff --git a/doc/vxlan/EVPN/images/p2mptunnelcreate.jpg b/doc/vxlan/EVPN/images/p2mptunnelcreate.jpg
new file mode 100644
index 0000000000..8c0322eb8a
Binary files /dev/null and b/doc/vxlan/EVPN/images/p2mptunnelcreate.jpg differ
diff --git a/doc/vxlan/EVPN/images/p2mpvlanextension.jpg b/doc/vxlan/EVPN/images/p2mpvlanextension.jpg
new file mode 100644
index 0000000000..1b27ea34da
Binary files /dev/null and b/doc/vxlan/EVPN/images/p2mpvlanextension.jpg differ
diff --git a/doc/vxlan/Overlay ECMP with BFD.md b/doc/vxlan/Overlay ECMP with BFD.md
new file mode 100644
index 0000000000..4485cd6237
--- /dev/null
+++ b/doc/vxlan/Overlay ECMP with BFD.md
@@ -0,0 +1,360 @@
+# Overlay ECMP with BFD monitoring
+## High Level Design Document
+### Rev 1.1
+
+# Table of Contents
+
+ * [Revision](#revision)
+
+ * [About this Manual](#about-this-manual)
+
+ * [Definitions/Abbreviation](#definitionsabbreviation)
+
+ * [1 Requirements Overview](#1-requirements-overview)
+ * [1.1 Usecase](#11-usecase)
+ * [1.2 Functional requirements](#12-functional-requirements)
+ * [1.3 CLI requirements](#13-cli-requirements)
+ * [1.4 Warm Restart requirements ](#14-warm-restart-requirements)
+ * [1.5 Scaling requirements ](#15-scaling-requirements)
+ * [1.6 SAI requirements ](#16-sai-requirements)
+ * [2 Modules Design](#2-modules-design)
+ * [2.1 Config DB](#21-config-db)
+ * [2.2 App DB](#22-app-db)
+ * [2.3 Module Interaction](#23-module-interaction)
+ * [2.4 Orchestration Agent](#24-orchestration-agent)
+ * [2.5 Monitoring and Health](#25-monitoring-and-health)
+ * [2.6 BGP](#26-bgp)
+ * [2.7 CLI](#27-cli)
+ * [2.8 Test Plan](#28-test-plan)
+
+###### Revision
+
+| Rev | Date | Author | Change Description |
+|:---:|:-----------:|:------------------:|-----------------------------------|
+| 0.1 | 09/09/2021 | Prince Sunny | Initial version |
+| 1.0 | 09/13/2021 | Prince Sunny | Revised based on review comments |
+| 1.1 | 10/08/2021 | Prince Sunny | BFD section separated |
+| 1.2 | 10/18/2021 | Prince Sunny/Shi Su | Test Plan added |
+| 1.3 | 11/01/2021 | Prince Sunny | IPv6 test cases added |
+| 1.4 | 12/03/2021 | Prince Sunny | Added scaling section, extra test cases |
+
+# About this Manual
+This document provides general information about the Vxlan Overlay ECMP feature implementation in SONiC with BFD support. This is an extension to the existing VNET Vxlan support as defined in the [Vxlan HLD](https://github.com/Azure/SONiC/blob/master/doc/vxlan/Vxlan_hld.md)
+
+
+# Definitions/Abbreviation
+###### Table 1: Abbreviations
+| | |
+|--------------------------|--------------------------------|
+| BFD | Bidirectional Forwarding Detection |
+| VNI | Vxlan Network Identifier |
+| VTEP | Vxlan Tunnel End Point |
+| VNet | Virtual Network |
+
+
+# 1 Requirements Overview
+
+## 1.1 Usecase
+
+The diagram below captures the use case. Here, the ToR is a Tier0 device and the Leaf is a Tier1 device. A Vxlan tunnel is established from the Leaf (Tier1) to a VTEP endpoint. The ToR (Tier0) and Spine (Tier3) are transit devices.
+
+
+![](https://github.com/Azure/SONiC/blob/master/images/vxlan_hld/OverlayEcmp_UseCase.png)
+
+### Packet flow
+
+- Packets destined to the Tunnel Endpoint shall be Vxlan encapsulated by the Leaf (Tier1).
+- Return packets from the Tunnel Endpoint (LBs) back to the Leaf may or may not be Vxlan encapsulated.
+- Some flows, e.g. BFD over Vxlan, shall require decapsulating Vxlan packets at the Leaf.
+
+## 1.2 Functional requirements
+
+At a high level the following should be supported:
+
+- Configure ECMP with Tunnel Nexthops (IPv4 and IPv6)
+- Support IPv6 tunnel that can support both IPv4 and IPv6 traffic
+- Tunnel Endpoint monitoring via BFD
+- Add/Withdraw Nexthop based on Tunnel or Endpoint health
+
+## 1.3 CLI requirements
+- User should be able to show the Vnet routes
+- This is an enhancement to existing show command
+
+## 1.4 Warm Restart requirements
+No special handling for Warm restart support.
+
+## 1.5 Scaling requirements
+At a minimum, the following are the estimated scale numbers:
+
+| Item | Expected value |
+|--------------------------|-----------------------------|
+| ECMP groups | 512 |
+| ECMP group member | 128 |
+| Tunnel (Overlay) routes | 16k |
+| Tunnel endpoints | 4k |
+| BFD monitoring | 4k |
+
+## 1.6 SAI requirements
+In addition to supporting Overlay ECMP (TUNNEL APIs) and BFD (HW OFFLOAD), the platform must support the following SAI attributes
+| API |
+|--------------------------|
+| SAI_SWITCH_ATTR_VXLAN_DEFAULT_ROUTER_MAC |
+| SAI_SWITCH_ATTR_VXLAN_DEFAULT_PORT |
+
+
+# 2 Modules Design
+
+The following are the schema changes.
+
+## 2.1 Config DB
+
+Existing Vxlan and Vnet tables.
+
+### 2.1.1 VXLAN Table
+```
+VXLAN_TUNNEL|{{tunnel_name}}
+ "src_ip": {{ip_address}}
+ "dst_ip": {{ip_address}} (OPTIONAL)
+```
+### 2.1.2 VNET/Interface Table
+```
+VNET|{{vnet_name}}
+ "vxlan_tunnel": {{tunnel_name}}
+ "vni": {{vni}}
+ "scope": {{"default"}} (OPTIONAL)
+ "peer_list": {{vnet_name_list}} (OPTIONAL)
+ "advertise_prefix": {{false}} (OPTIONAL)
+```
+
+## 2.2 APP DB
+
+### VNET
+
+The following are the changes for Vnet Route table
+
+Existing:
+
+```
+VNET_ROUTE_TUNNEL_TABLE:{{vnet_name}}:{{prefix}}
+ "endpoint": {{ip_address}}
+ "mac_address":{{mac_address}} (OPTIONAL)
+ "vni": {{vni}}(OPTIONAL)
+```
+
+Proposed:
+```
+VNET_ROUTE_TUNNEL_TABLE:{{vnet_name}}:{{prefix}}
+ "endpoint": {{ip_address1},{ip_address2},...}
+ "endpoint_monitor": {{ip_address1},{ip_address2},...} (OPTIONAL)
+ "mac_address":{{mac_address1},{mac_address2},...} (OPTIONAL)
+ "vni": {{vni1},{vni2},...} (OPTIONAL)
+ "weight": {{w1},{w2},...} (OPTIONAL)
+ “profile”: {{profile_name}} (OPTIONAL)
+```
+
+```
+key = VNET_ROUTE_TUNNEL_TABLE:vnet_name:prefix ; Vnet route tunnel table with prefix
+; field = value
+ENDPOINT = list of ipv4 addresses ; comma separated list of endpoints
+ENDPOINT_MONITOR = list of ipv4 addresses ; comma separated list of endpoints, space for empty/no monitoring
+MAC_ADDRESS = 12HEXDIG ; Inner dst mac in encapsulated packet
+VNI = DIGITS ; VNI value in encapsulated packet
+WEIGHT = DIGITS ; Weights for the nexthops, comma separated (Optional)
+PROFILE = STRING ; profile name to be applied for this route, for community
+ string etc (Optional)
+```
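As an illustration of the comma-separated encoding above, the following helper builds the field/value map for a multi-endpoint route. The helper itself is hypothetical (not part of swsscommon or VnetOrch); the key format and field names come from the schema:

```python
def build_vnet_route(vnet, prefix, endpoints, monitors=None, weights=None):
    """Build the APP_DB key and field/value map for a
    VNET_ROUTE_TUNNEL_TABLE entry with one or more endpoints."""
    key = "VNET_ROUTE_TUNNEL_TABLE:{}:{}".format(vnet, prefix)
    # Multiple values are encoded as comma-separated lists, per the schema.
    fvs = {"endpoint": ",".join(endpoints)}
    if monitors:
        fvs["endpoint_monitor"] = ",".join(monitors)
    if weights:
        fvs["weight"] = ",".join(str(w) for w in weights)
    return key, fvs
```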
+
+## 2.3 Module Interaction
+
+Overlay routes can be programmed via the RestAPI or the gNMI/gRPC interface, which is not described in this document. A high-level module interaction is shown below.
+
+![](https://github.com/Azure/SONiC/blob/master/images/vxlan_hld/OverlayEcmp_ModuleInteraction.png)
+
+## 2.4 Orchestration Agent
+The following orchagents shall be modified.
+
+### VnetOrch
+
+#### Requirements
+
+- VnetOrch to add support for handling multiple endpoints for APP_VNET_RT_TUNNEL_TABLE_NAME based route tasks.
+- Reuse Nexthop tunnel based on the endpoint configuration.
+- If the same endpoint already exists, use it as a member of the nexthop group.
+- Similar to above, reuse nexthop group, if multiple routes are programmed with the same set of nexthops.
+- Provide support for endpoint modification for a route prefix. Require SAI support for SET operation of routes.
+- Provide support for endpoint deletion for a route prefix. Orchagent shall check the existing entries and delete any tunnel/nexthop based on the new route update
+- Ensure backward compatibility with single endpoint routes
+- Use SAI_NEXT_HOP_GROUP_MEMBER_ATTR_WEIGHT for specifying weights to nexthop member
+- Desirable to have per tunnel stats via sai_tunnel_stat_t
+
+#### Detailed flow
+
+VnetOrch is one of the critical modules for supporting overlay ECMP. VnetOrch subscribes to VNET and ROUTE updates from APP_DB.
+
+When a new route update is processed by the add operation,
+
+1. VnetOrch checks the nexthop group and if it exists, reuse the group
+2. For a new nexthop group member, add the ECMP member and identify the corresponding monitoring IP address. Create a mapping between the monitoring IP and nexthop tunnel endpoint.
+3. Initiate a BFD session for the monitoring IP if it does not exist
+4. Based on the BFD implementation (BfdOrch vs Control plane BFD), subscribe to BFD state change, either directly as subject observer (similar to port oper state notifications in orchagent) or via STATEDB update.
+5. Based on the VNET global configuration to advertise prefixes, indicate in STATE_DB that the prefix must be advertised by BGP/FRR only if there is at least one active nexthop. Remove this entry when there are no active nexthops (all BFD sessions down) so that the network prefix is no longer advertised.
+
+#### Monitoring Endpoint Mapping
+
+VNET_ROUTE_TUNNEL_TABLE can provide monitoring endpoint IPs, which can differ from the tunnel termination endpoints. VnetOrch creates a mapping for such endpoints and, based on the monitoring endpoint (MonEP1) health, adds or removes the corresponding nexthop tunnel endpoint (EP1) from the ECMP group for the respective prefix. It is assumed that for one tunnel termination endpoint (EP1) there shall be only one corresponding monitoring endpoint (MonEP1).
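A minimal sketch of this 1:1 monitor-to-endpoint mapping and the BFD-driven membership update follows. The class and method names are illustrative, not the actual VnetOrch code; only the mapping and add/remove behavior come from the text:

```python
class MonitorMap:
    """Maps monitoring IPs to tunnel endpoints and tracks which
    endpoints are currently active members of the ECMP group."""

    def __init__(self):
        self.mon_to_ep = {}   # monitoring IP -> tunnel termination endpoint
        self.active = set()   # endpoints currently in the ECMP group

    def add(self, endpoint, monitor):
        # One monitoring endpoint per tunnel termination endpoint (1:1).
        self.mon_to_ep[monitor] = endpoint

    def on_bfd_state(self, monitor, up):
        # BFD state change on the monitoring IP drives ECMP membership
        # of the mapped tunnel endpoint. Returns the active member list.
        ep = self.mon_to_ep[monitor]
        if up:
            self.active.add(ep)
        else:
            self.active.discard(ep)
        return sorted(self.active)
```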
+
+#### Pros of SWSS to handle route update based on tunnel nexthop health:
+
+- No significant changes whether BFD session management is HW offloaded via SAI notifications or control-plane assisted.
+- Similar to NHFLAGS handling for existing route ECMP group
+- Better performance in re-programming routes in ASIC instead of separate process to monitor and modify each route prefix by updating DB entries
+
+### Bfd HW offload
+
+This design requires endpoint health monitoring by setting up BFD sessions via HW offload. Details of the BFD orchagent and HW offloading are captured in this [document](https://github.com/Azure/SONiC/blob/master/doc/bfd/BFD%20HW%20Offload%20HLD.md)
+
+
+## 2.5 Monitoring and Health
+
+The routes are programmed based on the health of tunnel endpoints. It is possible that a tunnel endpoint health is monitored via another dedicated “monitoring” endpoint. Implementation shall enforce a “keep-alive” mechanism to monitor the health of end point and withdraw or reinstall the route when the endpoint is inactive or active respectively.
+When an endpoint is deemed unhealthy, router shall perform the following actions:
+1. Remove the nexthop from the ECMP path. If all endpoints are down, the route shall be withdrawn.
+2. If 50% of the nexthops are down, an alert shall be generated.
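The policy above can be sketched as a pure function (the function name is illustrative; the thresholds come from the text: withdraw when no nexthop is up, alert when at least 50% are down):

```python
def route_action(total, up):
    """Decide the action for a route given total nexthops and how many
    are up: withdraw if none are up, alert if >= 50% are down."""
    down = total - up
    if up == 0:
        return "withdraw"      # all endpoints down: withdraw the route
    if down * 2 >= total:
        return "alert"         # at least half the nexthops are down
    return "ok"
```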
+
+## 2.6 BGP
+
+Advertise VNET routes
+The overlay routes programmed on the device must be advertised to BGP peers. This can be achieved by the “network” command.
+
+For example:
+```
+router bgp 1
+ address-family ipv4 unicast
+ network 10.0.0.0/8
+ exit-address-family
+ ```
+
+This configuration example says that network 10.0.0.0/8 will be announced to all neighbors. FRR bgpd doesn’t care about IGP routes when announcing its routes.
+
+
+## 2.7 CLI
+
+The following commands shall be modified/added :
+
+```
+ - show vnet routes all
+ - show vnet routes tunnel
+```
+
+Config commands for VNET, VNET routes and BFD sessions are not considered in this design. They shall be added later based on requirements.
+
+## 2.8 Test Plan
+
+Pre-requisite:
+
+Create a VNET and Vxlan tunnel as below:
+
+```
+{
+ "VXLAN_TUNNEL": {
+ "tunnel_v4": {
+ "src_ip": "10.1.0.32"
+ }
+ },
+
+ "VNET": {
+ "Vnet_3000": {
+ "vxlan_tunnel": "tunnel_v4",
+ "vni": "3000",
+ "scope": "default"
+ }
+ }
+```
+Similarly for IPv6 tunnels
+
+```
+{
+ "VXLAN_TUNNEL": {
+ "tunnel_v6": {
+ "src_ip": "fc00:1::32"
+ }
+ },
+
+ "VNET": {
+ "Vnet_3001": {
+ "vxlan_tunnel": "tunnel_v6",
+ "vni": "3001",
+ "scope": "default"
+ }
+ }
+```
+
+Note: It can be safely assumed that only one type of tunnel exists for this use case, i.e. either IPv4 or IPv6.
+
+For ```default``` scope, there is no need to associate interfaces to a VNET.
+
+VNET tunnel routes must be created as shown in the example below
+
+```
+[
+ "VNET_ROUTE_TUNNEL_TABLE:Vnet_3000:100.100.2.1/32": {
+ "endpoint": "1.1.1.2",
+ "endpoint_monitor": "1.1.2.2"
+ }
+]
+```
+
+With IPv6 tunnels, prefixes can be either IPv4 or IPv6
+
+```
+[
+ "VNET_ROUTE_TUNNEL_TABLE:Vnet_3001:100.100.2.1/32": {
+ "endpoint": "fc02:1000::1",
+ "endpoint_monitor": "fc02:1000::2"
+ },
+ "VNET_ROUTE_TUNNEL_TABLE:Vnet_3001:20c0:a820:0:80::/64": {
+ "endpoint": "fc02:1001::1",
+ "endpoint_monitor": "fc02:1001::2"
+ }
+]
+```
+
+### Test Cases
+
+#### Overlay ECMP
+
+It is assumed that the endpoint IPs may not have an exact-match underlay route but may have an LPM underlay route or a default route. Tests must consider both IPv4 and IPv6 traffic for routes configured as in the examples shown above.
+
+| Step | Goal | Expected results |
+|-|-|-|
+|Create a tunnel route to a single endpoint a. Send packets to the route prefix dst| Tunnel route create | Packets are received only at endpoint a |
+|Set the tunnel route to another endpoint b. Send packets to the route prefix dst | Tunnel route set | Packets are received only at endpoint b |
+|Remove the tunnel route. Send packets to the route prefix dst | Tunnel route remove | Packets are not received at any ports with dst IP of b |
+|Create tunnel route 1 with two endpoints A = {a1, a2}. Send multiple packets (varying tuple) to the route 1's prefix dst. | ECMP route create | Packets are received at both a1 and a2 |
+|Create tunnel route 2 to endpoint group A Send multiple packets (varying tuple) to route 2’s prefix dst | ECMP route create | Packets are received at both a1 and a2 |
+|Set tunnel route 2 to endpoint group B = {b1, b2}. Send packets to route 2’s prefix dst | ECMP route set | Packets are received at either b1 or b2 |
+|Send packets to route 1’s prefix dst. By removing route 2 from group A, no change expected to route 1 | NHG modify | Packets are received at either a1 or a2 |
+|Set tunnel route 2 to single endpoint b1. Send packets to route 2’s prefix dst | NHG modify | Packets are received at b1 only |
+|Set tunnel route 2 to shared endpoints a1 and b1. Send packets to route 2’s prefix dst | NHG modify | Packets are received at a1 or b1 |
+|Remove tunnel route 2. Send packets to route 2’s prefix dst | ECMP route remove | Packets are not received at any ports with dst IP of a1 or b1 |
+|Set tunnel route 3 to endpoint group C = {c1, c2, c3}. Ensure c1, c2, and c3 match the underlay default route. Send 10000 pkts with random hash to route 3's prefix dst | NHG distribution | Packets are distributed equally across c1, c2 and c3 |
+|Modify the underlay default route nexthop/s. Send packets to route 3's prefix dst | Underlay ECMP | No change to packet distribution. Packets are distributed equally across c1, c2 and c3 |
+|Remove the underlay default route. | Underlay ECMP | Packets are not received at c1, c2 or c3 |
+|Re-add the underlay default route. | Underlay ECMP | Packets are equally received at c1, c2 or c3 |
+|Bring down one of the port-channels. | Underlay ECMP | Packets are equally received at c1, c2 or c3 |
+|Create a more specific underlay route to c1. | Underlay ECMP | Verify c1 packets are received only on the c1's nexthop interface |
+|Create tunnel route 4 to endpoint group A Send packets (fixed tuple) to route 4’s prefix dst | Vxlan Entropy | Verify Vxlan entropy|
+|Change the udp src port of original packet to route 4’s prefix dst | Vxlan Entropy | Verify Vxlan entropy is changed|
+|Change the udp dst port of original packet to route 4’s prefix dst | Vxlan Entropy | Verify Vxlan entropy is changed|
+|Change the src ip of original packet to route 4’s prefix dst | Vxlan Entropy | Verify Vxlan entropy is changed|
+|Create/Delete overlay routes up to 16k with unique endpoints up to 4k | CRM | Verify crm resource for route (ipv4/ipv6) and nexthop (ipv4/ipv6) |
+|Create/Delete overlay nexthop groups up to 512 | CRM | Verify crm resource for nexthop_group |
+|Create/Delete overlay nexthop group members up to 128 | CRM | Verify crm resource for nexthop_group_member |
+
+#### BFD and health monitoring
+
+TBD
+
+#### BGP advertising
+
+TBD
diff --git a/doc/xrcvd/transceiver-monitor-hld.md b/doc/xrcvd/transceiver-monitor-hld.md
index 97b875c3d8..b5ea73097f 100644
--- a/doc/xrcvd/transceiver-monitor-hld.md
+++ b/doc/xrcvd/transceiver-monitor-hld.md
@@ -237,7 +237,7 @@ A thread will be started to periodically refresh the DOM sensor information.
Detailed flow as showed in below chart:
-![](https://github.com/keboliu/SONiC/blob/master/images/xcvrd-flow.svg)
+![](https://github.com/Azure/SONiC/blob/d1159ca728112f10319fa47de4df89c445a27efc/images/transceiver_monitoring_hld/xcvrd_flow.svg)
#### 1.4.1 State machine of sfp\_state\_update\_task process ####
diff --git a/doc/ztp/ztp.md b/doc/ztp/ztp.md
index 53c41b8ffd..053c98b3f2 100644
--- a/doc/ztp/ztp.md
+++ b/doc/ztp/ztp.md
@@ -715,7 +715,7 @@ If user does not provide both DHCP option 67 or DHCP option 239, ZTP service con
Following is the order in which DHCP options are processed:
-1. The ZTP JSON file specified in pre-defined location as part of the image Local file on disk */host/ztp/ztp_local_data.json*.
+1. The ZTP JSON file specified in pre-defined location as part of the image Local file on disk */host/ztp/ztp_data_local.json*.
2. ZTP JSON URL specified via DHCP Option-67
3. ZTP JSON URL constructed using DHCP Option-66 TFTP server name, DHCP Option-67 file path on TFTP server
4. ZTP JSON URL specified via DHCPv6 Option-59
diff --git a/images/VM_image2.png b/images/VM_image2.png
index c0be5e5c21..33308c93cf 100644
Binary files a/images/VM_image2.png and b/images/VM_image2.png differ
diff --git a/images/ecmp/order_ecmp_pic.png b/images/ecmp/order_ecmp_pic.png
new file mode 100644
index 0000000000..a4166331fb
Binary files /dev/null and b/images/ecmp/order_ecmp_pic.png differ
diff --git a/sonic_latest_images.html b/sonic_latest_images.html
index 0f331d5aa2..5636a5f36f 100644
--- a/sonic_latest_images.html
+++ b/sonic_latest_images.html
@@ -45,8 +45,8 @@
-
Latest Successful Builds
-
NOTE: This page is updated manually once in a while and hence may not be pointing to the latest MASTER image. The current links are based on 16thOct2021 successful builds. To get the latest master image, refer pipelines page.
+
Latest Successful Builds
+
@@ -67,7 +67,10 @@ Latest Successful Builds
- click here for previous builds
+ NOTE: The 5 digit number given in the cells specifies the build Id of the images.
+
+
+ click here for previous builds
@@ -88,18 +91,21 @@ Latest Successful Builds
images = Object.keys(data[branches[i]]);
for (let j = 0; j < images.length; j++) {
image_name = images[j];
- image = data[branches[i]][images[j]];
+ image = data[branches[i]][images[j]];
image_platform = image_name.split(".")[0];
+ image_platform2 = image_name;
if(image_platform.length == 1){
platform = ""
}else{
platform = image_platform.split("sonic-")[1];
+ platform2 = image_platform2.split("sonic-")[1];
if(platform.length == 1){
platform = ""
}
}
image_avail = true;
image_url = image['url'];
+ build_id = image['build'];
if(image_url === 'null' || image_url === ""){
image_avail = false;
}
@@ -108,7 +114,7 @@ Latest Successful Builds
$("#disp_table").append(platform_column);
}
if (image_avail)
- image_column =""+image_name+" | ";
+ image_column =""+platform2+"-"+build_id+" | ";
else
image_column ="N/A | ";
diff --git a/supported_devices_platforms_md.sh b/supported_devices_platforms_md.sh
new file mode 100644
index 0000000000..fa3d0f76e0
--- /dev/null
+++ b/supported_devices_platforms_md.sh
@@ -0,0 +1,185 @@
+#!/usr/bin/env bash
+git checkout -b sonic_image_md_update
+git config --global user.email "xinxliu@microsoft.com"
+git config --global user.name "xinliu-seattle"
+git reset --hard
+git pull origin sonic_image_md_update
+
+
+#set -euo pipefail
+
+DEFID_BRCM="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.broadcom' | jq -r '.value[0].id')"
+DEFID_MLNX="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.mellanox' | jq -r '.value[0].id')"
+DEFID_VS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.vs' | jq -r '.value[0].id')"
+DEFID_INNO="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.innovium' | jq -r '.value[0].id')"
+DEFID_BFT="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.barefoot' | jq -r '.value[0].id')"
+DEFID_CHE="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.cache' | jq -r '.value[0].id')"
+DEFID_CTC="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.centec' | jq -r '.value[0].id')"
+DEFID_CTC64="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.centec-arm64' | jq -r '.value[0].id')"
+DEFID_GRC="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.generic' | jq -r '.value[0].id')"
+DEFID_MRV="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.marvell-armhf' | jq -r '.value[0].id')"
+DEFID_NPH="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/definitions?name=Azure.sonic-buildimage.official.nephos' | jq -r '.value[0].id')"
+
+first=1
+for BRANCH in master
+do
+ first=''
+ BUILD_BRCM="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_BRCM}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_BRCM_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_BRCM}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_MLNX="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_MLNX}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_MLNX_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_MLNX}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_VS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_VS}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_VS_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_VS}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_INNO="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_INNO}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_INNO_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_INNO}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_BFT="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_BFT}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_BFT_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_BFT}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_CHE="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_CHE}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_CHE_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_CHE}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_CTC="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_CTC}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_CTC_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_CTC}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_CTC64="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_CTC64}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_CTC64_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_CTC64}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_GRC="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_GRC}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_GRC_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_GRC}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_MRV="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_MRV}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_MRV_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_MRV}"'?api-version=6.0' | jq -r '.queueTime')"
+ BUILD_NPH="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds?definitions='"${DEFID_NPH}"'&branchName=refs/heads/'"${BRANCH}"'&$top=1&resultFilter=succeeded&api-version=6.0' | jq -r '.value[0].id')"
+ BUILD_NPH_TS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_NPH}"'?api-version=6.0' | jq -r '.queueTime')"
+
+ #echo " [*] Last successful builds for \"${BRANCH}\":"
+ #echo " Broadcom: ${BUILD_BRCM}"
+ #echo " Mellanox: ${BUILD_MLNX}"
+ #echo " Virtual Switch: ${BUILD_VS}"
+
+ ARTF_BRCM="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_BRCM}"'/artifacts?artifactName=sonic-buildimage.broadcom&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_MLNX="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_MLNX}"'/artifacts?artifactName=sonic-buildimage.mellanox&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_VS="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_VS}"'/artifacts?artifactName=sonic-buildimage.vs&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_INNO="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_INNO}"'/artifacts?artifactName=sonic-buildimage.innovium&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_BFT="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_BFT}"'/artifacts?artifactName=sonic-buildimage.barefoot&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_CHE="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_CHE}"'/artifacts?artifactName=sonic-buildimage.cache&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_CTC="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_CTC}"'/artifacts?artifactName=sonic-buildimage.centec&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_CTC64="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_CTC64}"'/artifacts?artifactName=sonic-buildimage.centec-arm64&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_GRC="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_GRC}"'/artifacts?artifactName=sonic-buildimage.generic&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_MRV="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_MRV}"'/artifacts?artifactName=sonic-buildimage.marvell-armhf&api-version=5.1' | jq -r '.resource.downloadUrl')"
+ ARTF_NPH="$(curl -s 'https://dev.azure.com/mssonic/build/_apis/build/builds/'"${BUILD_NPH}"'/artifacts?artifactName=sonic-buildimage.nephos&api-version=5.1' | jq -r '.resource.downloadUrl')"
+
+echo "# Supported Platforms" > supported_devices_platforms.md
+
+echo "#### Following is the list of platforms that support SONiC." >> supported_devices_platforms.md
+echo "| S.No | Vendor | Platform | ASIC Vendor | Switch ASIC | Port Configuration | Image |" >> supported_devices_platforms.md
+echo "| ---- | -------------- | ----------- | ----------------- | ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |" >> supported_devices_platforms.md
+echo "| 1 | Accton | AS4630-54PE | Broadcom | Helix 5 | 48x1G + 4x25G + 2x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 2 | Accton | AS5712-54X | Broadcom | Trident 2 | 72x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 3 | Accton | AS5812-54X | Broadcom | Trident 2 | 72x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 4 | Accton | AS5835-54T | Broadcom | Trident 3 | 48x10G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 5 | Accton | AS5835-54X | Broadcom | Trident 3 | 48x10G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 6 | Accton | AS6712-32X | Broadcom | Trident 2 | 32x40G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 7 | Accton | AS7116-54X | Nephos | Taurus | 48x25G + 6x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 8 | Accton | AS7312-54X | Broadcom | Tomahawk | 48x25G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 9 | Accton | AS7312-54XS | Broadcom | Tomahawk | 48x25G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 10 | Accton | AS7315-27XB | Broadcom | Qumran | 20x10G + 4x25G + 3x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 11 | Accton | AS7326-56X | Broadcom | Trident 3 | 48x25G + 8x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 12 | Accton | AS7512-32X | Cavium | XPliantCNX880** | 32x100G | [SONiC-ONIE-Cavium](https://sonic-build.azurewebsites.net/ui/sonic/Pipelines) |" >> supported_devices_platforms.md
+echo "| 13 | Accton | AS7712-32X | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 14 | Accton | AS7716-32X | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 15 | Accton | AS7716-32XB | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 16 | Accton | AS7726-32X | Broadcom | Trident 3 | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 17 | Accton | AS7816-64X | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 18 | Accton | AS9716-32D | Broadcom | Tomahawk 3 | 32x400G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 19 | Accton | Minipack | Broadcom | Tomahawk 3 | 128x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 20 | Alpha Networks | SNH60A0-320Fv2 | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 21 | Alpha Networks | SNH60B0-640F | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 22 | Arista | 7050QX-32 | Broadcom | Trident 2 | 32x40G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 23 | Arista | 7050QX-32S | Broadcom | Trident 2 | 32x40G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 24 | Arista | 7050CX3-32S | Broadcom | Trident 3 | 32x100G + 2x10G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 25 | Arista | 7060CX-32S | Broadcom | Tomahawk | 32x100G + 2x10G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 26 | Arista | 7060DX4-32 | Broadcom | Tomahawk 3 | 32x400G + 2x10G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 27 | Arista | 7060PX4-32 | Broadcom | Tomahawk 3 | 32x400G + 2x10G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 28 | Arista | 7170-32CD | Barefoot | Tofino | 32x100G + 2x10G | [SONiC-Aboot-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-barefoot.swi/')) |" >> supported_devices_platforms.md
+echo "| 29 | Arista | 7170-64C | Barefoot | Tofino | 64x100G + 2x10G | [SONiC-Aboot-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-barefoot.swi/')) |" >> supported_devices_platforms.md
+echo "| 30 | Arista | 7260CX3-64 | Broadcom | Tomahawk 2 | 64x100G + 2x10G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 31 | Arista | 7280CR3-32D4 | Broadcom | Jericho 2 | 32x100G + 4x400G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 32 | Arista | 7280CR3-32P4 | Broadcom | Jericho 2 | 32x100G + 4x400G | [SONiC-Aboot-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-aboot-broadcom.swi/')) |" >> supported_devices_platforms.md
+echo "| 33 | Barefoot | SONiC-P4 | Barefoot | P4 Emulated | Configurable | [SONiC-P4](https://sonic-build.azurewebsites.net/ui/sonic/Pipelines) |" >> supported_devices_platforms.md
+echo "| 34 | Barefoot | Wedge 100BF-32 | Barefoot | Tofino | 32x100G | [SONiC-ONIE-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-barefoot.bin/')) |" >> supported_devices_platforms.md
+echo "| 35 | Barefoot | Wedge 100BF-65X | Barefoot | Tofino | 32x100G | [SONiC-ONIE-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-barefoot.bin/')) |" >> supported_devices_platforms.md
+echo "| 36 | Celestica | DX010 | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 37 | Celestica | E1031 | Broadcom | Helix4 | 48x1G + 4x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 38 | Celestica | midstone-200i | Innovium | Teralynx 7 | 128x100G |[SONiC-ONIE-Innovium]($(echo "${ARTF_INNO}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-innovium-dbg.bin/')) |" >> supported_devices_platforms.md
+echo "| 39 | Celestica | Silverstone | Broadcom | Tomahawk 3 | 32x400G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 40 | Celestica | Seastone_2 | Broadcom | Trident 3 | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 41 | Centec | E582-48X2Q | Centec | Goldengate | 48x10G + 2x40G + 4x100G | [SONiC-ONIE-Centec]($(echo "${ARTF_CTC}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-centec.bin/')) |" >> supported_devices_platforms.md
+echo "| 42 | Centec | E582-48X6Q | Centec | Goldengate | 48x10G + 6x40G | [SONiC-ONIE-Centec]($(echo "${ARTF_CTC}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-centec.bin/')) |" >> supported_devices_platforms.md
+echo "| 43 | Cig | CS6436-56P | Nephos | NP8366 | 48x25G + 8x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 44 | Cig | CS5435-54P | Nephos | NP8363 | 48x10G + 6x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 45 | Cig | CS6436-54P | Nephos | NP8365 | 48x25G + 6x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 46 | Dell | N3248PXE | Broadcom | Trident 3.X5 | 48x10GCU+4x25G-2x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 47 | Dell | N3248TE | Broadcom | Trident 3.X3 | 48x1G+4x10G-2x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 48 | Dell | S5212F | Broadcom | Trident 3.X5 | 12x25G+3x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 49 | Dell | S5224F | Broadcom | Trident 3.X5 | 24x25G+4x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 50 | Dell | S5232F-ON | Broadcom | Trident 3 | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 51 | Dell | S5248F-ON | Broadcom | Trident 3-2T | 48x25G,4x100G,2x200G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 52 | Dell | S5296F | Broadcom | Trident 3 | 96x25G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 53 | Dell | S6000-ON | Broadcom | Trident 2 | 32x40G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 54 | Dell | S6100-ON | Broadcom | Tomahawk | 64x40G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 55 | Dell | Z9100-ON | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 56 | Dell | Z9264F-ON | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 57 | Dell | Z9332F-ON | Broadcom | Tomahawk 3 | 32x400G,2x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 58 | Dell | Z9332F-C32 | Broadcom | Tomahawk 3 | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 59 | Delta | AG5648 | Broadcom | Tomahawk | 48x25G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 60 | Delta | AG9032V1 | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 61 | Delta | AG9032V2A | Broadcom | Trident 3 | 32x100G + 1x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 62 | Delta | AG9064 | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 63 | Delta | et-c032if | Innovium | Teralynx 7 | 32x400G |[SONiC-ONIE-Innovium]($(echo "${ARTF_INNO}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-innovium-dbg.bin/')) |" >> supported_devices_platforms.md
+echo "| 64 | Delta | ET-6448M | Marvell | Prestera 98DX3255 | 48xGE + 4x10G | [SONiC-ONIE-Marvell]($(echo "${ARTF_MRV}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-marvell-armhf.bin/')) |" >> supported_devices_platforms.md
+echo "| 65 | Delta | agc032 | Broadcom | Tomahawk 3 | 32x400G + 2x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 66 | Embedway | ES6220 (48x10G) | Centec | Goldengate | 48x10G + 2x40G + 4x100G | [SONiC-ONIE-Centec]($(echo "${ARTF_CTC}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-centec.bin/')) |" >> supported_devices_platforms.md
+echo "| 67 | Embedway | ES6428A-X48Q2H4 | Centec | Goldengate | 4x100G + 2x40G + 48x10G | [SONiC-ONIE-Centec]($(echo "${ARTF_CTC}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-centec.bin/')) |" >> supported_devices_platforms.md
+echo "| 68 | Facebook | Wedge 100-32X | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 69 | Ingrasys | S8810-32Q | Broadcom | Trident 2 | 32x40G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 70 | Ingrasys | S8900-54XC | Broadcom | Tomahawk | 48x25G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 71 | Ingrasys | S8900-64XC | Broadcom | Tomahawk | 48x25G + 16x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 72 | Ingrasys | S9100-32X | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 73 | Ingrasys | S9130-32X | Nephos | Taurus | 32x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 74 | Ingrasys | S9180-32X | Barefoot | Tofino | 32x100G | [SONiC-ONIE-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-barefoot.bin/')) |" >> supported_devices_platforms.md
+echo "| 75 | Ingrasys | S9200-64X | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 76 | Ingrasys | S9230-64X | Nephos | Taurus | 64x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 77 | Ingrasys | S9280-64X | Barefoot | Tofino | 64x100G | [SONiC-ONIE-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-barefoot.bin/')) |" >> supported_devices_platforms.md
+echo "| 78 | Inventec | D6254QS | Broadcom | Trident 2 | 72x10G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 79 | Inventec | D6356 | Broadcom | Trident 3 | 48x25G + 8x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 80 | Inventec | D6556 | Broadcom | Trident 3 | 48x25G + 8x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 81 | Inventec | D7032Q | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 82 | Inventec | D7054Q | Broadcom | Tomahawk | 48x25G + 6x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 83 | Inventec | D7264Q | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 84 | Juniper Networks| QFX5210-64C | Broadcom | Tomahawk 2 | 64x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 85 | Juniper Networks| QFX5200-32C-S | Broadcom | Tomahawk 1 | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 86 | Marvell | RD-ARM-48XG6CG-A4 | Marvell | Prestera 98EX54xx | 6x100G+48x10G | [SONiC-ONIE-Marvell]($(echo "${ARTF_MRV}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-marvell-armhf.bin/')) |" >> supported_devices_platforms.md
+echo "| 87 | Marvell | RD-BC3-4825G6CG-A4 | Marvell | Prestera 98CX84xx | 6x100G+48x25G | [SONiC-ONIE-Marvell]($(echo "${ARTF_MRV}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-marvell-armhf.bin/')) |" >> supported_devices_platforms.md
+echo "| 88 | Marvell | 98cx8580 | Marvell | Prestera CX | 32x400G + 16x400G | [SONiC-ONIE-Marvell]($(echo "${ARTF_MRV}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-marvell-armhf.bin/')) |" >> supported_devices_platforms.md
+echo "| 89 | Nvidia | SN2010 | Nvidia | Spectrum | 18x25G + 4x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 90 | Nvidia | SN2100 | Nvidia | Spectrum | 16x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 91 | Nvidia | SN2410 | Nvidia | Spectrum | 48x25G + 8x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 92 | Nvidia | SN2700 | Nvidia | Spectrum | 32x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 93 | Nvidia | SN3420 | Nvidia | Spectrum 2 | 48x25G + 12x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 94 | Nvidia | SN3700 | Nvidia | Spectrum 2 | 32x200G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 95 | Nvidia | SN3700C | Nvidia | Spectrum 2 | 32x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 96 | Nvidia | SN3800 | Nvidia | Spectrum 2 | 64x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 97 | Nvidia | SN4600C | Nvidia | Spectrum 3 | 64x100G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 98 | Nvidia | SN4700 | Nvidia | Spectrum 3 | 32x400G | [SONiC-ONIE-Mellanox]($(echo "${ARTF_MLNX}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-mellanox.bin/')) |" >> supported_devices_platforms.md
+echo "| 99 | Mitac | LY1200-B32H0-C3 | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 100 | Pegatron | Porsche | Nephos | Taurus | 48x25G + 6x100G | [SONiC-ONIE-Nephos]($(echo "${ARTF_NPH}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-nephos.bin/')) |" >> supported_devices_platforms.md
+echo "| 101 | Quanta | T3032-IX7 | Broadcom | Trident 3 | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 102 | Quanta | T4048-IX8 | Broadcom | Trident 3 | 48x25G + 8x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 103 | Quanta | T4048-IX8C | Broadcom | Trident 3 | 48x25G + 8x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 104 | Quanta | T7032-IX1B | Broadcom | Tomahawk | 32x100G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 105 | Quanta | T9032-IX9 | Broadcom | Tomahawk 3 | 32x400G | [SONiC-ONIE-Broadcom]($(echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/')) |" >> supported_devices_platforms.md
+echo "| 106 | WNC | OSW1800 | Barefoot | Tofino | 48x25G + 6x100G | [SONiC-ONIE-Barefoot]($(echo "${ARTF_BFT}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-barefoot.bin/')) |" >> supported_devices_platforms.md
+
+done
+
+git add supported_devices_platforms.md
+git commit -m "latest links for sonic images in supported platform md file"
+git push -f --set-upstream origin sonic_image_md_update
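The `sed` expression repeated in every row above rewrites an Azure DevOps artifact URL so the table links point at a single image file instead of the whole zipped artifact: it swaps `format=zip` for `format=file` plus a `subpath` query parameter naming the binary inside the artifact. A minimal standalone sketch of that rewrite, using a placeholder URL (the host, project, and build ID here are hypothetical, not the pipeline's real values):

```shell
#!/bin/sh
# Hypothetical artifact URL as handed to the script (ends in format=zip,
# which downloads the entire artifact as a zip archive).
ARTF_BRCM='https://dev.azure.com/org/proj/_apis/build/builds/123/artifacts?artifactName=sonic-buildimage.broadcom&format=zip'

# Rewrite it into a direct single-file download: replace format=zip with
# format=file and append a subpath to the image inside the artifact.
# In the replacement, \& escapes & (a bare & would re-insert the match)
# and \/ escapes the / characters used as the sed delimiter.
echo "${ARTF_BRCM}" | sed 's/format=zip/format=file\&subpath=\/target\/sonic-broadcom.bin/'
# prints the same URL with ...format=file&subpath=/target/sonic-broadcom.bin
```

Using `,` or `|` as the sed delimiter would avoid escaping the slashes, but the script above consistently uses the default `/` delimiter, so the sketch keeps that convention.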