-
-Provides an implementation of the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html).
-The specification defines a set of standard paths for storing application files,
-including data and configuration files. For portability and flexibility reasons,
-applications should use the XDG defined locations instead of hardcoding paths.
-The package also includes the locations of well known [user directories](https://wiki.archlinux.org/index.php/XDG_user_directories), as well as
-other common directories such as fonts and applications.
-
-The current implementation supports **most flavors of Unix**, **Windows**, **macOS** and **Plan 9**.
-On Windows, where XDG environment variables are not usually set, the package uses [Known Folders](https://docs.microsoft.com/en-us/windows/win32/shell/known-folders)
-as defaults. Therefore, appropriate locations are used for common [folders](https://docs.microsoft.com/en-us/windows/win32/shell/knownfolderid) which may have been redirected.
-
-See usage [examples](#usage) below. Full documentation can be found at https://pkg.go.dev/github.com/adrg/xdg.
-
-## Installation
-
-```bash
-go get github.com/adrg/xdg
-```
-
-## Default locations
-
-The package defines sensible defaults for XDG variables which are empty or not
-present in the environment.
-
-- On Unix-like operating systems, XDG environment variables are typically defined.
-Appropriate default locations are used for the environment variables which are not set.
-- On Windows, XDG environment variables are usually not set. If that is the case,
-the package relies on the appropriate [Known Folders](https://docs.microsoft.com/en-us/windows/win32/shell/knownfolderid).
-Sensible fallback locations are used for the folders which are not set; a short
-sketch of how the environment interacts with these defaults follows below.
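-
-As a rough illustration of the behavior described above, the following minimal
-sketch (the override path is just an illustrative value) sets `XDG_CONFIG_HOME`
-and calls the package's `Reload` function to re-read the environment:
-
-```go
-package main
-
-import (
-	"log"
-	"os"
-
-	"github.com/adrg/xdg"
-)
-
-func main() {
-	// With XDG_CONFIG_HOME unset, a platform default is used
-	// (e.g. $HOME/.config on most Unix-like systems).
-	log.Println("Config home:", xdg.ConfigHome)
-
-	// Setting the variable and calling Reload makes the package pick up
-	// the new value instead of the default.
-	os.Setenv("XDG_CONFIG_HOME", "/tmp/custom-config")
-	xdg.Reload()
-	log.Println("Config home after override:", xdg.ConfigHome)
-}
-```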
-
-### XDG Base Directory
-
-On **Windows**, the following Known Folders and fallbacks are used for the home,
-applications and fonts directories:
-
-|              | Known Folder(s)             | Fallback(s)                                                                                             |
-| :----------: | :-------------------------: | :-----------------------------------------------------------------------------------------------------: |
-| Home         | Profile                     | %USERPROFILE%                                                                                           |
-| Applications | Programs<br/>CommonPrograms | %APPDATA%\Microsoft\Windows\Start Menu\Programs<br/>%ProgramData%\Microsoft\Windows\Start Menu\Programs |
-| Fonts        | Fonts<br/>-                 | %SystemRoot%\Fonts<br/>%LOCALAPPDATA%\Microsoft\Windows\Fonts                                           |
-
-
-## Usage
-
-#### XDG Base Directory
-
-```go
-package main
-
-import (
- "log"
-
- "github.com/adrg/xdg"
-)
-
-func main() {
- // XDG Base Directory paths.
- log.Println("Home data directory:", xdg.DataHome)
- log.Println("Data directories:", xdg.DataDirs)
- log.Println("Home config directory:", xdg.ConfigHome)
- log.Println("Config directories:", xdg.ConfigDirs)
- log.Println("Home state directory:", xdg.StateHome)
- log.Println("Cache directory:", xdg.CacheHome)
- log.Println("Runtime directory:", xdg.RuntimeDir)
-
- // Other common directories.
- log.Println("Home directory:", xdg.Home)
- log.Println("Application directories:", xdg.ApplicationDirs)
- log.Println("Font directories:", xdg.FontDirs)
-
- // Obtain a suitable location for application config files.
- // ConfigFile takes one parameter which must contain the name of the file,
- // but it can also contain a set of parent directories. If the directories
- // don't exist, they will be created relative to the base config directory.
- configFilePath, err := xdg.ConfigFile("appname/config.yaml")
- if err != nil {
- log.Fatal(err)
- }
- log.Println("Save the config file at:", configFilePath)
-
- // For other types of application files use:
- // xdg.DataFile()
- // xdg.StateFile()
- // xdg.CacheFile()
- // xdg.RuntimeFile()
-
- // Finding application config files.
- // SearchConfigFile takes one parameter which must contain the name of
- // the file, but it can also contain a set of parent directories relative
- // to the config search paths (xdg.ConfigHome and xdg.ConfigDirs).
- configFilePath, err = xdg.SearchConfigFile("appname/config.yaml")
- if err != nil {
- log.Fatal(err)
- }
- log.Println("Config file was found at:", configFilePath)
-
- // For other types of application files use:
- // xdg.SearchDataFile()
- // xdg.SearchStateFile()
- // xdg.SearchCacheFile()
- // xdg.SearchRuntimeFile()
-}
-```
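-
-Note that `ConfigFile` and its siblings only create any missing parent directories
-and return a path; writing the file itself is left to the caller. A minimal sketch
-of that final step (the file contents are just an illustrative value):
-
-```go
-package main
-
-import (
-	"log"
-	"os"
-
-	"github.com/adrg/xdg"
-)
-
-func main() {
-	// Obtain a suitable path; missing parent directories are created.
-	path, err := xdg.ConfigFile("appname/config.yaml")
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	// Writing the file is up to the application.
-	if err := os.WriteFile(path, []byte("key: value\n"), 0o600); err != nil {
-		log.Fatal(err)
-	}
-	log.Println("Wrote config file to:", path)
-}
-```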
-
-#### XDG user directories
-
-```go
-package main
-
-import (
- "log"
-
- "github.com/adrg/xdg"
-)
-
-func main() {
- // XDG user directories.
- log.Println("Desktop directory:", xdg.UserDirs.Desktop)
- log.Println("Download directory:", xdg.UserDirs.Download)
- log.Println("Documents directory:", xdg.UserDirs.Documents)
- log.Println("Music directory:", xdg.UserDirs.Music)
- log.Println("Pictures directory:", xdg.UserDirs.Pictures)
- log.Println("Videos directory:", xdg.UserDirs.Videos)
- log.Println("Templates directory:", xdg.UserDirs.Templates)
- log.Println("Public directory:", xdg.UserDirs.PublicShare)
-}
-```
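-
-The user directory values are plain paths, so they compose with the standard
-library as expected; for example, a minimal sketch (the file name is just an
-illustrative value) that builds a destination inside the download directory:
-
-```go
-package main
-
-import (
-	"log"
-	"path/filepath"
-
-	"github.com/adrg/xdg"
-)
-
-func main() {
-	// Build a destination path inside the user's download directory.
-	dest := filepath.Join(xdg.UserDirs.Download, "report.pdf")
-	log.Println("Save downloaded files at:", dest)
-}
-```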
-
-## Stargazers over time
-
-[![Stargazers over time](https://starchart.cc/adrg/xdg.svg)](https://starchart.cc/adrg/xdg)
-
-## Contributing
-
-Contributions in the form of pull requests, issues or just general feedback
-are always welcome.
-See [CONTRIBUTING.md](CONTRIBUTING.md).
-
-**Contributors**:
-[adrg](https://github.com/adrg),
-[wichert](https://github.com/wichert),
-[bouncepaw](https://github.com/bouncepaw),
-[gabriel-vasile](https://github.com/gabriel-vasile),
-[KalleDK](https://github.com/KalleDK),
-[nvkv](https://github.com/nvkv),
-[djdv](https://github.com/djdv).
-
-## References
-
-For more information see:
-* [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html)
-* [XDG user directories](https://wiki.archlinux.org/index.php/XDG_user_directories)
-* [Windows Known Folders](https://docs.microsoft.com/en-us/windows/win32/shell/knownfolderid)
-
-## License
-
-Copyright (c) 2014 Adrian-George Bostan.
-
-This project is licensed under the [MIT license](https://opensource.org/licenses/MIT).
-See [LICENSE](LICENSE) for more details.
diff --git a/tools/vendor/github.com/adrg/xdg/base_dirs.go b/tools/vendor/github.com/adrg/xdg/base_dirs.go
deleted file mode 100644
index a8a3fd55c..000000000
--- a/tools/vendor/github.com/adrg/xdg/base_dirs.go
+++ /dev/null
@@ -1,68 +0,0 @@
-package xdg
-
-import "github.com/adrg/xdg/internal/pathutil"
-
-// XDG Base Directory environment variables.
-const (
- envDataHome = "XDG_DATA_HOME"
- envDataDirs = "XDG_DATA_DIRS"
- envConfigHome = "XDG_CONFIG_HOME"
- envConfigDirs = "XDG_CONFIG_DIRS"
- envStateHome = "XDG_STATE_HOME"
- envCacheHome = "XDG_CACHE_HOME"
- envRuntimeDir = "XDG_RUNTIME_DIR"
-)
-
-type baseDirectories struct {
- dataHome string
- data []string
- configHome string
- config []string
- stateHome string
- cacheHome string
- runtime string
-
- // Non-standard directories.
- fonts []string
- applications []string
-}
-
-func (bd baseDirectories) dataFile(relPath string) (string, error) {
- return pathutil.Create(relPath, append([]string{bd.dataHome}, bd.data...))
-}
-
-func (bd baseDirectories) configFile(relPath string) (string, error) {
- return pathutil.Create(relPath, append([]string{bd.configHome}, bd.config...))
-}
-
-func (bd baseDirectories) stateFile(relPath string) (string, error) {
- return pathutil.Create(relPath, []string{bd.stateHome})
-}
-
-func (bd baseDirectories) cacheFile(relPath string) (string, error) {
- return pathutil.Create(relPath, []string{bd.cacheHome})
-}
-
-func (bd baseDirectories) runtimeFile(relPath string) (string, error) {
- return pathutil.Create(relPath, []string{bd.runtime})
-}
-
-func (bd baseDirectories) searchDataFile(relPath string) (string, error) {
- return pathutil.Search(relPath, append([]string{bd.dataHome}, bd.data...))
-}
-
-func (bd baseDirectories) searchConfigFile(relPath string) (string, error) {
- return pathutil.Search(relPath, append([]string{bd.configHome}, bd.config...))
-}
-
-func (bd baseDirectories) searchStateFile(relPath string) (string, error) {
- return pathutil.Search(relPath, []string{bd.stateHome})
-}
-
-func (bd baseDirectories) searchCacheFile(relPath string) (string, error) {
- return pathutil.Search(relPath, []string{bd.cacheHome})
-}
-
-func (bd baseDirectories) searchRuntimeFile(relPath string) (string, error) {
- return pathutil.Search(relPath, []string{bd.runtime})
-}
diff --git a/tools/vendor/github.com/adrg/xdg/codecov.yml b/tools/vendor/github.com/adrg/xdg/codecov.yml
deleted file mode 100644
index 54ee338fd..000000000
--- a/tools/vendor/github.com/adrg/xdg/codecov.yml
+++ /dev/null
@@ -1,11 +0,0 @@
-coverage:
- status:
- project:
- default:
- target: 90%
- threshold: 1%
- patch:
- default:
- target: 100%
-ignore:
- - "paths_plan9.go"
diff --git a/tools/vendor/github.com/adrg/xdg/doc.go b/tools/vendor/github.com/adrg/xdg/doc.go
deleted file mode 100644
index 7747b183e..000000000
--- a/tools/vendor/github.com/adrg/xdg/doc.go
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
-Package xdg provides an implementation of the XDG Base Directory Specification.
-The specification defines a set of standard paths for storing application files
-including data and configuration files. For portability and flexibility reasons,
-applications should use the XDG defined locations instead of hardcoding paths.
-The package also includes the locations of well known user directories.
-
-The current implementation supports most flavors of Unix, Windows, macOS and Plan 9.
-
- For more information regarding the XDG Base Directory Specification see:
- https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
-
- For more information regarding the XDG user directories see:
- https://wiki.archlinux.org/index.php/XDG_user_directories
-
- For more information regarding the Windows Known Folders see:
- https://docs.microsoft.com/en-us/windows/win32/shell/known-folders
-
-Usage
-
-XDG Base Directory
- package main
-
- import (
- "log"
-
- "github.com/adrg/xdg"
- )
-
- func main() {
- // XDG Base Directory paths.
- log.Println("Home data directory:", xdg.DataHome)
- log.Println("Data directories:", xdg.DataDirs)
- log.Println("Home config directory:", xdg.ConfigHome)
- log.Println("Config directories:", xdg.ConfigDirs)
- log.Println("Home state directory:", xdg.StateHome)
- log.Println("Cache directory:", xdg.CacheHome)
- log.Println("Runtime directory:", xdg.RuntimeDir)
-
- // Other common directories.
- log.Println("Home directory:", xdg.Home)
- log.Println("Application directories:", xdg.ApplicationDirs)
- log.Println("Font directories:", xdg.FontDirs)
-
- // Obtain a suitable location for application config files.
- // ConfigFile takes one parameter which must contain the name of the file,
- // but it can also contain a set of parent directories. If the directories
- // don't exist, they will be created relative to the base config directory.
- configFilePath, err := xdg.ConfigFile("appname/config.yaml")
- if err != nil {
- log.Fatal(err)
- }
- log.Println("Save the config file at:", configFilePath)
-
- // For other types of application files use:
- // xdg.DataFile()
- // xdg.StateFile()
- // xdg.CacheFile()
- // xdg.RuntimeFile()
-
- // Finding application config files.
- // SearchConfigFile takes one parameter which must contain the name of
- // the file, but it can also contain a set of parent directories relative
- // to the config search paths (xdg.ConfigHome and xdg.ConfigDirs).
- configFilePath, err = xdg.SearchConfigFile("appname/config.yaml")
- if err != nil {
- log.Fatal(err)
- }
- log.Println("Config file was found at:", configFilePath)
-
- // For other types of application files use:
- // xdg.SearchDataFile()
- // xdg.SearchStateFile()
- // xdg.SearchCacheFile()
- // xdg.SearchRuntimeFile()
- }
-
-XDG user directories
- package main
-
- import (
- "log"
-
- "github.com/adrg/xdg"
- )
-
- func main() {
- // XDG user directories.
- log.Println("Desktop directory:", xdg.UserDirs.Desktop)
- log.Println("Download directory:", xdg.UserDirs.Download)
- log.Println("Documents directory:", xdg.UserDirs.Documents)
- log.Println("Music directory:", xdg.UserDirs.Music)
- log.Println("Pictures directory:", xdg.UserDirs.Pictures)
- log.Println("Videos directory:", xdg.UserDirs.Videos)
- log.Println("Templates directory:", xdg.UserDirs.Templates)
- log.Println("Public directory:", xdg.UserDirs.PublicShare)
- }
-*/
-package xdg
diff --git a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil.go b/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil.go
deleted file mode 100644
index 7422342b3..000000000
--- a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil.go
+++ /dev/null
@@ -1,78 +0,0 @@
-package pathutil
-
-import (
- "fmt"
- "os"
- "path/filepath"
- "strings"
-)
-
-// Unique eliminates the duplicate paths from the provided slice and returns
-// the result. The items in the output slice are in the order in which they
-// occur in the input slice. If a `home` location is provided, the paths are
-// expanded using the `ExpandHome` function.
-func Unique(paths []string, home string) []string {
- var (
- uniq []string
- registry = map[string]struct{}{}
- )
-
- for _, p := range paths {
- p = ExpandHome(p, home)
- if p != "" && filepath.IsAbs(p) {
- if _, ok := registry[p]; ok {
- continue
- }
-
- registry[p] = struct{}{}
- uniq = append(uniq, p)
- }
- }
-
- return uniq
-}
-
-// Create returns a suitable location relative to which the file with the
-// specified `name` can be written. The first path from the provided `paths`
-// slice which is successfully created (or already exists) is used as a base
-// path for the file. The `name` parameter should contain the name of the file
-// which is going to be written in the location returned by this function, but
-// it can also contain a set of parent directories, which will be created
-// relative to the selected parent path.
-func Create(name string, paths []string) (string, error) {
- var searchedPaths []string
- for _, p := range paths {
- p = filepath.Join(p, name)
-
- dir := filepath.Dir(p)
- if Exists(dir) {
- return p, nil
- }
- if err := os.MkdirAll(dir, os.ModeDir|0700); err == nil {
- return p, nil
- }
-
- searchedPaths = append(searchedPaths, dir)
- }
-
- return "", fmt.Errorf("could not create any of the following paths: %s",
- strings.Join(searchedPaths, ", "))
-}
-
-// Search searches for the file with the specified `name` in the provided
-// slice of `paths`. The `name` parameter must contain the name of the file,
-// but it can also contain a set of parent directories.
-func Search(name string, paths []string) (string, error) {
- var searchedPaths []string
- for _, p := range paths {
- p = filepath.Join(p, name)
- if Exists(p) {
- return p, nil
- }
-
- searchedPaths = append(searchedPaths, filepath.Dir(p))
- }
-
- return "", fmt.Errorf("could not locate `%s` in any of the following paths: %s",
- filepath.Base(name), strings.Join(searchedPaths, ", "))
-}
diff --git a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_plan9.go b/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_plan9.go
deleted file mode 100644
index 8ee4e8d2f..000000000
--- a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_plan9.go
+++ /dev/null
@@ -1,29 +0,0 @@
-package pathutil
-
-import (
- "os"
- "path/filepath"
- "strings"
-)
-
-// Exists returns true if the specified path exists.
-func Exists(path string) bool {
- _, err := os.Stat(path)
- return err == nil || os.IsExist(err)
-}
-
-// ExpandHome substitutes `~` and `$home` at the start of the specified
-// `path` using the provided `home` location.
-func ExpandHome(path, home string) string {
- if path == "" || home == "" {
- return path
- }
- if path[0] == '~' {
- return filepath.Join(home, path[1:])
- }
- if strings.HasPrefix(path, "$home") {
- return filepath.Join(home, path[5:])
- }
-
- return path
-}
diff --git a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_unix.go b/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_unix.go
deleted file mode 100644
index a014c66ef..000000000
--- a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_unix.go
+++ /dev/null
@@ -1,32 +0,0 @@
-//go:build aix || darwin || dragonfly || freebsd || (js && wasm) || nacl || linux || netbsd || openbsd || solaris
-// +build aix darwin dragonfly freebsd js,wasm nacl linux netbsd openbsd solaris
-
-package pathutil
-
-import (
- "os"
- "path/filepath"
- "strings"
-)
-
-// Exists returns true if the specified path exists.
-func Exists(path string) bool {
- _, err := os.Stat(path)
- return err == nil || os.IsExist(err)
-}
-
-// ExpandHome substitutes `~` and `$HOME` at the start of the specified
-// `path` using the provided `home` location.
-func ExpandHome(path, home string) string {
- if path == "" || home == "" {
- return path
- }
- if path[0] == '~' {
- return filepath.Join(home, path[1:])
- }
- if strings.HasPrefix(path, "$HOME") {
- return filepath.Join(home, path[5:])
- }
-
- return path
-}
diff --git a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_windows.go b/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_windows.go
deleted file mode 100644
index 44080e3ab..000000000
--- a/tools/vendor/github.com/adrg/xdg/internal/pathutil/pathutil_windows.go
+++ /dev/null
@@ -1,64 +0,0 @@
-package pathutil
-
-import (
- "os"
- "path/filepath"
- "strings"
-
- "golang.org/x/sys/windows"
-)
-
-// Exists returns true if the specified path exists.
-func Exists(path string) bool {
- fi, err := os.Lstat(path)
- if fi != nil && fi.Mode()&os.ModeSymlink != 0 {
- _, err = filepath.EvalSymlinks(path)
- }
-
- return err == nil || os.IsExist(err)
-}
-
-// ExpandHome substitutes `%USERPROFILE%` at the start of the specified
-// `path` using the provided `home` location.
-func ExpandHome(path, home string) string {
- if path == "" || home == "" {
- return path
- }
- if strings.HasPrefix(path, `%USERPROFILE%`) {
- return filepath.Join(home, path[13:])
- }
-
- return path
-}
-
-// KnownFolder returns the location of the folder with the specified ID.
-// If that fails, the folder location is determined by reading the provided
-// environment variables (the first non-empty read value is returned).
-// If that fails as well, the first non-empty fallback is returned.
-// If all of the above fails, the function returns an empty string.
-func KnownFolder(id *windows.KNOWNFOLDERID, envVars []string, fallbacks []string) string {
- if id != nil {
- flags := []uint32{windows.KF_FLAG_DEFAULT, windows.KF_FLAG_DEFAULT_PATH}
- for _, flag := range flags {
- p, _ := windows.KnownFolderPath(id, flag|windows.KF_FLAG_DONT_VERIFY)
- if p != "" {
- return p
- }
- }
- }
-
- for _, envVar := range envVars {
- p := os.Getenv(envVar)
- if p != "" {
- return p
- }
- }
-
- for _, fallback := range fallbacks {
- if fallback != "" {
- return fallback
- }
- }
-
- return ""
-}
diff --git a/tools/vendor/github.com/adrg/xdg/paths_darwin.go b/tools/vendor/github.com/adrg/xdg/paths_darwin.go
deleted file mode 100644
index bfe9ad9bc..000000000
--- a/tools/vendor/github.com/adrg/xdg/paths_darwin.go
+++ /dev/null
@@ -1,60 +0,0 @@
-package xdg
-
-import (
- "os"
- "path/filepath"
-)
-
-func homeDir() string {
- if home := os.Getenv("HOME"); home != "" {
- return home
- }
-
- return "/"
-}
-
-func initDirs(home string) {
- initBaseDirs(home)
- initUserDirs(home)
-}
-
-func initBaseDirs(home string) {
- homeAppSupport := filepath.Join(home, "Library", "Application Support")
- rootAppSupport := "/Library/Application Support"
-
- // Initialize standard directories.
- baseDirs.dataHome = xdgPath(envDataHome, homeAppSupport)
- baseDirs.data = xdgPaths(envDataDirs, rootAppSupport)
- baseDirs.configHome = xdgPath(envConfigHome, homeAppSupport)
- baseDirs.config = xdgPaths(envConfigDirs,
- filepath.Join(home, "Library", "Preferences"),
- rootAppSupport,
- "/Library/Preferences",
- )
- baseDirs.stateHome = xdgPath(envStateHome, homeAppSupport)
- baseDirs.cacheHome = xdgPath(envCacheHome, filepath.Join(home, "Library", "Caches"))
- baseDirs.runtime = xdgPath(envRuntimeDir, homeAppSupport)
-
- // Initialize non-standard directories.
- baseDirs.applications = []string{
- "/Applications",
- }
-
- baseDirs.fonts = []string{
- filepath.Join(home, "Library/Fonts"),
- "/Library/Fonts",
- "/System/Library/Fonts",
- "/Network/Library/Fonts",
- }
-}
-
-func initUserDirs(home string) {
- UserDirs.Desktop = xdgPath(envDesktopDir, filepath.Join(home, "Desktop"))
- UserDirs.Download = xdgPath(envDownloadDir, filepath.Join(home, "Downloads"))
- UserDirs.Documents = xdgPath(envDocumentsDir, filepath.Join(home, "Documents"))
- UserDirs.Music = xdgPath(envMusicDir, filepath.Join(home, "Music"))
- UserDirs.Pictures = xdgPath(envPicturesDir, filepath.Join(home, "Pictures"))
- UserDirs.Videos = xdgPath(envVideosDir, filepath.Join(home, "Movies"))
- UserDirs.Templates = xdgPath(envTemplatesDir, filepath.Join(home, "Templates"))
- UserDirs.PublicShare = xdgPath(envPublicShareDir, filepath.Join(home, "Public"))
-}
diff --git a/tools/vendor/github.com/adrg/xdg/paths_plan9.go b/tools/vendor/github.com/adrg/xdg/paths_plan9.go
deleted file mode 100644
index 2882f688b..000000000
--- a/tools/vendor/github.com/adrg/xdg/paths_plan9.go
+++ /dev/null
@@ -1,55 +0,0 @@
-package xdg
-
-import (
- "os"
- "path/filepath"
-)
-
-func homeDir() string {
- if home := os.Getenv("home"); home != "" {
- return home
- }
-
- return "/"
-}
-
-func initDirs(home string) {
- initBaseDirs(home)
- initUserDirs(home)
-}
-
-func initBaseDirs(home string) {
- homeLibDir := filepath.Join(home, "lib")
- rootLibDir := "/lib"
-
- // Initialize standard directories.
- baseDirs.dataHome = xdgPath(envDataHome, homeLibDir)
- baseDirs.data = xdgPaths(envDataDirs, rootLibDir)
- baseDirs.configHome = xdgPath(envConfigHome, homeLibDir)
- baseDirs.config = xdgPaths(envConfigDirs, rootLibDir)
- baseDirs.stateHome = xdgPath(envStateHome, filepath.Join(homeLibDir, "state"))
- baseDirs.cacheHome = xdgPath(envCacheHome, filepath.Join(homeLibDir, "cache"))
- baseDirs.runtime = xdgPath(envRuntimeDir, "/tmp")
-
- // Initialize non-standard directories.
- baseDirs.applications = []string{
- filepath.Join(home, "bin"),
- "/bin",
- }
-
- baseDirs.fonts = []string{
- filepath.Join(homeLibDir, "font"),
- "/lib/font",
- }
-}
-
-func initUserDirs(home string) {
- UserDirs.Desktop = xdgPath(envDesktopDir, filepath.Join(home, "desktop"))
- UserDirs.Download = xdgPath(envDownloadDir, filepath.Join(home, "downloads"))
- UserDirs.Documents = xdgPath(envDocumentsDir, filepath.Join(home, "documents"))
- UserDirs.Music = xdgPath(envMusicDir, filepath.Join(home, "music"))
- UserDirs.Pictures = xdgPath(envPicturesDir, filepath.Join(home, "pictures"))
- UserDirs.Videos = xdgPath(envVideosDir, filepath.Join(home, "videos"))
- UserDirs.Templates = xdgPath(envTemplatesDir, filepath.Join(home, "templates"))
- UserDirs.PublicShare = xdgPath(envPublicShareDir, filepath.Join(home, "public"))
-}
diff --git a/tools/vendor/github.com/adrg/xdg/paths_unix.go b/tools/vendor/github.com/adrg/xdg/paths_unix.go
deleted file mode 100644
index ad571dfc8..000000000
--- a/tools/vendor/github.com/adrg/xdg/paths_unix.go
+++ /dev/null
@@ -1,71 +0,0 @@
-//go:build aix || dragonfly || freebsd || (js && wasm) || nacl || linux || netbsd || openbsd || solaris
-// +build aix dragonfly freebsd js,wasm nacl linux netbsd openbsd solaris
-
-package xdg
-
-import (
- "os"
- "path/filepath"
- "strconv"
-
- "github.com/adrg/xdg/internal/pathutil"
-)
-
-func homeDir() string {
- if home := os.Getenv("HOME"); home != "" {
- return home
- }
-
- return "/"
-}
-
-func initDirs(home string) {
- initBaseDirs(home)
- initUserDirs(home)
-}
-
-func initBaseDirs(home string) {
- // Initialize standard directories.
- baseDirs.dataHome = xdgPath(envDataHome, filepath.Join(home, ".local", "share"))
- baseDirs.data = xdgPaths(envDataDirs, "/usr/local/share", "/usr/share")
- baseDirs.configHome = xdgPath(envConfigHome, filepath.Join(home, ".config"))
- baseDirs.config = xdgPaths(envConfigDirs, "/etc/xdg")
- baseDirs.stateHome = xdgPath(envStateHome, filepath.Join(home, ".local", "state"))
- baseDirs.cacheHome = xdgPath(envCacheHome, filepath.Join(home, ".cache"))
- baseDirs.runtime = xdgPath(envRuntimeDir, filepath.Join("/run/user", strconv.Itoa(os.Getuid())))
-
- // Initialize non-standard directories.
- appDirs := []string{
- filepath.Join(baseDirs.dataHome, "applications"),
- filepath.Join(home, ".local/share/applications"),
- "/usr/local/share/applications",
- "/usr/share/applications",
- }
-
- fontDirs := []string{
- filepath.Join(baseDirs.dataHome, "fonts"),
- filepath.Join(home, ".fonts"),
- filepath.Join(home, ".local/share/fonts"),
- "/usr/local/share/fonts",
- "/usr/share/fonts",
- }
-
- for _, dir := range baseDirs.data {
- appDirs = append(appDirs, filepath.Join(dir, "applications"))
- fontDirs = append(fontDirs, filepath.Join(dir, "fonts"))
- }
-
- baseDirs.applications = pathutil.Unique(appDirs, Home)
- baseDirs.fonts = pathutil.Unique(fontDirs, Home)
-}
-
-func initUserDirs(home string) {
- UserDirs.Desktop = xdgPath(envDesktopDir, filepath.Join(home, "Desktop"))
- UserDirs.Download = xdgPath(envDownloadDir, filepath.Join(home, "Downloads"))
- UserDirs.Documents = xdgPath(envDocumentsDir, filepath.Join(home, "Documents"))
- UserDirs.Music = xdgPath(envMusicDir, filepath.Join(home, "Music"))
- UserDirs.Pictures = xdgPath(envPicturesDir, filepath.Join(home, "Pictures"))
- UserDirs.Videos = xdgPath(envVideosDir, filepath.Join(home, "Videos"))
- UserDirs.Templates = xdgPath(envTemplatesDir, filepath.Join(home, "Templates"))
- UserDirs.PublicShare = xdgPath(envPublicShareDir, filepath.Join(home, "Public"))
-}
diff --git a/tools/vendor/github.com/adrg/xdg/paths_windows.go b/tools/vendor/github.com/adrg/xdg/paths_windows.go
deleted file mode 100644
index 722d3e785..000000000
--- a/tools/vendor/github.com/adrg/xdg/paths_windows.go
+++ /dev/null
@@ -1,168 +0,0 @@
-package xdg
-
-import (
- "path/filepath"
-
- "github.com/adrg/xdg/internal/pathutil"
- "golang.org/x/sys/windows"
-)
-
-func homeDir() string {
- return pathutil.KnownFolder(
- windows.FOLDERID_Profile,
- []string{"USERPROFILE"},
- nil,
- )
-}
-
-func initDirs(home string) {
- kf := initKnownFolders(home)
- initBaseDirs(home, kf)
- initUserDirs(home, kf)
-}
-
-func initBaseDirs(home string, kf *knownFolders) {
- // Initialize standard directories.
- baseDirs.dataHome = xdgPath(envDataHome, kf.localAppData)
- baseDirs.data = xdgPaths(envDataDirs, kf.roamingAppData, kf.programData)
- baseDirs.configHome = xdgPath(envConfigHome, kf.localAppData)
- baseDirs.config = xdgPaths(envConfigDirs, kf.programData, kf.roamingAppData)
- baseDirs.stateHome = xdgPath(envStateHome, kf.localAppData)
- baseDirs.cacheHome = xdgPath(envCacheHome, filepath.Join(kf.localAppData, "cache"))
- baseDirs.runtime = xdgPath(envRuntimeDir, kf.localAppData)
-
- // Initialize non-standard directories.
- baseDirs.applications = []string{
- kf.programs,
- kf.commonPrograms,
- }
- baseDirs.fonts = []string{
- kf.fonts,
- filepath.Join(kf.localAppData, "Microsoft", "Windows", "Fonts"),
- }
-}
-
-func initUserDirs(home string, kf *knownFolders) {
- UserDirs.Desktop = xdgPath(envDesktopDir, kf.desktop)
- UserDirs.Download = xdgPath(envDownloadDir, kf.downloads)
- UserDirs.Documents = xdgPath(envDocumentsDir, kf.documents)
- UserDirs.Music = xdgPath(envMusicDir, kf.music)
- UserDirs.Pictures = xdgPath(envPicturesDir, kf.pictures)
- UserDirs.Videos = xdgPath(envVideosDir, kf.videos)
- UserDirs.Templates = xdgPath(envTemplatesDir, kf.templates)
- UserDirs.PublicShare = xdgPath(envPublicShareDir, kf.public)
-}
-
-type knownFolders struct {
- systemDrive string
- systemRoot string
- programData string
- userProfile string
- userProfiles string
- roamingAppData string
- localAppData string
- desktop string
- downloads string
- documents string
- music string
- pictures string
- videos string
- templates string
- public string
- fonts string
- programs string
- commonPrograms string
-}
-
-func initKnownFolders(home string) *knownFolders {
- kf := &knownFolders{
- userProfile: home,
- }
- kf.systemDrive = filepath.VolumeName(pathutil.KnownFolder(
- windows.FOLDERID_Windows,
- []string{"SystemDrive", "SystemRoot", "windir"},
- []string{home, `C:`},
- )) + string(filepath.Separator)
- kf.systemRoot = pathutil.KnownFolder(
- windows.FOLDERID_Windows,
- []string{"SystemRoot", "windir"},
- []string{filepath.Join(kf.systemDrive, "Windows")},
- )
- kf.programData = pathutil.KnownFolder(
- windows.FOLDERID_ProgramData,
- []string{"ProgramData", "ALLUSERSPROFILE"},
- []string{filepath.Join(kf.systemDrive, "ProgramData")},
- )
- kf.userProfiles = pathutil.KnownFolder(
- windows.FOLDERID_UserProfiles,
- nil,
- []string{filepath.Join(kf.systemDrive, "Users")},
- )
- kf.roamingAppData = pathutil.KnownFolder(
- windows.FOLDERID_RoamingAppData,
- []string{"APPDATA"},
- []string{filepath.Join(home, "AppData", "Roaming")},
- )
- kf.localAppData = pathutil.KnownFolder(
- windows.FOLDERID_LocalAppData,
- []string{"LOCALAPPDATA"},
- []string{filepath.Join(home, "AppData", "Local")},
- )
- kf.desktop = pathutil.KnownFolder(
- windows.FOLDERID_Desktop,
- nil,
- []string{filepath.Join(home, "Desktop")},
- )
- kf.downloads = pathutil.KnownFolder(
- windows.FOLDERID_Downloads,
- nil,
- []string{filepath.Join(home, "Downloads")},
- )
- kf.documents = pathutil.KnownFolder(
- windows.FOLDERID_Documents,
- nil,
- []string{filepath.Join(home, "Documents")},
- )
- kf.music = pathutil.KnownFolder(
- windows.FOLDERID_Music,
- nil,
- []string{filepath.Join(home, "Music")},
- )
- kf.pictures = pathutil.KnownFolder(
- windows.FOLDERID_Pictures,
- nil,
- []string{filepath.Join(home, "Pictures")},
- )
- kf.videos = pathutil.KnownFolder(
- windows.FOLDERID_Videos,
- nil,
- []string{filepath.Join(home, "Videos")},
- )
- kf.templates = pathutil.KnownFolder(
- windows.FOLDERID_Templates,
- nil,
- []string{filepath.Join(kf.roamingAppData, "Microsoft", "Windows", "Templates")},
- )
- kf.public = pathutil.KnownFolder(
- windows.FOLDERID_Public,
- []string{"PUBLIC"},
- []string{filepath.Join(kf.userProfiles, "Public")},
- )
- kf.fonts = pathutil.KnownFolder(
- windows.FOLDERID_Fonts,
- nil,
- []string{filepath.Join(kf.systemRoot, "Fonts")},
- )
- kf.programs = pathutil.KnownFolder(
- windows.FOLDERID_Programs,
- nil,
- []string{filepath.Join(kf.roamingAppData, "Microsoft", "Windows", "Start Menu", "Programs")},
- )
- kf.commonPrograms = pathutil.KnownFolder(
- windows.FOLDERID_CommonPrograms,
- nil,
- []string{filepath.Join(kf.programData, "Microsoft", "Windows", "Start Menu", "Programs")},
- )
-
- return kf
-}
diff --git a/tools/vendor/github.com/adrg/xdg/user_dirs.go b/tools/vendor/github.com/adrg/xdg/user_dirs.go
deleted file mode 100644
index 72088748d..000000000
--- a/tools/vendor/github.com/adrg/xdg/user_dirs.go
+++ /dev/null
@@ -1,40 +0,0 @@
-package xdg
-
-// XDG user directories environment variables.
-const (
- envDesktopDir = "XDG_DESKTOP_DIR"
- envDownloadDir = "XDG_DOWNLOAD_DIR"
- envDocumentsDir = "XDG_DOCUMENTS_DIR"
- envMusicDir = "XDG_MUSIC_DIR"
- envPicturesDir = "XDG_PICTURES_DIR"
- envVideosDir = "XDG_VIDEOS_DIR"
- envTemplatesDir = "XDG_TEMPLATES_DIR"
- envPublicShareDir = "XDG_PUBLICSHARE_DIR"
-)
-
-// UserDirectories defines the locations of well known user directories.
-type UserDirectories struct {
- // Desktop defines the location of the user's desktop directory.
- Desktop string
-
- // Download defines a suitable location for user downloaded files.
- Download string
-
- // Documents defines a suitable location for user document files.
- Documents string
-
- // Music defines a suitable location for user audio files.
- Music string
-
- // Pictures defines a suitable location for user image files.
- Pictures string
-
-	// Videos defines a suitable location for user video files.
- Videos string
-
- // Templates defines a suitable location for user template files.
- Templates string
-
- // PublicShare defines a suitable location for user shared files.
- PublicShare string
-}
diff --git a/tools/vendor/github.com/adrg/xdg/xdg.go b/tools/vendor/github.com/adrg/xdg/xdg.go
deleted file mode 100644
index 3d33ca6e5..000000000
--- a/tools/vendor/github.com/adrg/xdg/xdg.go
+++ /dev/null
@@ -1,218 +0,0 @@
-package xdg
-
-import (
- "os"
- "path/filepath"
-
- "github.com/adrg/xdg/internal/pathutil"
-)
-
-var (
- // Home contains the path of the user's home directory.
- Home string
-
- // DataHome defines the base directory relative to which user-specific
- // data files should be stored. This directory is defined by the
- // $XDG_DATA_HOME environment variable. If the variable is not set,
- // a default equal to $HOME/.local/share should be used.
- DataHome string
-
- // DataDirs defines the preference-ordered set of base directories to
- // search for data files in addition to the DataHome base directory.
- // This set of directories is defined by the $XDG_DATA_DIRS environment
- // variable. If the variable is not set, the default directories
- // to be used are /usr/local/share and /usr/share, in that order. The
- // DataHome directory is considered more important than any of the
- // directories defined by DataDirs. Therefore, user data files should be
- // written relative to the DataHome directory, if possible.
- DataDirs []string
-
- // ConfigHome defines the base directory relative to which user-specific
- // configuration files should be written. This directory is defined by
-	// the $XDG_CONFIG_HOME environment variable. If the variable is not
-	// set, a default equal to $HOME/.config should be used.
- ConfigHome string
-
- // ConfigDirs defines the preference-ordered set of base directories to
- // search for configuration files in addition to the ConfigHome base
- // directory. This set of directories is defined by the $XDG_CONFIG_DIRS
- // environment variable. If the variable is not set, a default equal
- // to /etc/xdg should be used. The ConfigHome directory is considered
- // more important than any of the directories defined by ConfigDirs.
- // Therefore, user config files should be written relative to the
- // ConfigHome directory, if possible.
- ConfigDirs []string
-
- // StateHome defines the base directory relative to which user-specific
- // state files should be stored. This directory is defined by the
- // $XDG_STATE_HOME environment variable. If the variable is not set,
- // a default equal to ~/.local/state should be used.
- StateHome string
-
- // CacheHome defines the base directory relative to which user-specific
- // non-essential (cached) data should be written. This directory is
- // defined by the $XDG_CACHE_HOME environment variable. If the variable
- // is not set, a default equal to $HOME/.cache should be used.
- CacheHome string
-
- // RuntimeDir defines the base directory relative to which user-specific
- // non-essential runtime files and other file objects (such as sockets,
- // named pipes, etc.) should be stored. This directory is defined by the
- // $XDG_RUNTIME_DIR environment variable. If the variable is not set,
- // applications should fall back to a replacement directory with similar
- // capabilities. Applications should use this directory for communication
- // and synchronization purposes and should not place larger files in it,
- // since it might reside in runtime memory and cannot necessarily be
- // swapped out to disk.
- RuntimeDir string
-
- // UserDirs defines the locations of well known user directories.
- UserDirs UserDirectories
-
- // FontDirs defines the common locations where font files are stored.
- FontDirs []string
-
- // ApplicationDirs defines the common locations of applications.
- ApplicationDirs []string
-
- // baseDirs defines the locations of base directories.
- baseDirs baseDirectories
-)
-
-func init() {
- Reload()
-}
-
-// Reload refreshes base and user directories by reading the environment.
-// Defaults are applied for XDG variables which are empty or not present
-// in the environment.
-func Reload() {
- // Initialize home directory.
- Home = homeDir()
-
- // Initialize base and user directories.
- initDirs(Home)
-
- // Set standard directories.
- DataHome = baseDirs.dataHome
- DataDirs = baseDirs.data
- ConfigHome = baseDirs.configHome
- ConfigDirs = baseDirs.config
- StateHome = baseDirs.stateHome
- CacheHome = baseDirs.cacheHome
- RuntimeDir = baseDirs.runtime
-
- // Set non-standard directories.
- FontDirs = baseDirs.fonts
- ApplicationDirs = baseDirs.applications
-}
-
-// DataFile returns a suitable location for the specified data file.
-// The relPath parameter must contain the name of the data file, and
-// optionally, a set of parent directories (e.g. appname/app.data).
-// If the specified directories do not exist, they will be created relative
-// to the base data directory. On failure, an error containing the
-// attempted paths is returned.
-func DataFile(relPath string) (string, error) {
- return baseDirs.dataFile(relPath)
-}
-
-// ConfigFile returns a suitable location for the specified config file.
-// The relPath parameter must contain the name of the config file, and
-// optionally, a set of parent directories (e.g. appname/app.yaml).
-// If the specified directories do not exist, they will be created relative
-// to the base config directory. On failure, an error containing the
-// attempted paths is returned.
-func ConfigFile(relPath string) (string, error) {
- return baseDirs.configFile(relPath)
-}
-
-// StateFile returns a suitable location for the specified state file. State
-// files are usually volatile data files, not suitable to be stored relative
-// to the $XDG_DATA_HOME directory.
-// The relPath parameter must contain the name of the state file, and
-// optionally, a set of parent directories (e.g. appname/app.state).
-// If the specified directories do not exist, they will be created relative
-// to the base state directory. On failure, an error containing the
-// attempted paths is returned.
-func StateFile(relPath string) (string, error) {
- return baseDirs.stateFile(relPath)
-}
-
-// CacheFile returns a suitable location for the specified cache file.
-// The relPath parameter must contain the name of the cache file, and
-// optionally, a set of parent directories (e.g. appname/app.cache).
-// If the specified directories do not exist, they will be created relative
-// to the base cache directory. On failure, an error containing the
-// attempted paths is returned.
-func CacheFile(relPath string) (string, error) {
- return baseDirs.cacheFile(relPath)
-}
-
-// RuntimeFile returns a suitable location for the specified runtime file.
-// The relPath parameter must contain the name of the runtime file, and
-// optionally, a set of parent directories (e.g. appname/app.pid).
-// If the specified directories do not exist, they will be created relative
-// to the base runtime directory. On failure, an error containing the
-// attempted paths is returned.
-func RuntimeFile(relPath string) (string, error) {
- return baseDirs.runtimeFile(relPath)
-}
-
-// SearchDataFile searches for specified file in the data search paths.
-// The relPath parameter must contain the name of the data file, and
-// optionally, a set of parent directories (e.g. appname/app.data). If the
-// file cannot be found, an error specifying the searched paths is returned.
-func SearchDataFile(relPath string) (string, error) {
- return baseDirs.searchDataFile(relPath)
-}
-
-// SearchConfigFile searches for the specified file in config search paths.
-// The relPath parameter must contain the name of the config file, and
-// optionally, a set of parent directories (e.g. appname/app.yaml). If the
-// file cannot be found, an error specifying the searched paths is returned.
-func SearchConfigFile(relPath string) (string, error) {
- return baseDirs.searchConfigFile(relPath)
-}
-
-// SearchStateFile searches for the specified file in the state search path.
-// The relPath parameter must contain the name of the state file, and
-// optionally, a set of parent directories (e.g. appname/app.state). If the
-// file cannot be found, an error specifying the searched path is returned.
-func SearchStateFile(relPath string) (string, error) {
- return baseDirs.searchStateFile(relPath)
-}
-
-// SearchCacheFile searches for the specified file in the cache search path.
-// The relPath parameter must contain the name of the cache file, and
-// optionally, a set of parent directories (e.g. appname/app.cache). If the
-// file cannot be found, an error specifying the searched path is returned.
-func SearchCacheFile(relPath string) (string, error) {
- return baseDirs.searchCacheFile(relPath)
-}
-
-// SearchRuntimeFile searches for the specified file in the runtime search path.
-// The relPath parameter must contain the name of the runtime file, and
-// optionally, a set of parent directories (e.g. appname/app.pid). If the
-// file cannot be found, an error specifying the searched path is returned.
-func SearchRuntimeFile(relPath string) (string, error) {
- return baseDirs.searchRuntimeFile(relPath)
-}
-
-func xdgPath(name, defaultPath string) string {
- dir := pathutil.ExpandHome(os.Getenv(name), Home)
- if dir != "" && filepath.IsAbs(dir) {
- return dir
- }
-
- return defaultPath
-}
-
-func xdgPaths(name string, defaultPaths ...string) []string {
- dirs := pathutil.Unique(filepath.SplitList(os.Getenv(name)), Home)
- if len(dirs) != 0 {
- return dirs
- }
-
- return pathutil.Unique(defaultPaths, Home)
-}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn.go
deleted file mode 100644
index a4e2079e6..000000000
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn.go
+++ /dev/null
@@ -1,159 +0,0 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-// Use of this file is governed by the BSD 3-clause license that
-// can be found in the LICENSE.txt file in the project root.
-
-package antlr
-
-import "sync"
-
-var ATNInvalidAltNumber int
-
-type ATN struct {
- // DecisionToState is the decision points for all rules, subrules, optional
- // blocks, ()+, ()*, etc. Used to build DFA predictors for them.
- DecisionToState []DecisionState
-
- // grammarType is the ATN type and is used for deserializing ATNs from strings.
- grammarType int
-
- // lexerActions is referenced by action transitions in the ATN for lexer ATNs.
- lexerActions []LexerAction
-
- // maxTokenType is the maximum value for any symbol recognized by a transition in the ATN.
- maxTokenType int
-
- modeNameToStartState map[string]*TokensStartState
-
- modeToStartState []*TokensStartState
-
- // ruleToStartState maps from rule index to starting state number.
- ruleToStartState []*RuleStartState
-
- // ruleToStopState maps from rule index to stop state number.
- ruleToStopState []*RuleStopState
-
- // ruleToTokenType maps the rule index to the resulting token type for lexer
- // ATNs. For parser ATNs, it maps the rule index to the generated bypass token
- // type if ATNDeserializationOptions.isGenerateRuleBypassTransitions was
- // specified, and otherwise is nil.
- ruleToTokenType []int
-
- states []ATNState
-
- mu sync.Mutex
- stateMu sync.RWMutex
- edgeMu sync.RWMutex
-}
-
-func NewATN(grammarType int, maxTokenType int) *ATN {
- return &ATN{
- grammarType: grammarType,
- maxTokenType: maxTokenType,
- modeNameToStartState: make(map[string]*TokensStartState),
- }
-}
-
-// NextTokensInContext computes the set of valid tokens that can occur starting
-// in state s. If ctx is nil, the set of tokens will not include what can follow
-// the rule surrounding s. In other words, the set will be restricted to tokens
-// reachable staying within the rule of s.
-func (a *ATN) NextTokensInContext(s ATNState, ctx RuleContext) *IntervalSet {
- return NewLL1Analyzer(a).Look(s, nil, ctx)
-}
-
-// NextTokensNoContext computes the set of valid tokens that can occur starting
-// in s and staying in same rule. Token.EPSILON is in set if we reach end of
-// rule.
-func (a *ATN) NextTokensNoContext(s ATNState) *IntervalSet {
- a.mu.Lock()
- defer a.mu.Unlock()
- iset := s.GetNextTokenWithinRule()
- if iset == nil {
- iset = a.NextTokensInContext(s, nil)
- iset.readOnly = true
- s.SetNextTokenWithinRule(iset)
- }
- return iset
-}
-
-func (a *ATN) NextTokens(s ATNState, ctx RuleContext) *IntervalSet {
- if ctx == nil {
- return a.NextTokensNoContext(s)
- }
-
- return a.NextTokensInContext(s, ctx)
-}
-
-func (a *ATN) addState(state ATNState) {
- if state != nil {
- state.SetATN(a)
- state.SetStateNumber(len(a.states))
- }
-
- a.states = append(a.states, state)
-}
-
-func (a *ATN) removeState(state ATNState) {
- a.states[state.GetStateNumber()] = nil // Just free the memory; don't shift states in the slice
-}
-
-func (a *ATN) defineDecisionState(s DecisionState) int {
- a.DecisionToState = append(a.DecisionToState, s)
- s.setDecision(len(a.DecisionToState) - 1)
-
- return s.getDecision()
-}
-
-func (a *ATN) getDecisionState(decision int) DecisionState {
- if len(a.DecisionToState) == 0 {
- return nil
- }
-
- return a.DecisionToState[decision]
-}
-
-// getExpectedTokens computes the set of input symbols which could follow ATN
-// state number stateNumber in the specified full parse context ctx and returns
-// the set of potentially valid input symbols which could follow the specified
-// state in the specified context. This method considers the complete parser
-// context, but does not evaluate semantic predicates (i.e. all predicates
-// encountered during the calculation are assumed true). If a path in the ATN
-// exists from the starting state to the RuleStopState of the outermost context
-// without Matching any symbols, Token.EOF is added to the returned set.
-//
-// A nil ctx defaults to ParserRuleContext.EMPTY.
-//
-// It panics if the ATN does not contain state stateNumber.
-func (a *ATN) getExpectedTokens(stateNumber int, ctx RuleContext) *IntervalSet {
- if stateNumber < 0 || stateNumber >= len(a.states) {
- panic("Invalid state number.")
- }
-
- s := a.states[stateNumber]
- following := a.NextTokens(s, nil)
-
- if !following.contains(TokenEpsilon) {
- return following
- }
-
- expected := NewIntervalSet()
-
- expected.addSet(following)
- expected.removeOne(TokenEpsilon)
-
- for ctx != nil && ctx.GetInvokingState() >= 0 && following.contains(TokenEpsilon) {
- invokingState := a.states[ctx.GetInvokingState()]
- rt := invokingState.GetTransitions()[0]
-
- following = a.NextTokens(rt.(*RuleTransition).followState, nil)
- expected.addSet(following)
- expected.removeOne(TokenEpsilon)
- ctx = ctx.GetParent().(RuleContext)
- }
-
- if following.contains(TokenEpsilon) {
- expected.addOne(TokenEOF)
- }
-
- return expected
-}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/errors.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/errors.go
deleted file mode 100644
index 2ef74926e..000000000
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/errors.go
+++ /dev/null
@@ -1,241 +0,0 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-// Use of this file is governed by the BSD 3-clause license that
-// can be found in the LICENSE.txt file in the project root.
-
-package antlr
-
-// The root of the ANTLR exception hierarchy. In general, ANTLR tracks just
-// 3 kinds of errors: prediction errors, failed predicate errors, and
-// mismatched input errors. In each case, the parser knows where it is
-// in the input, where it is in the ATN, the rule invocation stack,
-// and what kind of problem occurred.
-
-type RecognitionException interface {
- GetOffendingToken() Token
- GetMessage() string
- GetInputStream() IntStream
-}
-
-type BaseRecognitionException struct {
- message string
- recognizer Recognizer
- offendingToken Token
- offendingState int
- ctx RuleContext
- input IntStream
-}
-
-func NewBaseRecognitionException(message string, recognizer Recognizer, input IntStream, ctx RuleContext) *BaseRecognitionException {
-
- // todo
- // Error.call(this)
- //
- // if (!!Error.captureStackTrace) {
- // Error.captureStackTrace(this, RecognitionException)
- // } else {
- // stack := NewError().stack
- // }
- // TODO may be able to use - "runtime" func Stack(buf []byte, all bool) int
-
- t := new(BaseRecognitionException)
-
- t.message = message
- t.recognizer = recognizer
- t.input = input
- t.ctx = ctx
- // The current {@link Token} when an error occurred. Since not all streams
- // support accessing symbols by index, we have to track the {@link Token}
- // instance itself.
- t.offendingToken = nil
- // Get the ATN state number the parser was in at the time the error
- // occurred. For {@link NoViableAltException} and
- // {@link LexerNoViableAltException} exceptions, this is the
- // {@link DecisionState} number. For others, it is the state whose outgoing
- // edge we couldn't Match.
- t.offendingState = -1
- if t.recognizer != nil {
- t.offendingState = t.recognizer.GetState()
- }
-
- return t
-}
-
-func (b *BaseRecognitionException) GetMessage() string {
- return b.message
-}
-
-func (b *BaseRecognitionException) GetOffendingToken() Token {
- return b.offendingToken
-}
-
-func (b *BaseRecognitionException) GetInputStream() IntStream {
- return b.input
-}
-
-// If the state number is not known, this method returns -1.
-
-//
-// Gets the set of input symbols which could potentially follow the
-// previously Matched symbol at the time this exception was raised.
-//
-// If the set of expected tokens is not known and could not be computed,
-// this method returns {@code nil}.
-//
-// @return The set of token types that could potentially follow the current
-// state in the ATN, or {@code nil} if the information is not available.
-func (b *BaseRecognitionException) getExpectedTokens() *IntervalSet {
- if b.recognizer != nil {
- return b.recognizer.GetATN().getExpectedTokens(b.offendingState, b.ctx)
- }
-
- return nil
-}
-
-func (b *BaseRecognitionException) String() string {
- return b.message
-}
-
-type LexerNoViableAltException struct {
- *BaseRecognitionException
-
- startIndex int
- deadEndConfigs ATNConfigSet
-}
-
-func NewLexerNoViableAltException(lexer Lexer, input CharStream, startIndex int, deadEndConfigs ATNConfigSet) *LexerNoViableAltException {
-
- l := new(LexerNoViableAltException)
-
- l.BaseRecognitionException = NewBaseRecognitionException("", lexer, input, nil)
-
- l.startIndex = startIndex
- l.deadEndConfigs = deadEndConfigs
-
- return l
-}
-
-func (l *LexerNoViableAltException) String() string {
- symbol := ""
- if l.startIndex >= 0 && l.startIndex < l.input.Size() {
- symbol = l.input.(CharStream).GetTextFromInterval(NewInterval(l.startIndex, l.startIndex))
- }
- return "LexerNoViableAltException" + symbol
-}
-
-type NoViableAltException struct {
- *BaseRecognitionException
-
- startToken Token
- offendingToken Token
- ctx ParserRuleContext
- deadEndConfigs ATNConfigSet
-}
-
-// Indicates that the parser could not decide which of two or more paths
-// to take based upon the remaining input. It tracks the starting token
-// of the offending input and also knows where the parser was
-// in the various paths when the error occurred. Reported by ReportNoViableAlternative()
-//
-func NewNoViableAltException(recognizer Parser, input TokenStream, startToken Token, offendingToken Token, deadEndConfigs ATNConfigSet, ctx ParserRuleContext) *NoViableAltException {
-
- if ctx == nil {
- ctx = recognizer.GetParserRuleContext()
- }
-
- if offendingToken == nil {
- offendingToken = recognizer.GetCurrentToken()
- }
-
- if startToken == nil {
- startToken = recognizer.GetCurrentToken()
- }
-
- if input == nil {
- input = recognizer.GetInputStream().(TokenStream)
- }
-
- n := new(NoViableAltException)
- n.BaseRecognitionException = NewBaseRecognitionException("", recognizer, input, ctx)
-
- // Which configurations did we try at input.Index() that couldn't Match
- // input.LT(1)?//
- n.deadEndConfigs = deadEndConfigs
- // The token object at the start index the input stream might
- // not be buffering tokens so get a reference to it. (At the
- // time the error occurred, of course the stream needs to keep a
- // buffer all of the tokens but later we might not have access to those.)
- n.startToken = startToken
- n.offendingToken = offendingToken
-
- return n
-}
-
-type InputMisMatchException struct {
- *BaseRecognitionException
-}
-
-// This signifies any kind of mismatched input exceptions such as
-// when the current input does not Match the expected token.
-//
-func NewInputMisMatchException(recognizer Parser) *InputMisMatchException {
-
- i := new(InputMisMatchException)
- i.BaseRecognitionException = NewBaseRecognitionException("", recognizer, recognizer.GetInputStream(), recognizer.GetParserRuleContext())
-
- i.offendingToken = recognizer.GetCurrentToken()
-
- return i
-
-}
-
-// A semantic predicate failed during validation. Validation of predicates
-// occurs when normally parsing the alternative just like Matching a token.
-// Disambiguating predicate evaluation occurs when we test a predicate during
-// prediction.
-
-type FailedPredicateException struct {
- *BaseRecognitionException
-
- ruleIndex int
- predicateIndex int
- predicate string
-}
-
-func NewFailedPredicateException(recognizer Parser, predicate string, message string) *FailedPredicateException {
-
- f := new(FailedPredicateException)
-
- f.BaseRecognitionException = NewBaseRecognitionException(f.formatMessage(predicate, message), recognizer, recognizer.GetInputStream(), recognizer.GetParserRuleContext())
-
- s := recognizer.GetInterpreter().atn.states[recognizer.GetState()]
- trans := s.GetTransitions()[0]
- if trans2, ok := trans.(*PredicateTransition); ok {
- f.ruleIndex = trans2.ruleIndex
- f.predicateIndex = trans2.predIndex
- } else {
- f.ruleIndex = 0
- f.predicateIndex = 0
- }
- f.predicate = predicate
- f.offendingToken = recognizer.GetCurrentToken()
-
- return f
-}
-
-func (f *FailedPredicateException) formatMessage(predicate, message string) string {
- if message != "" {
- return message
- }
-
- return "failed predicate: {" + predicate + "}?"
-}
-
-type ParseCancellationException struct {
-}
-
-func NewParseCancellationException() *ParseCancellationException {
- // Error.call(this)
- // Error.captureStackTrace(this, ParseCancellationException)
- return new(ParseCancellationException)
-}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser.go
deleted file mode 100644
index 2ab2f5605..000000000
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser.go
+++ /dev/null
@@ -1,718 +0,0 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-// Use of this file is governed by the BSD 3-clause license that
-// can be found in the LICENSE.txt file in the project root.
-
-package antlr
-
-import (
- "fmt"
- "strconv"
-)
-
-type Parser interface {
- Recognizer
-
- GetInterpreter() *ParserATNSimulator
-
- GetTokenStream() TokenStream
- GetTokenFactory() TokenFactory
- GetParserRuleContext() ParserRuleContext
- SetParserRuleContext(ParserRuleContext)
- Consume() Token
- GetParseListeners() []ParseTreeListener
-
- GetErrorHandler() ErrorStrategy
- SetErrorHandler(ErrorStrategy)
- GetInputStream() IntStream
- GetCurrentToken() Token
- GetExpectedTokens() *IntervalSet
- NotifyErrorListeners(string, Token, RecognitionException)
- IsExpectedToken(int) bool
- GetPrecedence() int
- GetRuleInvocationStack(ParserRuleContext) []string
-}
-
-type BaseParser struct {
- *BaseRecognizer
-
- Interpreter *ParserATNSimulator
- BuildParseTrees bool
-
- input TokenStream
- errHandler ErrorStrategy
- precedenceStack IntStack
- ctx ParserRuleContext
-
- tracer *TraceListener
- parseListeners []ParseTreeListener
- _SyntaxErrors int
-}
-
-// This is all the parsing support code; essentially most of it is error
-// recovery stuff.//
-func NewBaseParser(input TokenStream) *BaseParser {
-
- p := new(BaseParser)
-
- p.BaseRecognizer = NewBaseRecognizer()
-
- // The input stream.
- p.input = nil
- // The error handling strategy for the parser. The default value is a new
- // instance of {@link DefaultErrorStrategy}.
- p.errHandler = NewDefaultErrorStrategy()
- p.precedenceStack = make([]int, 0)
- p.precedenceStack.Push(0)
- // The {@link ParserRuleContext} object for the currently executing rule.
-	// This is always non-nil during the parsing process.
- p.ctx = nil
- // Specifies whether or not the parser should construct a parse tree during
- // the parsing process. The default value is {@code true}.
- p.BuildParseTrees = true
- // When {@link //setTrace}{@code (true)} is called, a reference to the
- // {@link TraceListener} is stored here so it can be easily removed in a
- // later call to {@link //setTrace}{@code (false)}. The listener itself is
-	// implemented as a parser listener so this field is not directly used by
- // other parser methods.
- p.tracer = nil
- // The list of {@link ParseTreeListener} listeners registered to receive
- // events during the parse.
- p.parseListeners = nil
- // The number of syntax errors Reported during parsing. p.value is
- // incremented each time {@link //NotifyErrorListeners} is called.
- p._SyntaxErrors = 0
- p.SetInputStream(input)
-
- return p
-}
-
-// p.field maps from the serialized ATN string to the deserialized {@link
-// ATN} with
-// bypass alternatives.
-//
-// @see ATNDeserializationOptions//isGenerateRuleBypassTransitions()
-//
-var bypassAltsAtnCache = make(map[string]int)
-
-// reset the parser's state//
-func (p *BaseParser) reset() {
- if p.input != nil {
- p.input.Seek(0)
- }
- p.errHandler.reset(p)
- p.ctx = nil
- p._SyntaxErrors = 0
- p.SetTrace(nil)
- p.precedenceStack = make([]int, 0)
- p.precedenceStack.Push(0)
- if p.Interpreter != nil {
- p.Interpreter.reset()
- }
-}
-
-func (p *BaseParser) GetErrorHandler() ErrorStrategy {
- return p.errHandler
-}
-
-func (p *BaseParser) SetErrorHandler(e ErrorStrategy) {
- p.errHandler = e
-}
-
-// Match current input symbol against {@code ttype}. If the symbol type
-// Matches, {@link ANTLRErrorStrategy//ReportMatch} and {@link //consume} are
-// called to complete the Match process.
-//
-// If the symbol type does not Match,
-// {@link ANTLRErrorStrategy//recoverInline} is called on the current error
-// strategy to attempt recovery. If {@link //getBuildParseTree} is
-// {@code true} and the token index of the symbol returned by
-// {@link ANTLRErrorStrategy//recoverInline} is -1, the symbol is added to
-// the parse tree by calling {@link ParserRuleContext//addErrorNode}.
-//
-// @param ttype the token type to Match
-// @return the Matched symbol
-// @panics RecognitionException if the current input symbol did not Match
-// {@code ttype} and the error strategy could not recover from the
-// mismatched symbol
-
-func (p *BaseParser) Match(ttype int) Token {
-
- t := p.GetCurrentToken()
-
- if t.GetTokenType() == ttype {
- p.errHandler.ReportMatch(p)
- p.Consume()
- } else {
- t = p.errHandler.RecoverInline(p)
- if p.BuildParseTrees && t.GetTokenIndex() == -1 {
- // we must have conjured up a Newtoken during single token
- // insertion
- // if it's not the current symbol
- p.ctx.AddErrorNode(t)
- }
- }
-
- return t
-}
-
-// Match current input symbol as a wildcard. If the symbol type Matches
-// (i.e. has a value greater than 0), {@link ANTLRErrorStrategy//ReportMatch}
-// and {@link //consume} are called to complete the Match process.
-//
-// If the symbol type does not Match,
-// {@link ANTLRErrorStrategy//recoverInline} is called on the current error
-// strategy to attempt recovery. If {@link //getBuildParseTree} is
-// {@code true} and the token index of the symbol returned by
-// {@link ANTLRErrorStrategy//recoverInline} is -1, the symbol is added to
-// the parse tree by calling {@link ParserRuleContext//addErrorNode}.
-//
-// @return the Matched symbol
-// @panics RecognitionException if the current input symbol did not Match
-// a wildcard and the error strategy could not recover from the mismatched
-// symbol
-
-func (p *BaseParser) MatchWildcard() Token {
- t := p.GetCurrentToken()
- if t.GetTokenType() > 0 {
- p.errHandler.ReportMatch(p)
- p.Consume()
- } else {
- t = p.errHandler.RecoverInline(p)
- if p.BuildParseTrees && t.GetTokenIndex() == -1 {
- // we must have conjured up a Newtoken during single token
- // insertion
- // if it's not the current symbol
- p.ctx.AddErrorNode(t)
- }
- }
- return t
-}
-
-func (p *BaseParser) GetParserRuleContext() ParserRuleContext {
- return p.ctx
-}
-
-func (p *BaseParser) SetParserRuleContext(v ParserRuleContext) {
- p.ctx = v
-}
-
-func (p *BaseParser) GetParseListeners() []ParseTreeListener {
- if p.parseListeners == nil {
- return make([]ParseTreeListener, 0)
- }
- return p.parseListeners
-}
-
-// Registers {@code listener} to receive events during the parsing process.
-//
-// To support output-preserving grammar transformations (including but not
-// limited to left-recursion removal, automated left-factoring, and
-// optimized code generation), calls to listener methods during the parse
-// may differ substantially from calls made by
-// {@link ParseTreeWalker//DEFAULT} used after the parse is complete. In
-// particular, rule entry and exit events may occur in a different order
-// during the parse than after the parser. In addition, calls to certain
-// rule entry methods may be omitted.
-//
-// With the following specific exceptions, calls to listener events are
-// deterministic, i.e. for identical input the calls to listener
-// methods will be the same.
-//
-//   - Alterations to the grammar used to generate code may change the
-//     behavior of the listener calls.
-//   - Alterations to the command line options passed to ANTLR 4 when
-//     generating the parser may change the behavior of the listener calls.
-//   - Changing the version of the ANTLR Tool used to generate the parser
-//     may change the behavior of the listener calls.
-//
-// @param listener the listener to add
-//
-// @panics nilPointerException if {@code} listener is {@code nil}
-//
-func (p *BaseParser) AddParseListener(listener ParseTreeListener) {
- if listener == nil {
- panic("listener")
- }
- if p.parseListeners == nil {
- p.parseListeners = make([]ParseTreeListener, 0)
- }
- p.parseListeners = append(p.parseListeners, listener)
-}
-
-//
-// Remove {@code listener} from the list of parse listeners.
-//
-// If {@code listener} is {@code nil} or has not been added as a parse
-// listener, p.method does nothing.
-// @param listener the listener to remove
-//
-func (p *BaseParser) RemoveParseListener(listener ParseTreeListener) {
-
- if p.parseListeners != nil {
-
- idx := -1
- for i, v := range p.parseListeners {
- if v == listener {
- idx = i
- break
- }
- }
-
- if idx == -1 {
- return
- }
-
- // remove the listener from the slice
- p.parseListeners = append(p.parseListeners[0:idx], p.parseListeners[idx+1:]...)
-
- if len(p.parseListeners) == 0 {
- p.parseListeners = nil
- }
- }
-}
-
-// Remove all parse listeners.
-func (p *BaseParser) removeParseListeners() {
- p.parseListeners = nil
-}
-
-// Notify any parse listeners of an enter rule event.
-func (p *BaseParser) TriggerEnterRuleEvent() {
- if p.parseListeners != nil {
- ctx := p.ctx
- for _, listener := range p.parseListeners {
- listener.EnterEveryRule(ctx)
- ctx.EnterRule(listener)
- }
- }
-}
-
-//
-// Notify any parse listeners of an exit rule event.
-//
-// @see //addParseListener
-//
-func (p *BaseParser) TriggerExitRuleEvent() {
- if p.parseListeners != nil {
- // reverse order walk of listeners
- ctx := p.ctx
- l := len(p.parseListeners) - 1
-
- for i := range p.parseListeners {
- listener := p.parseListeners[l-i]
- ctx.ExitRule(listener)
- listener.ExitEveryRule(ctx)
- }
- }
-}
-
-func (p *BaseParser) GetInterpreter() *ParserATNSimulator {
- return p.Interpreter
-}
-
-func (p *BaseParser) GetATN() *ATN {
- return p.Interpreter.atn
-}
-
-func (p *BaseParser) GetTokenFactory() TokenFactory {
- return p.input.GetTokenSource().GetTokenFactory()
-}
-
-// Tell our token source and error strategy about a Newway to create tokens.//
-func (p *BaseParser) setTokenFactory(factory TokenFactory) {
- p.input.GetTokenSource().setTokenFactory(factory)
-}
-
-// The ATN with bypass alternatives is expensive to create so we create it
-// lazily.
-//
-// @panics UnsupportedOperationException if the current parser does not
-// implement the {@link //getSerializedATN()} method.
-//
-func (p *BaseParser) GetATNWithBypassAlts() {
-
- // TODO
- panic("Not implemented!")
-
- // serializedAtn := p.getSerializedATN()
- // if (serializedAtn == nil) {
- // panic("The current parser does not support an ATN with bypass alternatives.")
- // }
- // result := p.bypassAltsAtnCache[serializedAtn]
- // if (result == nil) {
- // deserializationOptions := NewATNDeserializationOptions(nil)
- // deserializationOptions.generateRuleBypassTransitions = true
- // result = NewATNDeserializer(deserializationOptions).deserialize(serializedAtn)
- // p.bypassAltsAtnCache[serializedAtn] = result
- // }
- // return result
-}
-
-// The preferred method of getting a tree pattern. For example, here's a
-// sample use:
-//
-//
-// ParseTree t = parser.expr()
-// ParseTreePattern p = parser.compileParseTreePattern("<ID>+0",
-// MyParser.RULE_expr)
-// ParseTreeMatch m = p.Match(t)
-// String id = m.Get("ID")
-//
-
-func (p *BaseParser) compileParseTreePattern(pattern, patternRuleIndex, lexer Lexer) {
-
- panic("NewParseTreePatternMatcher not implemented!")
- //
- // if (lexer == nil) {
- // if (p.GetTokenStream() != nil) {
- // tokenSource := p.GetTokenStream().GetTokenSource()
- // if _, ok := tokenSource.(ILexer); ok {
- // lexer = tokenSource
- // }
- // }
- // }
- // if (lexer == nil) {
- // panic("Parser can't discover a lexer to use")
- // }
-
- // m := NewParseTreePatternMatcher(lexer, p)
- // return m.compile(pattern, patternRuleIndex)
-}
-
-func (p *BaseParser) GetInputStream() IntStream {
- return p.GetTokenStream()
-}
-
-func (p *BaseParser) SetInputStream(input TokenStream) {
- p.SetTokenStream(input)
-}
-
-func (p *BaseParser) GetTokenStream() TokenStream {
- return p.input
-}
-
-// Set the token stream and reset the parser.//
-func (p *BaseParser) SetTokenStream(input TokenStream) {
- p.input = nil
- p.reset()
- p.input = input
-}
-
-// Match needs to return the current input symbol, which gets put
-// into the label for the associated token ref e.g., x=ID.
-//
-func (p *BaseParser) GetCurrentToken() Token {
- return p.input.LT(1)
-}
-
-func (p *BaseParser) NotifyErrorListeners(msg string, offendingToken Token, err RecognitionException) {
- if offendingToken == nil {
- offendingToken = p.GetCurrentToken()
- }
- p._SyntaxErrors++
- line := offendingToken.GetLine()
- column := offendingToken.GetColumn()
- listener := p.GetErrorListenerDispatch()
- listener.SyntaxError(p, offendingToken, line, column, msg, err)
-}
-
-func (p *BaseParser) Consume() Token {
- o := p.GetCurrentToken()
- if o.GetTokenType() != TokenEOF {
- p.GetInputStream().Consume()
- }
- hasListener := p.parseListeners != nil && len(p.parseListeners) > 0
- if p.BuildParseTrees || hasListener {
- if p.errHandler.InErrorRecoveryMode(p) {
- node := p.ctx.AddErrorNode(o)
- if p.parseListeners != nil {
- for _, l := range p.parseListeners {
- l.VisitErrorNode(node)
- }
- }
-
- } else {
- node := p.ctx.AddTokenNode(o)
- if p.parseListeners != nil {
- for _, l := range p.parseListeners {
- l.VisitTerminal(node)
- }
- }
- }
- // node.invokingState = p.state
- }
-
- return o
-}
-
-func (p *BaseParser) addContextToParseTree() {
- // add current context to parent if we have a parent
- if p.ctx.GetParent() != nil {
- p.ctx.GetParent().(ParserRuleContext).AddChild(p.ctx)
- }
-}
-
-func (p *BaseParser) EnterRule(localctx ParserRuleContext, state, ruleIndex int) {
- p.SetState(state)
- p.ctx = localctx
- p.ctx.SetStart(p.input.LT(1))
- if p.BuildParseTrees {
- p.addContextToParseTree()
- }
- if p.parseListeners != nil {
- p.TriggerEnterRuleEvent()
- }
-}
-
-func (p *BaseParser) ExitRule() {
- p.ctx.SetStop(p.input.LT(-1))
- // trigger event on ctx, before it reverts to parent
- if p.parseListeners != nil {
- p.TriggerExitRuleEvent()
- }
- p.SetState(p.ctx.GetInvokingState())
- if p.ctx.GetParent() != nil {
- p.ctx = p.ctx.GetParent().(ParserRuleContext)
- } else {
- p.ctx = nil
- }
-}
-
-func (p *BaseParser) EnterOuterAlt(localctx ParserRuleContext, altNum int) {
- localctx.SetAltNumber(altNum)
- // if we have Newlocalctx, make sure we replace existing ctx
- // that is previous child of parse tree
- if p.BuildParseTrees && p.ctx != localctx {
- if p.ctx.GetParent() != nil {
- p.ctx.GetParent().(ParserRuleContext).RemoveLastChild()
- p.ctx.GetParent().(ParserRuleContext).AddChild(localctx)
- }
- }
- p.ctx = localctx
-}
-
-// Get the precedence level for the top-most precedence rule.
-//
-// @return The precedence level for the top-most precedence rule, or -1 if
-// the parser context is not nested within a precedence rule.
-
-func (p *BaseParser) GetPrecedence() int {
- if len(p.precedenceStack) == 0 {
- return -1
- }
-
- return p.precedenceStack[len(p.precedenceStack)-1]
-}
-
-func (p *BaseParser) EnterRecursionRule(localctx ParserRuleContext, state, ruleIndex, precedence int) {
- p.SetState(state)
- p.precedenceStack.Push(precedence)
- p.ctx = localctx
- p.ctx.SetStart(p.input.LT(1))
- if p.parseListeners != nil {
- p.TriggerEnterRuleEvent() // simulates rule entry for
- // left-recursive rules
- }
-}
-
-//
-// Like {@link //EnterRule} but for recursive rules.
-
-func (p *BaseParser) PushNewRecursionContext(localctx ParserRuleContext, state, ruleIndex int) {
- previous := p.ctx
- previous.SetParent(localctx)
- previous.SetInvokingState(state)
- previous.SetStop(p.input.LT(-1))
-
- p.ctx = localctx
- p.ctx.SetStart(previous.GetStart())
- if p.BuildParseTrees {
- p.ctx.AddChild(previous)
- }
- if p.parseListeners != nil {
- p.TriggerEnterRuleEvent() // simulates rule entry for
- // left-recursive rules
- }
-}
-
-func (p *BaseParser) UnrollRecursionContexts(parentCtx ParserRuleContext) {
- p.precedenceStack.Pop()
- p.ctx.SetStop(p.input.LT(-1))
- retCtx := p.ctx // save current ctx (return value)
- // unroll so ctx is as it was before call to recursive method
- if p.parseListeners != nil {
- for p.ctx != parentCtx {
- p.TriggerExitRuleEvent()
- p.ctx = p.ctx.GetParent().(ParserRuleContext)
- }
- } else {
- p.ctx = parentCtx
- }
- // hook into tree
- retCtx.SetParent(parentCtx)
- if p.BuildParseTrees && parentCtx != nil {
- // add return ctx into invoking rule's tree
- parentCtx.AddChild(retCtx)
- }
-}
-
-func (p *BaseParser) GetInvokingContext(ruleIndex int) ParserRuleContext {
- ctx := p.ctx
- for ctx != nil {
- if ctx.GetRuleIndex() == ruleIndex {
- return ctx
- }
- ctx = ctx.GetParent().(ParserRuleContext)
- }
- return nil
-}
-
-func (p *BaseParser) Precpred(localctx RuleContext, precedence int) bool {
- return precedence >= p.precedenceStack[len(p.precedenceStack)-1]
-}
-
-func (p *BaseParser) inContext(context ParserRuleContext) bool {
- // TODO: useful in parser?
- return false
-}
-
-//
-// Checks whether or not {@code symbol} can follow the current state in the
-// ATN. The behavior of p.method is equivalent to the following, but is
-// implemented such that the complete context-sensitive follow set does not
-// need to be explicitly constructed.
-//
-//
-//
-// @param symbol the symbol type to check
-// @return {@code true} if {@code symbol} can follow the current state in
-// the ATN, otherwise {@code false}.
-
-func (p *BaseParser) IsExpectedToken(symbol int) bool {
- atn := p.Interpreter.atn
- ctx := p.ctx
- s := atn.states[p.state]
- following := atn.NextTokens(s, nil)
- if following.contains(symbol) {
- return true
- }
- if !following.contains(TokenEpsilon) {
- return false
- }
- for ctx != nil && ctx.GetInvokingState() >= 0 && following.contains(TokenEpsilon) {
- invokingState := atn.states[ctx.GetInvokingState()]
- rt := invokingState.GetTransitions()[0]
- following = atn.NextTokens(rt.(*RuleTransition).followState, nil)
- if following.contains(symbol) {
- return true
- }
- ctx = ctx.GetParent().(ParserRuleContext)
- }
- if following.contains(TokenEpsilon) && symbol == TokenEOF {
- return true
- }
-
- return false
-}
-
-// Computes the set of input symbols which could follow the current parser
-// state and context, as given by {@link //GetState} and {@link //GetContext},
-// respectively.
-//
-// @see ATN//getExpectedTokens(int, RuleContext)
-//
-func (p *BaseParser) GetExpectedTokens() *IntervalSet {
- return p.Interpreter.atn.getExpectedTokens(p.state, p.ctx)
-}
-
-func (p *BaseParser) GetExpectedTokensWithinCurrentRule() *IntervalSet {
- atn := p.Interpreter.atn
- s := atn.states[p.state]
- return atn.NextTokens(s, nil)
-}
-
-// Get a rule's index (i.e., {@code RULE_ruleName} field) or -1 if not found.//
-func (p *BaseParser) GetRuleIndex(ruleName string) int {
- var ruleIndex, ok = p.GetRuleIndexMap()[ruleName]
- if ok {
- return ruleIndex
- }
-
- return -1
-}
-
-// Return List<String> of the rule names in your parser instance
-// leading up to a call to the current rule. You could override if
-// you want more details such as the file/line info of where
-// in the ATN a rule is invoked.
-//
-// this very useful for error messages.
-
-func (p *BaseParser) GetRuleInvocationStack(c ParserRuleContext) []string {
- if c == nil {
- c = p.ctx
- }
- stack := make([]string, 0)
- for c != nil {
- // compute what follows who invoked us
- ruleIndex := c.GetRuleIndex()
- if ruleIndex < 0 {
- stack = append(stack, "n/a")
- } else {
- stack = append(stack, p.GetRuleNames()[ruleIndex])
- }
-
- vp := c.GetParent()
-
- if vp == nil {
- break
- }
-
- c = vp.(ParserRuleContext)
- }
- return stack
-}
-
-// For debugging and other purposes.//
-func (p *BaseParser) GetDFAStrings() string {
- return fmt.Sprint(p.Interpreter.decisionToDFA)
-}
-
-// For debugging and other purposes.//
-func (p *BaseParser) DumpDFA() {
- seenOne := false
- for _, dfa := range p.Interpreter.decisionToDFA {
- if dfa.numStates() > 0 {
- if seenOne {
- fmt.Println()
- }
- fmt.Println("Decision " + strconv.Itoa(dfa.decision) + ":")
- fmt.Print(dfa.String(p.LiteralNames, p.SymbolicNames))
- seenOne = true
- }
- }
-}
-
-func (p *BaseParser) GetSourceName() string {
- return p.GrammarFileName
-}
-
-// During a parse is sometimes useful to listen in on the rule entry and exit
-// events as well as token Matches. p.is for quick and dirty debugging.
-//
-func (p *BaseParser) SetTrace(trace *TraceListener) {
- if trace == nil {
- p.RemoveParseListener(p.tracer)
- p.tracer = nil
- } else {
- if p.tracer != nil {
- p.RemoveParseListener(p.tracer)
- }
- p.tracer = NewTraceListener(p)
- p.AddParseListener(p.tracer)
- }
-}
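
The AddParseListener machinery removed above (and recreated under `/v4`) delivers events while the parse is still running, which is what the warning about differing from ParseTreeWalker refers to. A sketch of registering such a listener, again assuming a hypothetical generated `mygrammar` package and start rule:

```go
package main

import (
	"fmt"

	"github.com/antlr/antlr4/runtime/Go/antlr/v4"

	"example.org/mygrammar" // hypothetical ANTLR-generated lexer/parser package
)

// ruleTracer satisfies antlr.ParseTreeListener; it receives events while the
// parse is in progress, as documented for AddParseListener above.
type ruleTracer struct{}

func (ruleTracer) VisitTerminal(node antlr.TerminalNode)      { fmt.Println("token:", node.GetText()) }
func (ruleTracer) VisitErrorNode(node antlr.ErrorNode)        { fmt.Println("error node:", node.GetText()) }
func (ruleTracer) EnterEveryRule(ctx antlr.ParserRuleContext) { fmt.Println("enter rule", ctx.GetRuleIndex()) }
func (ruleTracer) ExitEveryRule(ctx antlr.ParserRuleContext)  { fmt.Println("exit rule", ctx.GetRuleIndex()) }

func main() {
	input := antlr.NewInputStream("1 + 2")
	lexer := mygrammar.NewMyLexer(input)    // hypothetical generated constructor
	tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
	parser := mygrammar.NewMyParser(tokens) // hypothetical generated constructor

	// Listener events fire during the parse, before the tree is complete.
	parser.AddParseListener(ruleTracer{})
	parser.Expr() // hypothetical start rule
}
```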
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token.go
deleted file mode 100644
index 2d8e99095..000000000
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token.go
+++ /dev/null
@@ -1,210 +0,0 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-// Use of this file is governed by the BSD 3-clause license that
-// can be found in the LICENSE.txt file in the project root.
-
-package antlr
-
-import (
- "strconv"
- "strings"
-)
-
-type TokenSourceCharStreamPair struct {
- tokenSource TokenSource
- charStream CharStream
-}
-
-// A token has properties: text, type, line, character position in the line
-// (so we can ignore tabs), token channel, index, and source from which
-// we obtained this token.
-
-type Token interface {
- GetSource() *TokenSourceCharStreamPair
- GetTokenType() int
- GetChannel() int
- GetStart() int
- GetStop() int
- GetLine() int
- GetColumn() int
-
- GetText() string
- SetText(s string)
-
- GetTokenIndex() int
- SetTokenIndex(v int)
-
- GetTokenSource() TokenSource
- GetInputStream() CharStream
-}
-
-type BaseToken struct {
- source *TokenSourceCharStreamPair
- tokenType int // token type of the token
- channel int // The parser ignores everything not on DEFAULT_CHANNEL
- start int // optional return -1 if not implemented.
- stop int // optional return -1 if not implemented.
- tokenIndex int // from 0..n-1 of the token object in the input stream
- line int // line=1..n of the 1st character
- column int // beginning of the line at which it occurs, 0..n-1
- text string // text of the token.
- readOnly bool
-}
-
-const (
- TokenInvalidType = 0
-
- // During lookahead operations, this "token" signifies we hit rule end ATN state
- // and did not follow it despite needing to.
- TokenEpsilon = -2
-
- TokenMinUserTokenType = 1
-
- TokenEOF = -1
-
- // All tokens go to the parser (unless Skip() is called in that rule)
- // on a particular "channel". The parser tunes to a particular channel
- // so that whitespace etc... can go to the parser on a "hidden" channel.
-
- TokenDefaultChannel = 0
-
- // Anything on different channel than DEFAULT_CHANNEL is not parsed
- // by parser.
-
- TokenHiddenChannel = 1
-)
-
-func (b *BaseToken) GetChannel() int {
- return b.channel
-}
-
-func (b *BaseToken) GetStart() int {
- return b.start
-}
-
-func (b *BaseToken) GetStop() int {
- return b.stop
-}
-
-func (b *BaseToken) GetLine() int {
- return b.line
-}
-
-func (b *BaseToken) GetColumn() int {
- return b.column
-}
-
-func (b *BaseToken) GetTokenType() int {
- return b.tokenType
-}
-
-func (b *BaseToken) GetSource() *TokenSourceCharStreamPair {
- return b.source
-}
-
-func (b *BaseToken) GetTokenIndex() int {
- return b.tokenIndex
-}
-
-func (b *BaseToken) SetTokenIndex(v int) {
- b.tokenIndex = v
-}
-
-func (b *BaseToken) GetTokenSource() TokenSource {
- return b.source.tokenSource
-}
-
-func (b *BaseToken) GetInputStream() CharStream {
- return b.source.charStream
-}
-
-type CommonToken struct {
- *BaseToken
-}
-
-func NewCommonToken(source *TokenSourceCharStreamPair, tokenType, channel, start, stop int) *CommonToken {
-
- t := new(CommonToken)
-
- t.BaseToken = new(BaseToken)
-
- t.source = source
- t.tokenType = tokenType
- t.channel = channel
- t.start = start
- t.stop = stop
- t.tokenIndex = -1
- if t.source.tokenSource != nil {
- t.line = source.tokenSource.GetLine()
- t.column = source.tokenSource.GetCharPositionInLine()
- } else {
- t.column = -1
- }
- return t
-}
-
-// An empty {@link Pair} which is used as the default value of
-// {@link //source} for tokens that do not have a source.
-
-//CommonToken.EMPTY_SOURCE = [ nil, nil ]
-
-// Constructs a New{@link CommonToken} as a copy of another {@link Token}.
-//
-//
-// If {@code oldToken} is also a {@link CommonToken} instance, the newly
-// constructed token will share a reference to the {@link //text} field and
-// the {@link Pair} stored in {@link //source}. Otherwise, {@link //text} will
-// be assigned the result of calling {@link //GetText}, and {@link //source}
-// will be constructed from the result of {@link Token//GetTokenSource} and
-// {@link Token//GetInputStream}.
-//
-// @param oldToken The token to copy.
-//
-func (c *CommonToken) clone() *CommonToken {
- t := NewCommonToken(c.source, c.tokenType, c.channel, c.start, c.stop)
- t.tokenIndex = c.GetTokenIndex()
- t.line = c.GetLine()
- t.column = c.GetColumn()
- t.text = c.GetText()
- return t
-}
-
-func (c *CommonToken) GetText() string {
- if c.text != "" {
- return c.text
- }
- input := c.GetInputStream()
- if input == nil {
- return ""
- }
- n := input.Size()
- if c.start < n && c.stop < n {
- return input.GetTextFromInterval(NewInterval(c.start, c.stop))
- }
- return ""
-}
-
-func (c *CommonToken) SetText(text string) {
- c.text = text
-}
-
-func (c *CommonToken) String() string {
- txt := c.GetText()
- if txt != "" {
- txt = strings.Replace(txt, "\n", "\\n", -1)
- txt = strings.Replace(txt, "\r", "\\r", -1)
- txt = strings.Replace(txt, "\t", "\\t", -1)
- } else {
- txt = ""
- }
-
- var ch string
- if c.channel > 0 {
- ch = ",channel=" + strconv.Itoa(c.channel)
- } else {
- ch = ""
- }
-
- return "[@" + strconv.Itoa(c.tokenIndex) + "," + strconv.Itoa(c.start) + ":" + strconv.Itoa(c.stop) + "='" +
- txt + "',<" + strconv.Itoa(c.tokenType) + ">" +
- ch + "," + strconv.Itoa(c.line) + ":" + strconv.Itoa(c.column) + "]"
-}
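
The Token interface shown above is unchanged apart from its move to the `/v4` import path; its accessors are typically consumed by filling a CommonTokenStream and iterating the buffered tokens. A sketch, with the generated lexer name being an assumption:

```go
package main

import (
	"fmt"

	"github.com/antlr/antlr4/runtime/Go/antlr/v4"

	"example.org/mygrammar" // hypothetical ANTLR-generated lexer package
)

func main() {
	input := antlr.NewInputStream("a = 1\n")
	lexer := mygrammar.NewMyLexer(input) // hypothetical generated constructor

	tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
	tokens.Fill() // buffer every token up to EOF

	for _, t := range tokens.GetAllTokens() {
		// All accessors below come from the Token interface shown above.
		fmt.Printf("%d:%d type=%d channel=%d %q\n",
			t.GetLine(), t.GetColumn(), t.GetTokenType(), t.GetChannel(), t.GetText())
	}
}
```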
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/tokenstream_rewriter.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/tokenstream_rewriter.go
deleted file mode 100644
index 96a03f02a..000000000
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/tokenstream_rewriter.go
+++ /dev/null
@@ -1,649 +0,0 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-// Use of this file is governed by the BSD 3-clause license that
-// can be found in the LICENSE.txt file in the project root.
-package antlr
-
-import (
-"bytes"
-"fmt"
-)
-
-
-//
-// Useful for rewriting out a buffered input token stream after doing some
-// augmentation or other manipulations on it.
-
-//
-// You can insert stuff, replace, and delete chunks. Note that the operations
-// are done lazily--only if you convert the buffer to a {@link String} with
-// {@link TokenStream#getText()}. This is very efficient because you are not
-// moving data around all the time. As the buffer of tokens is converted to
-// strings, the {@link #getText()} method(s) scan the input token stream and
-// check to see if there is an operation at the current index. If so, the
-// operation is done and then normal {@link String} rendering continues on the
-// buffer. This is like having multiple Turing machine instruction streams
-// (programs) operating on a single input tape. :)
-//
-
-// This rewriter makes no modifications to the token stream. It does not ask the
-// stream to fill itself up nor does it advance the input cursor. The token
-// stream {@link TokenStream#index()} will return the same value before and
-// after any {@link #getText()} call.
-
-//
-// The rewriter only works on tokens that you have in the buffer and ignores the
-// current input cursor. If you are buffering tokens on-demand, calling
-// {@link #getText()} halfway through the input will only do rewrites for those
-// tokens in the first half of the file.
-
-//
-// Since the operations are done lazily at {@link #getText}-time, operations do
-// not screw up the token index values. That is, an insert operation at token
-// index {@code i} does not change the index values for tokens
-// {@code i}+1..n-1.
-
-//
-// Because operations never actually alter the buffer, you may always get the
-// original token stream back without undoing anything. Since the instructions
-// are queued up, you can easily simulate transactions and roll back any changes
-// if there is an error just by removing instructions. For example,
-
-//
-// CharStream input = new ANTLRFileStream("input");
-// TLexer lex = new TLexer(input);
-// CommonTokenStream tokens = new CommonTokenStream(lex);
-// T parser = new T(tokens);
-// TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
-// parser.startRule();
-//
-
-//
-// Then in the rules, you can execute (assuming rewriter is visible):
-
-//
-// Token t,u;
-// ...
-// rewriter.insertAfter(t, "text to put after t");}
-// rewriter.insertAfter(u, "text after u");}
-// System.out.println(rewriter.getText());
-//
-
-//
-// You can also have multiple "instruction streams" and get multiple rewrites
-// from a single pass over the input. Just name the instruction streams and use
-// that name again when printing the buffer. This could be useful for generating
-// a C file and also its header file--all from the same buffer:
-
-//
-// rewriter.insertAfter("pass1", t, "text to put after t");}
-// rewriter.insertAfter("pass2", u, "text after u");}
-// System.out.println(rewriter.getText("pass1"));
-// System.out.println(rewriter.getText("pass2"));
-//
-
-//
-// If you don't use named rewrite streams, a "default" stream is used as the
-// first example shows.
-
-
-
-const(
- Default_Program_Name = "default"
- Program_Init_Size = 100
- Min_Token_Index = 0
-)
-
-// Define the rewrite operation hierarchy
-
-type RewriteOperation interface {
- // Execute the rewrite operation by possibly adding to the buffer.
- // Return the index of the next token to operate on.
- Execute(buffer *bytes.Buffer) int
- String() string
- GetInstructionIndex() int
- GetIndex() int
- GetText() string
- GetOpName() string
- GetTokens() TokenStream
- SetInstructionIndex(val int)
- SetIndex(int)
- SetText(string)
- SetOpName(string)
- SetTokens(TokenStream)
-}
-
-type BaseRewriteOperation struct {
- //Current index of rewrites list
- instruction_index int
- //Token buffer index
- index int
- //Substitution text
- text string
- //Actual operation name
- op_name string
- //Pointer to token steam
- tokens TokenStream
-}
-
-func (op *BaseRewriteOperation)GetInstructionIndex() int{
- return op.instruction_index
-}
-
-func (op *BaseRewriteOperation)GetIndex() int{
- return op.index
-}
-
-func (op *BaseRewriteOperation)GetText() string{
- return op.text
-}
-
-func (op *BaseRewriteOperation)GetOpName() string{
- return op.op_name
-}
-
-func (op *BaseRewriteOperation)GetTokens() TokenStream{
- return op.tokens
-}
-
-func (op *BaseRewriteOperation)SetInstructionIndex(val int){
- op.instruction_index = val
-}
-
-func (op *BaseRewriteOperation)SetIndex(val int) {
- op.index = val
-}
-
-func (op *BaseRewriteOperation)SetText(val string){
- op.text = val
-}
-
-func (op *BaseRewriteOperation)SetOpName(val string){
- op.op_name = val
-}
-
-func (op *BaseRewriteOperation)SetTokens(val TokenStream) {
- op.tokens = val
-}
-
-
-func (op *BaseRewriteOperation) Execute(buffer *bytes.Buffer) int{
- return op.index
-}
-
-func (op *BaseRewriteOperation) String() string {
- return fmt.Sprintf("<%s@%d:\"%s\">",
- op.op_name,
- op.tokens.Get(op.GetIndex()),
- op.text,
- )
-
-}
-
-
-type InsertBeforeOp struct {
- BaseRewriteOperation
-}
-
-func NewInsertBeforeOp(index int, text string, stream TokenStream) *InsertBeforeOp{
- return &InsertBeforeOp{BaseRewriteOperation:BaseRewriteOperation{
- index:index,
- text:text,
- op_name:"InsertBeforeOp",
- tokens:stream,
- }}
-}
-
-func (op *InsertBeforeOp) Execute(buffer *bytes.Buffer) int{
- buffer.WriteString(op.text)
- if op.tokens.Get(op.index).GetTokenType() != TokenEOF{
- buffer.WriteString(op.tokens.Get(op.index).GetText())
- }
- return op.index+1
-}
-
-func (op *InsertBeforeOp) String() string {
- return op.BaseRewriteOperation.String()
-}
-
-// Distinguish between insert after/before to do the "insert afters"
-// first and then the "insert befores" at same index. Implementation
-// of "insert after" is "insert before index+1".
-
-type InsertAfterOp struct {
- BaseRewriteOperation
-}
-
-func NewInsertAfterOp(index int, text string, stream TokenStream) *InsertAfterOp{
- return &InsertAfterOp{BaseRewriteOperation:BaseRewriteOperation{
- index:index+1,
- text:text,
- tokens:stream,
- }}
-}
-
-func (op *InsertAfterOp) Execute(buffer *bytes.Buffer) int {
- buffer.WriteString(op.text)
- if op.tokens.Get(op.index).GetTokenType() != TokenEOF{
- buffer.WriteString(op.tokens.Get(op.index).GetText())
- }
- return op.index+1
-}
-
-func (op *InsertAfterOp) String() string {
- return op.BaseRewriteOperation.String()
-}
-
-// I'm going to try replacing range from x..y with (y-x)+1 ReplaceOp
-// instructions.
-type ReplaceOp struct{
- BaseRewriteOperation
- LastIndex int
-}
-
-func NewReplaceOp(from, to int, text string, stream TokenStream)*ReplaceOp {
- return &ReplaceOp{
- BaseRewriteOperation:BaseRewriteOperation{
- index:from,
- text:text,
- op_name:"ReplaceOp",
- tokens:stream,
- },
- LastIndex:to,
- }
-}
-
-func (op *ReplaceOp)Execute(buffer *bytes.Buffer) int{
- if op.text != ""{
- buffer.WriteString(op.text)
- }
- return op.LastIndex +1
-}
-
-func (op *ReplaceOp) String() string {
- if op.text == "" {
-		return fmt.Sprintf("<DeleteOp@%v..%v>",
- op.tokens.Get(op.index), op.tokens.Get(op.LastIndex))
- }
-	return fmt.Sprintf("<ReplaceOp@%v..%v:\"%v\">",
- op.tokens.Get(op.index), op.tokens.Get(op.LastIndex), op.text)
-}
-
-
-type TokenStreamRewriter struct {
- //Our source stream
- tokens TokenStream
- // You may have multiple, named streams of rewrite operations.
- // I'm calling these things "programs."
- // Maps String (name) → rewrite (List)
- programs map[string][]RewriteOperation
- last_rewrite_token_indexes map[string]int
-}
-
-func NewTokenStreamRewriter(tokens TokenStream) *TokenStreamRewriter{
- return &TokenStreamRewriter{
- tokens: tokens,
- programs: map[string][]RewriteOperation{
- Default_Program_Name:make([]RewriteOperation,0, Program_Init_Size),
- },
- last_rewrite_token_indexes: map[string]int{},
- }
-}
-
-func (tsr *TokenStreamRewriter) GetTokenStream() TokenStream{
- return tsr.tokens
-}
-
-// Rollback the instruction stream for a program so that
-// the indicated instruction (via instructionIndex) is no
-// longer in the stream. UNTESTED!
-func (tsr *TokenStreamRewriter) Rollback(program_name string, instruction_index int){
- is, ok := tsr.programs[program_name]
- if ok{
- tsr.programs[program_name] = is[Min_Token_Index:instruction_index]
- }
-}
-
-func (tsr *TokenStreamRewriter) RollbackDefault(instruction_index int){
- tsr.Rollback(Default_Program_Name, instruction_index)
-}
-//Reset the program so that no instructions exist
-func (tsr *TokenStreamRewriter) DeleteProgram(program_name string){
- tsr.Rollback(program_name, Min_Token_Index) //TODO: double test on that cause lower bound is not included
-}
-
-func (tsr *TokenStreamRewriter) DeleteProgramDefault(){
- tsr.DeleteProgram(Default_Program_Name)
-}
-
-func (tsr *TokenStreamRewriter) InsertAfter(program_name string, index int, text string){
- // to insert after, just insert before next index (even if past end)
- var op RewriteOperation = NewInsertAfterOp(index, text, tsr.tokens)
- rewrites := tsr.GetProgram(program_name)
- op.SetInstructionIndex(len(rewrites))
- tsr.AddToProgram(program_name, op)
-}
-
-func (tsr *TokenStreamRewriter) InsertAfterDefault(index int, text string){
- tsr.InsertAfter(Default_Program_Name, index, text)
-}
-
-func (tsr *TokenStreamRewriter) InsertAfterToken(program_name string, token Token, text string){
- tsr.InsertAfter(program_name, token.GetTokenIndex(), text)
-}
-
-func (tsr* TokenStreamRewriter) InsertBefore(program_name string, index int, text string){
- var op RewriteOperation = NewInsertBeforeOp(index, text, tsr.tokens)
- rewrites := tsr.GetProgram(program_name)
- op.SetInstructionIndex(len(rewrites))
- tsr.AddToProgram(program_name, op)
-}
-
-func (tsr *TokenStreamRewriter) InsertBeforeDefault(index int, text string){
- tsr.InsertBefore(Default_Program_Name, index, text)
-}
-
-func (tsr *TokenStreamRewriter) InsertBeforeToken(program_name string,token Token, text string){
- tsr.InsertBefore(program_name, token.GetTokenIndex(), text)
-}
-
-func (tsr *TokenStreamRewriter) Replace(program_name string, from, to int, text string){
- if from > to || from < 0 || to < 0 || to >= tsr.tokens.Size(){
- panic(fmt.Sprintf("replace: range invalid: %d..%d(size=%d)",
- from, to, tsr.tokens.Size()))
- }
- var op RewriteOperation = NewReplaceOp(from, to, text, tsr.tokens)
- rewrites := tsr.GetProgram(program_name)
- op.SetInstructionIndex(len(rewrites))
- tsr.AddToProgram(program_name, op)
-}
-
-func (tsr *TokenStreamRewriter)ReplaceDefault(from, to int, text string) {
- tsr.Replace(Default_Program_Name, from, to, text)
-}
-
-func (tsr *TokenStreamRewriter)ReplaceDefaultPos(index int, text string){
- tsr.ReplaceDefault(index, index, text)
-}
-
-func (tsr *TokenStreamRewriter)ReplaceToken(program_name string, from, to Token, text string){
- tsr.Replace(program_name, from.GetTokenIndex(), to.GetTokenIndex(), text)
-}
-
-func (tsr *TokenStreamRewriter)ReplaceTokenDefault(from, to Token, text string){
- tsr.ReplaceToken(Default_Program_Name, from, to, text)
-}
-
-func (tsr *TokenStreamRewriter)ReplaceTokenDefaultPos(index Token, text string){
- tsr.ReplaceTokenDefault(index, index, text)
-}
-
-func (tsr *TokenStreamRewriter)Delete(program_name string, from, to int){
- tsr.Replace(program_name, from, to, "" )
-}
-
-func (tsr *TokenStreamRewriter)DeleteDefault(from, to int){
- tsr.Delete(Default_Program_Name, from, to)
-}
-
-func (tsr *TokenStreamRewriter)DeleteDefaultPos(index int){
- tsr.DeleteDefault(index,index)
-}
-
-func (tsr *TokenStreamRewriter)DeleteToken(program_name string, from, to Token) {
- tsr.ReplaceToken(program_name, from, to, "")
-}
-
-func (tsr *TokenStreamRewriter)DeleteTokenDefault(from,to Token){
- tsr.DeleteToken(Default_Program_Name, from, to)
-}
-
-func (tsr *TokenStreamRewriter)GetLastRewriteTokenIndex(program_name string)int {
- i, ok := tsr.last_rewrite_token_indexes[program_name]
- if !ok{
- return -1
- }
- return i
-}
-
-func (tsr *TokenStreamRewriter)GetLastRewriteTokenIndexDefault()int{
- return tsr.GetLastRewriteTokenIndex(Default_Program_Name)
-}
-
-func (tsr *TokenStreamRewriter)SetLastRewriteTokenIndex(program_name string, i int){
- tsr.last_rewrite_token_indexes[program_name] = i
-}
-
-func (tsr *TokenStreamRewriter)InitializeProgram(name string)[]RewriteOperation{
- is := make([]RewriteOperation, 0, Program_Init_Size)
- tsr.programs[name] = is
- return is
-}
-
-func (tsr *TokenStreamRewriter)AddToProgram(name string, op RewriteOperation){
- is := tsr.GetProgram(name)
- is = append(is, op)
- tsr.programs[name] = is
-}
-
-func (tsr *TokenStreamRewriter)GetProgram(name string) []RewriteOperation {
- is, ok := tsr.programs[name]
- if !ok{
- is = tsr.InitializeProgram(name)
- }
- return is
-}
-// Return the text from the original tokens altered per the
-// instructions given to this rewriter.
-func (tsr *TokenStreamRewriter)GetTextDefault() string{
- return tsr.GetText(
- Default_Program_Name,
- NewInterval(0, tsr.tokens.Size()-1))
-}
-// Return the text from the original tokens altered per the
-// instructions given to this rewriter.
-func (tsr *TokenStreamRewriter)GetText(program_name string, interval *Interval) string {
- rewrites := tsr.programs[program_name]
- start := interval.Start
- stop := interval.Stop
- // ensure start/end are in range
- stop = min(stop, tsr.tokens.Size()-1)
- start = max(start,0)
- if rewrites == nil || len(rewrites) == 0{
- return tsr.tokens.GetTextFromInterval(interval) // no instructions to execute
- }
- buf := bytes.Buffer{}
- // First, optimize instruction stream
- indexToOp := reduceToSingleOperationPerIndex(rewrites)
- // Walk buffer, executing instructions and emitting tokens
-	for i:=start; i<=stop && i<tsr.tokens.Size();{
-		op := indexToOp[i]
-		delete(indexToOp, i)// remove so any left have index size-1
-		t := tsr.tokens.Get(i)
-		if op == nil{
-			// no operation at that index, just dump token
-			if t.GetTokenType() != TokenEOF {buf.WriteString(t.GetText())}
-			i++ // move to next token
-		}else {
-			i = op.Execute(&buf)// execute operation and skip
-		}
-	}
-	// include stuff after end if it's last index in buffer
-	// So, if they did an insertAfter(lastValidIndex, "foo"), include
-	// foo if end==lastValidIndex.
-	if stop == tsr.tokens.Size()-1{
-		// Scan any remaining operations after last token
-		// should be included (they will be inserts).
-		for _, op := range indexToOp{
-			if op.GetIndex() >= tsr.tokens.Size()-1 {buf.WriteString(op.GetText())}
- }
- }
- return buf.String()
-}
-
-// We need to combine operations and report invalid operations (like
-// overlapping replaces that are not completed nested). Inserts to
-// same index need to be combined etc... Here are the cases:
-//
-// I.i.u I.j.v leave alone, nonoverlapping
-// I.i.u I.i.v combine: Iivu
-//
-// R.i-j.u R.x-y.v | i-j in x-y delete first R
-// R.i-j.u R.i-j.v delete first R
-// R.i-j.u R.x-y.v | x-y in i-j ERROR
-// R.i-j.u R.x-y.v | boundaries overlap ERROR
-//
-// Delete special case of replace (text==null):
-// D.i-j.u D.x-y.v | boundaries overlap combine to max(min)..max(right)
-//
-// I.i.u R.x-y.v | i in (x+1)-y delete I (since insert before
-// we're not deleting i)
-// I.i.u R.x-y.v | i not in (x+1)-y leave alone, nonoverlapping
-// R.x-y.v I.i.u | i in x-y ERROR
-// R.x-y.v I.x.u R.x-y.uv (combine, delete I)
-// R.x-y.v I.i.u | i not in x-y leave alone, nonoverlapping
-//
-// I.i.u = insert u before op @ index i
-// R.x-y.u = replace x-y indexed tokens with u
-//
-// First we need to examine replaces. For any replace op:
-//
-// 1. wipe out any insertions before op within that range.
-// 2. Drop any replace op before that is contained completely within
-// that range.
-// 3. Throw exception upon boundary overlap with any previous replace.
-//
-// Then we can deal with inserts:
-//
-// 1. for any inserts to same index, combine even if not adjacent.
-// 2. for any prior replace with same left boundary, combine this
-// insert with replace and delete this replace.
-// 3. throw exception if index in same range as previous replace
-//
-// Don't actually delete; make op null in list. Easier to walk list.
-// Later we can throw as we add to index → op map.
-//
-// Note that I.2 R.2-2 will wipe out I.2 even though, technically, the
-// inserted stuff would be before the replace range. But, if you
-// add tokens in front of a method body '{' and then delete the method
-// body, I think the stuff before the '{' you added should disappear too.
-//
-// Return a map from token index to operation.
-//
-func reduceToSingleOperationPerIndex(rewrites []RewriteOperation) map[int]RewriteOperation{
- // WALK REPLACES
- for i:=0; i < len(rewrites); i++{
- op := rewrites[i]
- if op == nil{continue}
- rop, ok := op.(*ReplaceOp)
- if !ok{continue}
- // Wipe prior inserts within range
-		for j:=0; j<i && j < len(rewrites); j++{
-			if iop, ok := rewrites[j].(*InsertBeforeOp); ok{
-				if iop.index == rop.index{
-					// E.g., insert before 2, delete 2..2; update replace
-					// text to include insert before, kill insert
-					rewrites[iop.instruction_index] = nil
-					if rop.text != ""{
-						rop.text = iop.text + rop.text
-					}else{
-						rop.text = iop.text
-					}
-				}else if iop.index > rop.index && iop.index <=rop.LastIndex{
- // delete insert as it's a no-op.
- rewrites[iop.instruction_index] = nil
- }
- }
- }
- // Drop any prior replaces contained within
-	for j:=0; j<i && j < len(rewrites); j++{
-		if prevop, ok := rewrites[j].(*ReplaceOp); ok{
-			if prevop.index>=rop.index && prevop.LastIndex <= rop.LastIndex{
- // delete replace as it's a no-op.
- rewrites[prevop.instruction_index] = nil
- continue
- }
- // throw exception unless disjoint or identical
- disjoint := prevop.LastIndex < rop.index || prevop.index > rop.LastIndex
- // Delete special case of replace (text==null):
- // D.i-j.u D.x-y.v | boundaries overlap combine to max(min)..max(right)
- if prevop.text == "" && rop.text == "" && !disjoint{
- rewrites[prevop.instruction_index] = nil
- rop.index = min(prevop.index, rop.index)
- rop.LastIndex = max(prevop.LastIndex, rop.LastIndex)
- println("new rop" + rop.String()) //TODO: remove console write, taken from Java version
- }else if !disjoint{
- panic("replace op boundaries of " + rop.String() + " overlap with previous " + prevop.String())
- }
- }
- }
- }
- // WALK INSERTS
- for i:=0; i < len(rewrites); i++ {
- op := rewrites[i]
- if op == nil{continue}
- //hack to replicate inheritance in composition
- _, iok := rewrites[i].(*InsertBeforeOp)
- _, aok := rewrites[i].(*InsertAfterOp)
- if !iok && !aok{continue}
- iop := rewrites[i]
- // combine current insert with prior if any at same index
- // deviating a bit from TokenStreamRewriter.java - hard to incorporate inheritance logic
-		for j:=0; j<i && j < len(rewrites); j++{
-			if nextIop, ok := rewrites[j].(*InsertAfterOp); ok{
-				if nextIop.index == iop.GetIndex(){
-					iop.SetText(nextIop.text + iop.GetText())
-					rewrites[j] = nil
-				}
-			}
-			if prevIop, ok := rewrites[j].(*InsertBeforeOp); ok{
-				if prevIop.index == iop.GetIndex(){
-					iop.SetText(iop.GetText() + prevIop.text)
-					rewrites[prevIop.instruction_index] = nil
-				}
-			}
-		}
-		// look for replaces where iop.index is in range; error
-		for j:=0; j<i && j < len(rewrites); j++{
-			if rop, ok := rewrites[j].(*ReplaceOp); ok{
-				if iop.GetIndex() == rop.index{
-					rop.text = iop.GetText() + rop.text
-					rewrites[i] = nil
-					continue
-				}
-				if iop.GetIndex() >= rop.index && iop.GetIndex() <= rop.LastIndex{
- panic("insert op "+iop.String()+" within boundaries of previous "+rop.String())
- }
- }
- }
- }
- m := map[int]RewriteOperation{}
- for i:=0; i < len(rewrites); i++{
- op := rewrites[i]
- if op == nil {continue}
- if _, ok := m[op.GetIndex()]; ok{
- panic("should only be one op per index")
- }
- m[op.GetIndex()] = op
- }
- return m
-}
-
-
-/*
- Quick fixing Go lack of overloads
- */
-
-func max(a,b int)int{
- if a>b{
- return a
- }else {
- return b
- }
-}
-func min(a,b int)int{
-	if a<b{
-		return a
-	}else {
-		return b
-	}
-}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn.go
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn.go
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn.go
+func (a *ATN) getExpectedTokens(stateNumber int, ctx RuleContext) *IntervalSet {
+	if stateNumber < 0 || stateNumber >= len(a.states) {
+ panic("Invalid state number.")
+ }
+
+ s := a.states[stateNumber]
+ following := a.NextTokens(s, nil)
+
+ if !following.contains(TokenEpsilon) {
+ return following
+ }
+
+ expected := NewIntervalSet()
+
+ expected.addSet(following)
+ expected.removeOne(TokenEpsilon)
+
+ for ctx != nil && ctx.GetInvokingState() >= 0 && following.contains(TokenEpsilon) {
+ invokingState := a.states[ctx.GetInvokingState()]
+ rt := invokingState.GetTransitions()[0]
+
+ following = a.NextTokens(rt.(*RuleTransition).followState, nil)
+ expected.addSet(following)
+ expected.removeOne(TokenEpsilon)
+ ctx = ctx.GetParent().(RuleContext)
+ }
+
+ if following.contains(TokenEpsilon) {
+ expected.addOne(TokenEOF)
+ }
+
+ return expected
+}
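
The TokenStreamRewriter deleted earlier in this diff also reappears under `/v4` with the same method names (InsertBeforeDefault, ReplaceDefaultPos, GetTextDefault, and so on). A sketch of the lazy-edit workflow its doc comment describes, assuming a hypothetical generated lexer and token indexes chosen purely for illustration:

```go
package main

import (
	"fmt"

	"github.com/antlr/antlr4/runtime/Go/antlr/v4"

	"example.org/mygrammar" // hypothetical ANTLR-generated lexer package
)

func main() {
	input := antlr.NewInputStream("int x = 1 ;")
	lexer := mygrammar.NewMyLexer(input) // hypothetical generated constructor

	tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
	tokens.Fill()

	// Edits are queued lazily; the token stream itself is never modified and
	// the rewritten text is only assembled when GetTextDefault is called.
	rewriter := antlr.NewTokenStreamRewriter(tokens)
	rewriter.InsertBeforeDefault(0, "// generated\n")
	rewriter.ReplaceDefaultPos(2, "y") // token index 2 assumed to hold "x"

	fmt.Println(rewriter.GetTextDefault())
}
```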
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_config.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_config.go
similarity index 84%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_config.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_config.go
index 97ba417f7..7619fa172 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_config.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_config.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -8,19 +8,14 @@ import (
"fmt"
)
-type comparable interface {
- equals(other interface{}) bool
-}
-
// ATNConfig is a tuple: (ATN state, predicted alt, syntactic, semantic
// context). The syntactic context is a graph-structured stack node whose
// path(s) to the root is the rule invocation(s) chain used to arrive at the
// state. The semantic context is the tree of semantic predicates encountered
// before reaching an ATN state.
type ATNConfig interface {
- comparable
-
- hash() int
+ Equals(o Collectable[ATNConfig]) bool
+ Hash() int
GetState() ATNState
GetAlt() int
@@ -47,7 +42,7 @@ type BaseATNConfig struct {
reachesIntoOuterContext int
}
-func NewBaseATNConfig7(old *BaseATNConfig) *BaseATNConfig { // TODO: Dup
+func NewBaseATNConfig7(old *BaseATNConfig) ATNConfig { // TODO: Dup
return &BaseATNConfig{
state: old.state,
alt: old.alt,
@@ -135,11 +130,16 @@ func (b *BaseATNConfig) SetReachesIntoOuterContext(v int) {
b.reachesIntoOuterContext = v
}
+// Equals is the default comparison function for an ATNConfig when no specialist implementation is required
+// for a collection.
+//
// An ATN configuration is equal to another if both have the same state, they
// predict the same alternative, and syntactic/semantic contexts are the same.
-func (b *BaseATNConfig) equals(o interface{}) bool {
+func (b *BaseATNConfig) Equals(o Collectable[ATNConfig]) bool {
if b == o {
return true
+ } else if o == nil {
+ return false
}
var other, ok = o.(*BaseATNConfig)
@@ -153,30 +153,32 @@ func (b *BaseATNConfig) equals(o interface{}) bool {
if b.context == nil {
equal = other.context == nil
} else {
- equal = b.context.equals(other.context)
+ equal = b.context.Equals(other.context)
}
var (
nums = b.state.GetStateNumber() == other.state.GetStateNumber()
alts = b.alt == other.alt
- cons = b.semanticContext.equals(other.semanticContext)
+ cons = b.semanticContext.Equals(other.semanticContext)
sups = b.precedenceFilterSuppressed == other.precedenceFilterSuppressed
)
return nums && alts && cons && sups && equal
}
-func (b *BaseATNConfig) hash() int {
+// Hash is the default hash function for BaseATNConfig, when no specialist hash function
+// is required for a collection
+func (b *BaseATNConfig) Hash() int {
var c int
if b.context != nil {
- c = b.context.hash()
+ c = b.context.Hash()
}
h := murmurInit(7)
h = murmurUpdate(h, b.state.GetStateNumber())
h = murmurUpdate(h, b.alt)
h = murmurUpdate(h, c)
- h = murmurUpdate(h, b.semanticContext.hash())
+ h = murmurUpdate(h, b.semanticContext.Hash())
return murmurFinish(h, 4)
}
@@ -243,7 +245,9 @@ func NewLexerATNConfig1(state ATNState, alt int, context PredictionContext) *Lex
return &LexerATNConfig{BaseATNConfig: NewBaseATNConfig5(state, alt, context, SemanticContextNone)}
}
-func (l *LexerATNConfig) hash() int {
+// Hash is the default hash function for LexerATNConfig objects, it can be used directly or via
+// the default comparator [ObjEqComparator].
+func (l *LexerATNConfig) Hash() int {
var f int
if l.passedThroughNonGreedyDecision {
f = 1
@@ -253,15 +257,20 @@ func (l *LexerATNConfig) hash() int {
h := murmurInit(7)
h = murmurUpdate(h, l.state.GetStateNumber())
h = murmurUpdate(h, l.alt)
- h = murmurUpdate(h, l.context.hash())
- h = murmurUpdate(h, l.semanticContext.hash())
+ h = murmurUpdate(h, l.context.Hash())
+ h = murmurUpdate(h, l.semanticContext.Hash())
h = murmurUpdate(h, f)
- h = murmurUpdate(h, l.lexerActionExecutor.hash())
+ h = murmurUpdate(h, l.lexerActionExecutor.Hash())
h = murmurFinish(h, 6)
return h
}
-func (l *LexerATNConfig) equals(other interface{}) bool {
+// Equals is the default comparison function for LexerATNConfig objects, it can be used directly or via
+// the default comparator [ObjEqComparator].
+func (l *LexerATNConfig) Equals(other Collectable[ATNConfig]) bool {
+ if l == other {
+ return true
+ }
var othert, ok = other.(*LexerATNConfig)
if l == other {
@@ -275,7 +284,7 @@ func (l *LexerATNConfig) equals(other interface{}) bool {
var b bool
if l.lexerActionExecutor != nil {
- b = !l.lexerActionExecutor.equals(othert.lexerActionExecutor)
+ b = !l.lexerActionExecutor.Equals(othert.lexerActionExecutor)
} else {
b = othert.lexerActionExecutor != nil
}
@@ -284,10 +293,9 @@ func (l *LexerATNConfig) equals(other interface{}) bool {
return false
}
- return l.BaseATNConfig.equals(othert.BaseATNConfig)
+ return l.BaseATNConfig.Equals(othert.BaseATNConfig)
}
-
func checkNonGreedyDecision(source *LexerATNConfig, target ATNState) bool {
var ds, ok = target.(DecisionState)
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_config_set.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_config_set.go
similarity index 81%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_config_set.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_config_set.go
index 49ad4a719..43e9b33f3 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_config_set.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_config_set.go
@@ -1,24 +1,25 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
package antlr
-import "fmt"
+import (
+ "fmt"
+)
type ATNConfigSet interface {
- hash() int
+ Hash() int
+ Equals(o Collectable[ATNConfig]) bool
Add(ATNConfig, *DoubleDict) bool
AddAll([]ATNConfig) bool
- GetStates() Set
+ GetStates() *JStore[ATNState, Comparator[ATNState]]
GetPredicates() []SemanticContext
GetItems() []ATNConfig
OptimizeConfigs(interpreter *BaseATNSimulator)
- Equals(other interface{}) bool
-
Length() int
IsEmpty() bool
Contains(ATNConfig) bool
@@ -57,7 +58,7 @@ type BaseATNConfigSet struct {
// effectively doubles the number of objects associated with ATNConfigs. All
// keys are hashed by (s, i, _, pi), not including the context. Wiped out when
// read-only because a set becomes a DFA state.
- configLookup Set
+ configLookup *JStore[ATNConfig, Comparator[ATNConfig]]
// configs is the added elements.
configs []ATNConfig
@@ -83,7 +84,7 @@ type BaseATNConfigSet struct {
// readOnly is whether it is read-only. Do not
// allow any code to manipulate the set if true because DFA states will point at
- // sets and those must not change. It not protect other fields; conflictingAlts
+ // sets and those must not change. It not, protect other fields; conflictingAlts
// in particular, which is assigned after readOnly.
readOnly bool
@@ -104,7 +105,7 @@ func (b *BaseATNConfigSet) Alts() *BitSet {
func NewBaseATNConfigSet(fullCtx bool) *BaseATNConfigSet {
return &BaseATNConfigSet{
cachedHash: -1,
- configLookup: newArray2DHashSetWithCap(hashATNConfig, equalATNConfigs, 16, 2),
+ configLookup: NewJStore[ATNConfig, Comparator[ATNConfig]](aConfCompInst),
fullCtx: fullCtx,
}
}
@@ -126,9 +127,11 @@ func (b *BaseATNConfigSet) Add(config ATNConfig, mergeCache *DoubleDict) bool {
b.dipsIntoOuterContext = true
}
- existing := b.configLookup.Add(config).(ATNConfig)
+ existing, present := b.configLookup.Put(config)
- if existing == config {
+ // The config was not already in the set
+ //
+ if !present {
b.cachedHash = -1
b.configs = append(b.configs, config) // Track order here
return true
@@ -154,11 +157,14 @@ func (b *BaseATNConfigSet) Add(config ATNConfig, mergeCache *DoubleDict) bool {
return true
}
-func (b *BaseATNConfigSet) GetStates() Set {
- states := newArray2DHashSet(nil, nil)
+func (b *BaseATNConfigSet) GetStates() *JStore[ATNState, Comparator[ATNState]] {
+
+ // states uses the standard comparator provided by the ATNState instance
+ //
+ states := NewJStore[ATNState, Comparator[ATNState]](aStateEqInst)
for i := 0; i < len(b.configs); i++ {
- states.Add(b.configs[i].GetState())
+ states.Put(b.configs[i].GetState())
}
return states
@@ -214,7 +220,34 @@ func (b *BaseATNConfigSet) AddAll(coll []ATNConfig) bool {
return false
}
-func (b *BaseATNConfigSet) Equals(other interface{}) bool {
+// Compare is a hack function just to verify that adding DFAstares to the known
+// set works, so long as comparison of ATNConfigSet s works. For that to work, we
+// need to make sure that the set of ATNConfigs in two sets are equivalent. We can't
+// know the order, so we do this inefficient hack. If this proves the point, then
+// we can change the config set to a better structure.
+func (b *BaseATNConfigSet) Compare(bs *BaseATNConfigSet) bool {
+ if len(b.configs) != len(bs.configs) {
+ return false
+ }
+
+ for _, c := range b.configs {
+ found := false
+ for _, c2 := range bs.configs {
+ if c.Equals(c2) {
+ found = true
+ break
+ }
+ }
+
+ if !found {
+ return false
+ }
+
+ }
+ return true
+}
+
+func (b *BaseATNConfigSet) Equals(other Collectable[ATNConfig]) bool {
if b == other {
return true
} else if _, ok := other.(*BaseATNConfigSet); !ok {
@@ -224,15 +257,15 @@ func (b *BaseATNConfigSet) Equals(other interface{}) bool {
other2 := other.(*BaseATNConfigSet)
return b.configs != nil &&
- // TODO: b.configs.equals(other2.configs) && // TODO: Is b necessary?
b.fullCtx == other2.fullCtx &&
b.uniqueAlt == other2.uniqueAlt &&
b.conflictingAlts == other2.conflictingAlts &&
b.hasSemanticContext == other2.hasSemanticContext &&
- b.dipsIntoOuterContext == other2.dipsIntoOuterContext
+ b.dipsIntoOuterContext == other2.dipsIntoOuterContext &&
+ b.Compare(other2)
}
-func (b *BaseATNConfigSet) hash() int {
+func (b *BaseATNConfigSet) Hash() int {
if b.readOnly {
if b.cachedHash == -1 {
b.cachedHash = b.hashCodeConfigs()
@@ -247,7 +280,7 @@ func (b *BaseATNConfigSet) hash() int {
func (b *BaseATNConfigSet) hashCodeConfigs() int {
h := 1
for _, config := range b.configs {
- h = 31*h + config.hash()
+ h = 31*h + config.Hash()
}
return h
}
@@ -283,7 +316,7 @@ func (b *BaseATNConfigSet) Clear() {
b.configs = make([]ATNConfig, 0)
b.cachedHash = -1
- b.configLookup = newArray2DHashSet(nil, equalATNConfigs)
+ b.configLookup = NewJStore[ATNConfig, Comparator[ATNConfig]](atnConfCompInst)
}
func (b *BaseATNConfigSet) FullContext() bool {
@@ -365,7 +398,8 @@ type OrderedATNConfigSet struct {
func NewOrderedATNConfigSet() *OrderedATNConfigSet {
b := NewBaseATNConfigSet(false)
- b.configLookup = newArray2DHashSet(nil, nil)
+ // This set uses the standard Hash() and Equals() from ATNConfig
+ b.configLookup = NewJStore[ATNConfig, Comparator[ATNConfig]](aConfEqInst)
return &OrderedATNConfigSet{BaseATNConfigSet: b}
}
@@ -375,7 +409,7 @@ func hashATNConfig(i interface{}) int {
hash := 7
hash = 31*hash + o.GetState().GetStateNumber()
hash = 31*hash + o.GetAlt()
- hash = 31*hash + o.GetSemanticContext().hash()
+ hash = 31*hash + o.GetSemanticContext().Hash()
return hash
}
@@ -403,5 +437,5 @@ func equalATNConfigs(a, b interface{}) bool {
return false
}
- return ai.GetSemanticContext().equals(bi.GetSemanticContext())
+ return ai.GetSemanticContext().Equals(bi.GetSemanticContext())
}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_deserialization_options.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_deserialization_options.go
similarity index 96%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_deserialization_options.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_deserialization_options.go
index cb8eafb0b..3c975ec7b 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_deserialization_options.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_deserialization_options.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_deserializer.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_deserializer.go
similarity index 99%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_deserializer.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_deserializer.go
index aea9bbfa9..3888856b4 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_deserializer.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_deserializer.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_simulator.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_simulator.go
similarity index 94%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_simulator.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_simulator.go
index d5454d6d5..41529115f 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_simulator.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_simulator.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_state.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_state.go
similarity index 97%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_state.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_state.go
index 3835bb2e9..1f2a56bc3 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_state.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_state.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -49,7 +49,8 @@ type ATNState interface {
AddTransition(Transition, int)
String() string
- hash() int
+ Hash() int
+ Equals(Collectable[ATNState]) bool
}
type BaseATNState struct {
@@ -123,7 +124,7 @@ func (as *BaseATNState) SetNextTokenWithinRule(v *IntervalSet) {
as.NextTokenWithinRule = v
}
-func (as *BaseATNState) hash() int {
+func (as *BaseATNState) Hash() int {
return as.stateNumber
}
@@ -131,7 +132,7 @@ func (as *BaseATNState) String() string {
return strconv.Itoa(as.stateNumber)
}
-func (as *BaseATNState) equals(other interface{}) bool {
+func (as *BaseATNState) Equals(other Collectable[ATNState]) bool {
if ot, ok := other.(ATNState); ok {
return as.stateNumber == ot.GetStateNumber()
}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_type.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_type.go
similarity index 79%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_type.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_type.go
index a7b48976b..3a515a145 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/atn_type.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/atn_type.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/char_stream.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/char_stream.go
similarity index 82%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/char_stream.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/char_stream.go
index 70c1207f7..c33f0adb5 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/char_stream.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/char_stream.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/common_token_factory.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_factory.go
similarity index 96%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/common_token_factory.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_factory.go
index 330ff8f31..1bb0314ea 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/common_token_factory.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_factory.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/common_token_stream.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_stream.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/common_token_stream.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_stream.go
index c90e9b890..c6c9485a2 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/common_token_stream.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_stream.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -331,10 +331,12 @@ func (c *CommonTokenStream) GetTextFromRuleContext(interval RuleContext) string
func (c *CommonTokenStream) GetTextFromInterval(interval *Interval) string {
c.lazyInit()
- c.Fill()
if interval == nil {
+ c.Fill()
interval = NewInterval(0, len(c.tokens)-1)
+ } else {
+ c.Sync(interval.Stop)
}
start := interval.Start
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/comparators.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/comparators.go
new file mode 100644
index 000000000..9ea320053
--- /dev/null
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/comparators.go
@@ -0,0 +1,147 @@
+package antlr
+
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
+// Use of this file is governed by the BSD 3-clause license that
+// can be found in the LICENSE.txt file in the project root.
+
+// This file contains all the implementations of custom comparators used for generic collections when the
+// Hash() and Equals() funcs supplied by the struct objects themselves need to be overridden. Normally, we would
+// put the comparators in the source files for the structs themselves, but because the organization of this code
+// loosely follows the Java code, it was confusing to work out which comparator was where and was used by
+// which instantiation of a collection. For instance, an Array2DHashSet in the Java source, when used with ATNConfig
+// collections, requires three different comparators depending on what the collection is being used for. Collecting - pun intended -
+// all the comparators here makes it much easier to see which implementation of hash and equals is used by which collection.
+// It also makes it easy to verify that the Hash() and Equals() functions marry up with the Java implementations.
+
+// ObjEqComparator is the equivalent of the Java ObjectEqualityComparator, which is the default instance of
+// Equality comparator. We do not have inheritance in Go, only interfaces, so we use generics to enforce some
+// type safety and avoid having to implement this for every type that we want to perform comparison on.
+//
+// This comparator works by using the standard Hash() and Equals() methods of the type T that is being compared, which
+// allows us to use it in any collection instance that does not require a special hash or equals implementation.
+type ObjEqComparator[T Collectable[T]] struct{}
+
+var (
+ aStateEqInst = &ObjEqComparator[ATNState]{}
+ aConfEqInst = &ObjEqComparator[ATNConfig]{}
+ aConfCompInst = &ATNConfigComparator[ATNConfig]{}
+ atnConfCompInst = &BaseATNConfigComparator[ATNConfig]{}
+ dfaStateEqInst = &ObjEqComparator[*DFAState]{}
+ semctxEqInst = &ObjEqComparator[SemanticContext]{}
+ atnAltCfgEqInst = &ATNAltConfigComparator[ATNConfig]{}
+)
+
+// Equals2 delegates to the Equals() method of type T
+func (c *ObjEqComparator[T]) Equals2(o1, o2 T) bool {
+ return o1.Equals(o2)
+}
+
+// Hash1 delegates to the Hash() method of type T
+func (c *ObjEqComparator[T]) Hash1(o T) int {
+
+ return o.Hash()
+}
+
+type SemCComparator[T Collectable[T]] struct{}
+
+// ATNConfigComparator is used as the comparator for the configLookup field of an ATNConfigSet
+// and has a custom Equals() and Hash() implementation, because equality is not based on the
+// standard Hash() and Equals() methods of the ATNConfig type.
+type ATNConfigComparator[T Collectable[T]] struct {
+}
+
+// Equals2 is a custom comparator for ATNConfigs specifically for configLookup
+func (c *ATNConfigComparator[T]) Equals2(o1, o2 ATNConfig) bool {
+
+ // Same pointer, must be equal, even if both nil
+ //
+ if o1 == o2 {
+ return true
+
+ }
+
+ // If either is nil, but not both, then the result is false
+ //
+ if o1 == nil || o2 == nil {
+ return false
+ }
+
+ return o1.GetState().GetStateNumber() == o2.GetState().GetStateNumber() &&
+ o1.GetAlt() == o2.GetAlt() &&
+ o1.GetSemanticContext().Equals(o2.GetSemanticContext())
+}
+
+// Hash1 is a custom hash implementation for ATNConfigs, specifically for configLookup
+func (c *ATNConfigComparator[T]) Hash1(o ATNConfig) int {
+ hash := 7
+ hash = 31*hash + o.GetState().GetStateNumber()
+ hash = 31*hash + o.GetAlt()
+ hash = 31*hash + o.GetSemanticContext().Hash()
+ return hash
+}
+
+// ATNAltConfigComparator is used as the comparator for mapping configs to Alt Bitsets
+type ATNAltConfigComparator[T Collectable[T]] struct {
+}
+
+// Equals2 is a custom comparator for ATNConfigs specifically for configLookup
+func (c *ATNAltConfigComparator[T]) Equals2(o1, o2 ATNConfig) bool {
+
+ // Same pointer, must be equal, even if both nil
+ //
+ if o1 == o2 {
+ return true
+
+ }
+
+ // If either is nil, but not both, then the result is false
+ //
+ if o1 == nil || o2 == nil {
+ return false
+ }
+
+ return o1.GetState().GetStateNumber() == o2.GetState().GetStateNumber() &&
+ o1.GetContext().Equals(o2.GetContext())
+}
+
+// Hash1 is a custom hash implementation for ATNConfigs, specifically for configLookup
+func (c *ATNAltConfigComparator[T]) Hash1(o ATNConfig) int {
+ h := murmurInit(7)
+ h = murmurUpdate(h, o.GetState().GetStateNumber())
+ h = murmurUpdate(h, o.GetContext().Hash())
+ return murmurFinish(h, 2)
+}
+
+// BaseATNConfigComparator is used as the comparator for the configLookup field of a BaseATNConfigSet
+// and has a custom Equals() and Hash() implementation, because equality is not based on the
+// standard Hash() and Equals() methods of the ATNConfig type.
+type BaseATNConfigComparator[T Collectable[T]] struct {
+}
+
+// Equals2 is a custom comparator for ATNConfigs specifically for baseATNConfigSet
+func (c *BaseATNConfigComparator[T]) Equals2(o1, o2 ATNConfig) bool {
+
+ // Same pointer, must be equal, even if both nil
+ //
+ if o1 == o2 {
+ return true
+
+ }
+
+ // If either is nil, but not both, then the result is false
+ //
+ if o1 == nil || o2 == nil {
+ return false
+ }
+
+ return o1.GetState().GetStateNumber() == o2.GetState().GetStateNumber() &&
+ o1.GetAlt() == o2.GetAlt() &&
+ o1.GetSemanticContext().Equals(o2.GetSemanticContext())
+}
+
+// Hash1 is a custom hash implementation for ATNConfigs, specifically for configLookup, but in fact just
+// delegates to the standard Hash() method of the ATNConfig type.
+func (c *BaseATNConfigComparator[T]) Hash1(o ATNConfig) int {
+
+ return o.Hash()
+}
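
To make the Hash1/Equals2 delegation pattern above concrete, here is a minimal, self-contained sketch rather than vendored code: the `point` type and `objEq` comparator are invented for illustration, while the `Collectable` and `Comparator` shapes mirror the interfaces these comparators implement.

```go
package main

import "fmt"

// Collectable mirrors the constraint used above: a key must supply Hash and Equals.
type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}

// Comparator mirrors the Hash1/Equals2 pair implemented by the comparators above.
type Comparator[T any] interface {
	Hash1(o T) int
	Equals2(a, b T) bool
}

// point stands in for ATNConfig/ATNState in this sketch.
type point struct{ x, y int }

func (p point) Hash() int { return 31*p.x + p.y }

func (p point) Equals(other Collectable[point]) bool {
	q, ok := other.(point)
	return ok && p == q
}

// objEq delegates to the element's own methods, like ObjEqComparator above.
type objEq[T Collectable[T]] struct{}

func (objEq[T]) Hash1(o T) int       { return o.Hash() }
func (objEq[T]) Equals2(a, b T) bool { return a.Equals(b) }

func main() {
	var cmp Comparator[point] = objEq[point]{}
	a, b := point{1, 2}, point{1, 2}
	fmt.Println(cmp.Hash1(a) == cmp.Hash1(b)) // true: same content, same hash
	fmt.Println(cmp.Equals2(a, b))            // true: delegated to point.Equals
}
```

The vendored comparators follow the same shape; they differ only in whether they delegate to the element's own Hash()/Equals() (ObjEqComparator) or compute a purpose-specific hash and equality (the ATNConfig comparators).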
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa.go
similarity index 80%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa.go
index d55a2a87d..bfd43e1f7 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa.go
@@ -1,13 +1,9 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
package antlr
-import (
- "sort"
-)
-
type DFA struct {
// atnStartState is the ATN state in which this was created
atnStartState DecisionState
@@ -15,8 +11,15 @@ type DFA struct {
decision int
// states is all the DFA states. Use Map to get the old state back; Set can only
- // indicate whether it is there.
- states map[int]*DFAState
+ // indicate whether it is there. Go maps handle key hashing and collisions well, but the
+ // DFAState is an object and can't be used directly as the key as it can in, say, Java
+ // and C#, where two objects with the same hashcode are compared with Equals() to see
+ // if they really are the same object.
+ //
+ states *JStore[*DFAState, *ObjEqComparator[*DFAState]]
+
+ numstates int
s0 *DFAState
@@ -29,7 +32,7 @@ func NewDFA(atnStartState DecisionState, decision int) *DFA {
dfa := &DFA{
atnStartState: atnStartState,
decision: decision,
- states: make(map[int]*DFAState),
+ states: NewJStore[*DFAState, *ObjEqComparator[*DFAState]](dfaStateEqInst),
}
if s, ok := atnStartState.(*StarLoopEntryState); ok && s.precedenceRuleDecision {
dfa.precedenceDfa = true
@@ -92,7 +95,8 @@ func (d *DFA) getPrecedenceDfa() bool {
// true or nil otherwise, and d.precedenceDfa is updated.
func (d *DFA) setPrecedenceDfa(precedenceDfa bool) {
if d.getPrecedenceDfa() != precedenceDfa {
- d.setStates(make(map[int]*DFAState))
+ d.states = NewJStore[*DFAState, *ObjEqComparator[*DFAState]](dfaStateEqInst)
+ d.numstates = 0
if precedenceDfa {
precedenceState := NewDFAState(-1, NewBaseATNConfigSet(false))
@@ -117,38 +121,12 @@ func (d *DFA) setS0(s *DFAState) {
d.s0 = s
}
-func (d *DFA) getState(hash int) (*DFAState, bool) {
- s, ok := d.states[hash]
- return s, ok
-}
-
-func (d *DFA) setStates(states map[int]*DFAState) {
- d.states = states
-}
-
-func (d *DFA) setState(hash int, state *DFAState) {
- d.states[hash] = state
-}
-
-func (d *DFA) numStates() int {
- return len(d.states)
-}
-
-type dfaStateList []*DFAState
-
-func (d dfaStateList) Len() int { return len(d) }
-func (d dfaStateList) Less(i, j int) bool { return d[i].stateNumber < d[j].stateNumber }
-func (d dfaStateList) Swap(i, j int) { d[i], d[j] = d[j], d[i] }
-
// sortedStates returns the states in d sorted by their state number.
func (d *DFA) sortedStates() []*DFAState {
- vs := make([]*DFAState, 0, len(d.states))
-
- for _, v := range d.states {
- vs = append(vs, v)
- }
- sort.Sort(dfaStateList(vs))
+ vs := d.states.SortedSlice(func(i, j *DFAState) bool {
+ return i.stateNumber < j.stateNumber
+ })
return vs
}
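
The comment above is the core of the change to DFA.states: a Go map cannot use a *DFAState as a content-equal key. Keying by pointer misses logically equal states, and keying by the raw hash (as the old code did) cannot tell genuinely different states apart when their hashes collide. Below is an illustrative, self-contained sketch of the bucket-plus-equals approach that JStore applies to *DFAState; the `state` and `store` names are not the vendored API.

```go
package main

import "fmt"

// state stands in for DFAState: equality is defined by content, not pointer identity.
type state struct{ configs string }

func (s *state) hash() int {
	h := 7
	for _, c := range s.configs {
		h = 31*h + int(c)
	}
	return h
}

func (s *state) equals(o *state) bool { return s.configs == o.configs }

// store buckets values by hash and resolves collisions with equals(),
// roughly what JStore does for *DFAState.
type store struct{ buckets map[int][]*state }

func (st *store) put(v *state) (*state, bool) {
	h := v.hash()
	for _, e := range st.buckets[h] {
		if e.equals(v) {
			return e, true // already present: hand back the canonical instance
		}
	}
	st.buckets[h] = append(st.buckets[h], v)
	return v, false
}

func main() {
	a := &state{configs: "s1|s2"}
	b := &state{configs: "s1|s2"} // same content, different pointer

	byPtr := map[*state]bool{a: true}
	fmt.Println(byPtr[b]) // false: a pointer-keyed map cannot deduplicate

	st := &store{buckets: map[int][]*state{}}
	st.put(a)
	_, existed := st.put(b)
	fmt.Println(existed) // true: hash + equals finds the existing state
}
```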
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa_serializer.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa_serializer.go
similarity index 97%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa_serializer.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa_serializer.go
index bf2ccc06c..84d0a31e5 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa_serializer.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa_serializer.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa_state.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa_state.go
similarity index 90%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa_state.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa_state.go
index 970ed1986..c90dec55c 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/dfa_state.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/dfa_state.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -90,16 +90,16 @@ func NewDFAState(stateNumber int, configs ATNConfigSet) *DFAState {
}
// GetAltSet gets the set of all alts mentioned by all ATN configurations in d.
-func (d *DFAState) GetAltSet() Set {
- alts := newArray2DHashSet(nil, nil)
+func (d *DFAState) GetAltSet() []int {
+ var alts []int
if d.configs != nil {
for _, c := range d.configs.GetItems() {
- alts.Add(c.GetAlt())
+ alts = append(alts, c.GetAlt())
}
}
- if alts.Len() == 0 {
+ if len(alts) == 0 {
return nil
}
@@ -130,27 +130,6 @@ func (d *DFAState) setPrediction(v int) {
d.prediction = v
}
-// equals returns whether d equals other. Two DFAStates are equal if their ATN
-// configuration sets are the same. This method is used to see if a state
-// already exists.
-//
-// Because the number of alternatives and number of ATN configurations are
-// finite, there is a finite number of DFA states that can be processed. This is
-// necessary to show that the algorithm terminates.
-//
-// Cannot test the DFA state numbers here because in
-// ParserATNSimulator.addDFAState we need to know if any other state exists that
-// has d exact set of ATN configurations. The stateNumber is irrelevant.
-func (d *DFAState) equals(other interface{}) bool {
- if d == other {
- return true
- } else if _, ok := other.(*DFAState); !ok {
- return false
- }
-
- return d.configs.Equals(other.(*DFAState).configs)
-}
-
func (d *DFAState) String() string {
var s string
if d.isAcceptState {
@@ -164,8 +143,27 @@ func (d *DFAState) String() string {
return fmt.Sprintf("%d:%s%s", d.stateNumber, fmt.Sprint(d.configs), s)
}
-func (d *DFAState) hash() int {
+func (d *DFAState) Hash() int {
h := murmurInit(7)
- h = murmurUpdate(h, d.configs.hash())
+ h = murmurUpdate(h, d.configs.Hash())
return murmurFinish(h, 1)
}
+
+// Equals returns whether d equals other. Two DFAStates are equal if their ATN
+// configuration sets are the same. This method is used to see if a state
+// already exists.
+//
+// Because the number of alternatives and number of ATN configurations are
+// finite, there is a finite number of DFA states that can be processed. This is
+// necessary to show that the algorithm terminates.
+//
+// Cannot test the DFA state numbers here because in
+// ParserATNSimulator.addDFAState we need to know if any other state exists that
+// has d exact set of ATN configurations. The stateNumber is irrelevant.
+func (d *DFAState) Equals(o Collectable[*DFAState]) bool {
+ if d == o {
+ return true
+ }
+
+ return d.configs.Equals(o.(*DFAState).configs)
+}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/diagnostic_error_listener.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/diagnostic_error_listener.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/diagnostic_error_listener.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/diagnostic_error_listener.go
index 1fec43d9d..c55bcc19b 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/diagnostic_error_listener.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/diagnostic_error_listener.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -87,7 +87,6 @@ func (d *DiagnosticErrorListener) getDecisionDescription(recognizer Parser, dfa
return strconv.Itoa(decision) + " (" + ruleName + ")"
}
-//
// Computes the set of conflicting or ambiguous alternatives from a
// configuration set, if that information was not already provided by the
// parser.
@@ -97,7 +96,6 @@ func (d *DiagnosticErrorListener) getDecisionDescription(recognizer Parser, dfa
// @param configs The conflicting or ambiguous configuration set.
// @return Returns {@code ReportedAlts} if it is not {@code nil}, otherwise
// returns the set of alternatives represented in {@code configs}.
-//
func (d *DiagnosticErrorListener) getConflictingAlts(ReportedAlts *BitSet, set ATNConfigSet) *BitSet {
if ReportedAlts != nil {
return ReportedAlts
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/error_listener.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/error_listener.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/error_listener.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/error_listener.go
index 028e1a9d7..f679f0dcd 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/error_listener.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/error_listener.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -48,12 +48,9 @@ func NewConsoleErrorListener() *ConsoleErrorListener {
return new(ConsoleErrorListener)
}
-//
// Provides a default instance of {@link ConsoleErrorListener}.
-//
var ConsoleErrorListenerINSTANCE = NewConsoleErrorListener()
-//
// {@inheritDoc}
//
//
@@ -64,7 +61,6 @@ var ConsoleErrorListenerINSTANCE = NewConsoleErrorListener()
//
// line line:charPositionInLinemsg
//
-//
func (c *ConsoleErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException) {
fmt.Fprintln(os.Stderr, "line "+strconv.Itoa(line)+":"+strconv.Itoa(column)+" "+msg)
}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/error_strategy.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/error_strategy.go
similarity index 99%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/error_strategy.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/error_strategy.go
index c4080dbfd..5c0a637ba 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/error_strategy.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/error_strategy.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -23,7 +23,6 @@ type ErrorStrategy interface {
// This is the default implementation of {@link ANTLRErrorStrategy} used for
// error Reporting and recovery in ANTLR parsers.
-//
type DefaultErrorStrategy struct {
errorRecoveryMode bool
lastErrorIndex int
@@ -61,12 +60,10 @@ func (d *DefaultErrorStrategy) reset(recognizer Parser) {
d.endErrorCondition(recognizer)
}
-//
// This method is called to enter error recovery mode when a recognition
// exception is Reported.
//
// @param recognizer the parser instance
-//
func (d *DefaultErrorStrategy) beginErrorCondition(recognizer Parser) {
d.errorRecoveryMode = true
}
@@ -75,28 +72,23 @@ func (d *DefaultErrorStrategy) InErrorRecoveryMode(recognizer Parser) bool {
return d.errorRecoveryMode
}
-//
// This method is called to leave error recovery mode after recovering from
// a recognition exception.
//
// @param recognizer
-//
func (d *DefaultErrorStrategy) endErrorCondition(recognizer Parser) {
d.errorRecoveryMode = false
d.lastErrorStates = nil
d.lastErrorIndex = -1
}
-//
// {@inheritDoc}
//
//
// The default implementation simply calls {@link //endErrorCondition}.
// The default implementation returns immediately if the handler is already
@@ -114,7 +106,6 @@ func (d *DefaultErrorStrategy) ReportMatch(recognizer Parser) {
//
// All other types: calls {@link Parser//NotifyErrorListeners} to Report
// the exception
//
-//
func (d *DefaultErrorStrategy) ReportError(recognizer Parser, e RecognitionException) {
// if we've already Reported an error and have not Matched a token
// yet successfully, don't Report any errors.
@@ -142,7 +133,6 @@ func (d *DefaultErrorStrategy) ReportError(recognizer Parser, e RecognitionExcep
//
// The default implementation reSynchronizes the parser by consuming tokens
// until we find one in the reSynchronization set--loosely the set of tokens
// that can follow the current rule.
-//
func (d *DefaultErrorStrategy) Recover(recognizer Parser, e RecognitionException) {
if d.lastErrorIndex == recognizer.GetInputStream().Index() &&
@@ -206,7 +196,6 @@ func (d *DefaultErrorStrategy) Recover(recognizer Parser, e RecognitionException
// compare token set at the start of the loop and at each iteration. If for
// some reason speed is suffering for you, you can turn off d
// functionality by simply overriding d method as a blank { }.
-//
func (d *DefaultErrorStrategy) Sync(recognizer Parser) {
// If already recovering, don't try to Sync
if d.InErrorRecoveryMode(recognizer) {
@@ -247,7 +236,6 @@ func (d *DefaultErrorStrategy) Sync(recognizer Parser) {
//
// @param recognizer the parser instance
// @param e the recognition exception
-//
func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *NoViableAltException) {
tokens := recognizer.GetTokenStream()
var input string
@@ -264,7 +252,6 @@ func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *N
recognizer.NotifyErrorListeners(msg, e.offendingToken, e)
}
-//
// This is called by {@link //ReportError} when the exception is an
// {@link InputMisMatchException}.
//
@@ -272,14 +259,12 @@ func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *N
//
// @param recognizer the parser instance
// @param e the recognition exception
-//
func (this *DefaultErrorStrategy) ReportInputMisMatch(recognizer Parser, e *InputMisMatchException) {
msg := "mismatched input " + this.GetTokenErrorDisplay(e.offendingToken) +
" expecting " + e.getExpectedTokens().StringVerbose(recognizer.GetLiteralNames(), recognizer.GetSymbolicNames(), false)
recognizer.NotifyErrorListeners(msg, e.offendingToken, e)
}
-//
// This is called by {@link //ReportError} when the exception is a
// {@link FailedPredicateException}.
//
@@ -287,7 +272,6 @@ func (this *DefaultErrorStrategy) ReportInputMisMatch(recognizer Parser, e *Inpu
//
// @param recognizer the parser instance
// @param e the recognition exception
-//
func (d *DefaultErrorStrategy) ReportFailedPredicate(recognizer Parser, e *FailedPredicateException) {
ruleName := recognizer.GetRuleNames()[recognizer.GetParserRuleContext().GetRuleIndex()]
msg := "rule " + ruleName + " " + e.message
@@ -310,7 +294,6 @@ func (d *DefaultErrorStrategy) ReportFailedPredicate(recognizer Parser, e *Faile
// {@link Parser//NotifyErrorListeners}.
//
// @param recognizer the parser instance
-//
func (d *DefaultErrorStrategy) ReportUnwantedToken(recognizer Parser) {
if d.InErrorRecoveryMode(recognizer) {
return
@@ -339,7 +322,6 @@ func (d *DefaultErrorStrategy) ReportUnwantedToken(recognizer Parser) {
// {@link Parser//NotifyErrorListeners}.
//
// @param recognizer the parser instance
-//
func (d *DefaultErrorStrategy) ReportMissingToken(recognizer Parser) {
if d.InErrorRecoveryMode(recognizer) {
return
@@ -392,15 +374,14 @@ func (d *DefaultErrorStrategy) ReportMissingToken(recognizer Parser) {
// derivation:
//
//
-// => ID '=' '(' INT ')' ('+' atom)* ''
+// => ID '=' '(' INT ')' ('+' atom)* â€
// ^
//
//
-// The attempt to Match {@code ')'} will fail when it sees {@code ''} and
-// call {@link //recoverInline}. To recover, it sees that {@code LA(1)==''}
+// The attempt to Match {@code ')'} will fail when it sees {@code â€} and
+// call {@link //recoverInline}. To recover, it sees that {@code LA(1)==â€}
// is in the set of tokens that can follow the {@code ')'} token reference
// in rule {@code atom}. It can assume that you forgot the {@code ')'}.
-//
func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token {
// SINGLE TOKEN DELETION
MatchedSymbol := d.SingleTokenDeletion(recognizer)
@@ -418,7 +399,6 @@ func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token {
panic(NewInputMisMatchException(recognizer))
}
-//
// This method implements the single-token insertion inline error recovery
// strategy. It is called by {@link //recoverInline} if the single-token
// deletion strategy fails to recover from the mismatched input. If this
@@ -434,7 +414,6 @@ func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token {
// @param recognizer the parser instance
// @return {@code true} if single-token insertion is a viable recovery
// strategy for the current mismatched input, otherwise {@code false}
-//
func (d *DefaultErrorStrategy) SingleTokenInsertion(recognizer Parser) bool {
currentSymbolType := recognizer.GetTokenStream().LA(1)
// if current token is consistent with what could come after current
@@ -469,7 +448,6 @@ func (d *DefaultErrorStrategy) SingleTokenInsertion(recognizer Parser) bool {
// @return the successfully Matched {@link Token} instance if single-token
// deletion successfully recovers from the mismatched input, otherwise
// {@code nil}
-//
func (d *DefaultErrorStrategy) SingleTokenDeletion(recognizer Parser) Token {
NextTokenType := recognizer.GetTokenStream().LA(2)
expecting := d.GetExpectedTokens(recognizer)
@@ -507,7 +485,6 @@ func (d *DefaultErrorStrategy) SingleTokenDeletion(recognizer Parser) Token {
// a CommonToken of the appropriate type. The text will be the token.
// If you change what tokens must be created by the lexer,
// override d method to create the appropriate tokens.
-//
func (d *DefaultErrorStrategy) GetMissingSymbol(recognizer Parser) Token {
currentSymbol := recognizer.GetCurrentToken()
expecting := d.GetExpectedTokens(recognizer)
@@ -546,7 +523,6 @@ func (d *DefaultErrorStrategy) GetExpectedTokens(recognizer Parser) *IntervalSet
// the token). This is better than forcing you to override a method in
// your token objects because you don't have to go modify your lexer
// so that it creates a NewJava type.
-//
func (d *DefaultErrorStrategy) GetTokenErrorDisplay(t Token) string {
if t == nil {
return ""
@@ -578,7 +554,7 @@ func (d *DefaultErrorStrategy) escapeWSAndQuote(s string) string {
// from within the rule i.e., the FIRST computation done by
// ANTLR stops at the end of a rule.
//
-// EXAMPLE
+// # EXAMPLE
//
// When you find a "no viable alt exception", the input is not
// consistent with any of the alternatives for rule r. The best
@@ -597,7 +573,6 @@ func (d *DefaultErrorStrategy) escapeWSAndQuote(s string) string {
// c : ID
// | INT
//
-//
// At each rule invocation, the set of tokens that could follow
// that rule is pushed on a stack. Here are the various
// context-sensitive follow sets:
@@ -660,7 +635,6 @@ func (d *DefaultErrorStrategy) escapeWSAndQuote(s string) string {
//
// Like Grosch I implement context-sensitive FOLLOW sets that are combined
// at run-time upon error to avoid overhead during parsing.
-//
func (d *DefaultErrorStrategy) getErrorRecoverySet(recognizer Parser) *IntervalSet {
atn := recognizer.GetInterpreter().atn
ctx := recognizer.GetParserRuleContext()
@@ -733,7 +707,6 @@ func NewBailErrorStrategy() *BailErrorStrategy {
// in a {@link ParseCancellationException} so it is not caught by the
// rule func catches. Use {@link Exception//getCause()} to get the
// original {@link RecognitionException}.
-//
func (b *BailErrorStrategy) Recover(recognizer Parser, e RecognitionException) {
context := recognizer.GetParserRuleContext()
for context != nil {
@@ -749,7 +722,6 @@ func (b *BailErrorStrategy) Recover(recognizer Parser, e RecognitionException) {
// Make sure we don't attempt to recover inline if the parser
// successfully recovers, it won't panic an exception.
-//
func (b *BailErrorStrategy) RecoverInline(recognizer Parser) Token {
b.Recover(recognizer, NewInputMisMatchException(recognizer))
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/errors.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/errors.go
new file mode 100644
index 000000000..3954c1378
--- /dev/null
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/errors.go
@@ -0,0 +1,238 @@
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
+// Use of this file is governed by the BSD 3-clause license that
+// can be found in the LICENSE.txt file in the project root.
+
+package antlr
+
+// The root of the ANTLR exception hierarchy. In general, ANTLR tracks just
+// 3 kinds of errors: prediction errors, failed predicate errors, and
+// mismatched input errors. In each case, the parser knows where it is
+// in the input, where it is in the ATN, the rule invocation stack,
+// and what kind of problem occurred.
+
+type RecognitionException interface {
+ GetOffendingToken() Token
+ GetMessage() string
+ GetInputStream() IntStream
+}
+
+type BaseRecognitionException struct {
+ message string
+ recognizer Recognizer
+ offendingToken Token
+ offendingState int
+ ctx RuleContext
+ input IntStream
+}
+
+func NewBaseRecognitionException(message string, recognizer Recognizer, input IntStream, ctx RuleContext) *BaseRecognitionException {
+
+ // todo
+ // Error.call(this)
+ //
+ // if (!!Error.captureStackTrace) {
+ // Error.captureStackTrace(this, RecognitionException)
+ // } else {
+ // stack := NewError().stack
+ // }
+ // TODO may be able to use - "runtime" func Stack(buf []byte, all bool) int
+
+ t := new(BaseRecognitionException)
+
+ t.message = message
+ t.recognizer = recognizer
+ t.input = input
+ t.ctx = ctx
+ // The current {@link Token} when an error occurred. Since not all streams
+ // support accessing symbols by index, we have to track the {@link Token}
+ // instance itself.
+ t.offendingToken = nil
+ // Get the ATN state number the parser was in at the time the error
+ // occurred. For {@link NoViableAltException} and
+ // {@link LexerNoViableAltException} exceptions, this is the
+ // {@link DecisionState} number. For others, it is the state whose outgoing
+ // edge we couldn't Match.
+ t.offendingState = -1
+ if t.recognizer != nil {
+ t.offendingState = t.recognizer.GetState()
+ }
+
+ return t
+}
+
+func (b *BaseRecognitionException) GetMessage() string {
+ return b.message
+}
+
+func (b *BaseRecognitionException) GetOffendingToken() Token {
+ return b.offendingToken
+}
+
+func (b *BaseRecognitionException) GetInputStream() IntStream {
+ return b.input
+}
+
+// If the state number is not known, b method returns -1.
+
+// Gets the set of input symbols which could potentially follow the
+// previously Matched symbol at the time b exception was raised.
+//
+// If the set of expected tokens is not known and could not be computed,
+// b method returns {@code nil}.
+//
+// @return The set of token types that could potentially follow the current
+// state in the ATN, or {@code nil} if the information is not available.
+// /
+func (b *BaseRecognitionException) getExpectedTokens() *IntervalSet {
+ if b.recognizer != nil {
+ return b.recognizer.GetATN().getExpectedTokens(b.offendingState, b.ctx)
+ }
+
+ return nil
+}
+
+func (b *BaseRecognitionException) String() string {
+ return b.message
+}
+
+type LexerNoViableAltException struct {
+ *BaseRecognitionException
+
+ startIndex int
+ deadEndConfigs ATNConfigSet
+}
+
+func NewLexerNoViableAltException(lexer Lexer, input CharStream, startIndex int, deadEndConfigs ATNConfigSet) *LexerNoViableAltException {
+
+ l := new(LexerNoViableAltException)
+
+ l.BaseRecognitionException = NewBaseRecognitionException("", lexer, input, nil)
+
+ l.startIndex = startIndex
+ l.deadEndConfigs = deadEndConfigs
+
+ return l
+}
+
+func (l *LexerNoViableAltException) String() string {
+ symbol := ""
+ if l.startIndex >= 0 && l.startIndex < l.input.Size() {
+ symbol = l.input.(CharStream).GetTextFromInterval(NewInterval(l.startIndex, l.startIndex))
+ }
+ return "LexerNoViableAltException" + symbol
+}
+
+type NoViableAltException struct {
+ *BaseRecognitionException
+
+ startToken Token
+ offendingToken Token
+ ctx ParserRuleContext
+ deadEndConfigs ATNConfigSet
+}
+
+// Indicates that the parser could not decide which of two or more paths
+// to take based upon the remaining input. It tracks the starting token
+// of the offending input and also knows where the parser was
+// in the various paths when the error occurred. Reported by ReportNoViableAlternative()
+func NewNoViableAltException(recognizer Parser, input TokenStream, startToken Token, offendingToken Token, deadEndConfigs ATNConfigSet, ctx ParserRuleContext) *NoViableAltException {
+
+ if ctx == nil {
+ ctx = recognizer.GetParserRuleContext()
+ }
+
+ if offendingToken == nil {
+ offendingToken = recognizer.GetCurrentToken()
+ }
+
+ if startToken == nil {
+ startToken = recognizer.GetCurrentToken()
+ }
+
+ if input == nil {
+ input = recognizer.GetInputStream().(TokenStream)
+ }
+
+ n := new(NoViableAltException)
+ n.BaseRecognitionException = NewBaseRecognitionException("", recognizer, input, ctx)
+
+ // Which configurations did we try at input.Index() that couldn't Match
+ // input.LT(1)?//
+ n.deadEndConfigs = deadEndConfigs
+ // The token object at the start index the input stream might
+ // not be buffering tokens so get a reference to it. (At the
+ // time the error occurred, of course the stream needs to keep a
+ // buffer all of the tokens but later we might not have access to those.)
+ n.startToken = startToken
+ n.offendingToken = offendingToken
+
+ return n
+}
+
+type InputMisMatchException struct {
+ *BaseRecognitionException
+}
+
+// This signifies any kind of mismatched input exceptions such as
+// when the current input does not Match the expected token.
+func NewInputMisMatchException(recognizer Parser) *InputMisMatchException {
+
+ i := new(InputMisMatchException)
+ i.BaseRecognitionException = NewBaseRecognitionException("", recognizer, recognizer.GetInputStream(), recognizer.GetParserRuleContext())
+
+ i.offendingToken = recognizer.GetCurrentToken()
+
+ return i
+
+}
+
+// A semantic predicate failed during validation. Validation of predicates
+// occurs when normally parsing the alternative just like Matching a token.
+// Disambiguating predicate evaluation occurs when we test a predicate during
+// prediction.
+
+type FailedPredicateException struct {
+ *BaseRecognitionException
+
+ ruleIndex int
+ predicateIndex int
+ predicate string
+}
+
+func NewFailedPredicateException(recognizer Parser, predicate string, message string) *FailedPredicateException {
+
+ f := new(FailedPredicateException)
+
+ f.BaseRecognitionException = NewBaseRecognitionException(f.formatMessage(predicate, message), recognizer, recognizer.GetInputStream(), recognizer.GetParserRuleContext())
+
+ s := recognizer.GetInterpreter().atn.states[recognizer.GetState()]
+ trans := s.GetTransitions()[0]
+ if trans2, ok := trans.(*PredicateTransition); ok {
+ f.ruleIndex = trans2.ruleIndex
+ f.predicateIndex = trans2.predIndex
+ } else {
+ f.ruleIndex = 0
+ f.predicateIndex = 0
+ }
+ f.predicate = predicate
+ f.offendingToken = recognizer.GetCurrentToken()
+
+ return f
+}
+
+func (f *FailedPredicateException) formatMessage(predicate, message string) string {
+ if message != "" {
+ return message
+ }
+
+ return "failed predicate: {" + predicate + "}?"
+}
+
+type ParseCancellationException struct {
+}
+
+func NewParseCancellationException() *ParseCancellationException {
+ // Error.call(this)
+ // Error.captureStackTrace(this, ParseCancellationException)
+ return new(ParseCancellationException)
+}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/file_stream.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/file_stream.go
similarity index 92%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/file_stream.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/file_stream.go
index 842170c08..bd6ad5efe 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/file_stream.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/file_stream.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/input_stream.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/input_stream.go
similarity index 96%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/input_stream.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/input_stream.go
index 5ff270f53..a8b889ced 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/input_stream.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/input_stream.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/int_stream.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/int_stream.go
similarity index 82%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/int_stream.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/int_stream.go
index 438e0ea6e..4778878bd 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/int_stream.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/int_stream.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/interval_set.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/interval_set.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/interval_set.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/interval_set.go
index 1e9393adb..c1e155e81 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/interval_set.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/interval_set.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -223,6 +223,10 @@ func (i *IntervalSet) StringVerbose(literalNames []string, symbolicNames []strin
return i.toIndexString()
}
+func (i *IntervalSet) GetIntervals() []*Interval {
+ return i.intervals
+}
+
func (i *IntervalSet) toCharString() string {
names := make([]string, len(i.intervals))
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/jcollect.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/jcollect.go
new file mode 100644
index 000000000..e5a74f0c6
--- /dev/null
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/jcollect.go
@@ -0,0 +1,198 @@
+package antlr
+
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
+// Use of this file is governed by the BSD 3-clause license that
+// can be found in the LICENSE.txt file in the project root.
+
+import (
+ "sort"
+)
+
+// Collectable is an interface that a struct should implement if it is to be
+// usable as a key in these collections.
+type Collectable[T any] interface {
+ Hash() int
+ Equals(other Collectable[T]) bool
+}
+
+type Comparator[T any] interface {
+ Hash1(o T) int
+ Equals2(T, T) bool
+}
+
+// JStore implements a container that allows the use of a struct to calculate the key
+// for a collection of values akin to map. This is not meant to be a full-blown HashMap but just
+// serve the needs of the ANTLR Go runtime.
+//
+// For ease of porting the logic of the runtime from the master target (Java), this collection
+// operates in a similar way to Java, in that it can use any struct that supplies a Hash() and Equals()
+// function as the key. The values are stored in a standard Go map which internally is a form of hashmap
+// itself; the key for the Go map is the hash supplied by the key object. The collection deals with
+// hash conflicts by keeping a simple slice of values in the bucket indexed by that hash code. That isn't
+// particularly efficient, but it is simple, and it works. As this is specifically for the ANTLR runtime, and
+// we understand the requirements, this is fine - this is not a general purpose collection.
+type JStore[T any, C Comparator[T]] struct {
+ store map[int][]T
+ len int
+ comparator Comparator[T]
+}
+
+func NewJStore[T any, C Comparator[T]](comparator Comparator[T]) *JStore[T, C] {
+
+ if comparator == nil {
+ panic("comparator cannot be nil")
+ }
+
+ s := &JStore[T, C]{
+ store: make(map[int][]T, 1),
+ comparator: comparator,
+ }
+ return s
+}
+
+// Put will store given value in the collection. Note that the key for storage is generated from
+// the value itself - this is specifically because that is what ANTLR needs - this would not be useful
+// as any kind of general collection.
+//
+// If the key has a hash conflict, then the value will be added to the slice of values associated with the
+// hash, unless the value is already in the slice, in which case the existing value is returned. Value equivalence is
+// tested by calling the equals() method on the key.
+//
+// # If the given value is already present in the store, then the existing value is returned as v and exists is set to true
+//
+// If the given value is not present in the store, then the value is added to the store and returned as v and exists is set to false.
+func (s *JStore[T, C]) Put(value T) (v T, exists bool) { //nolint:ireturn
+
+ kh := s.comparator.Hash1(value)
+
+ for _, v1 := range s.store[kh] {
+ if s.comparator.Equals2(value, v1) {
+ return v1, true
+ }
+ }
+ s.store[kh] = append(s.store[kh], value)
+ s.len++
+ return value, false
+}
+
+// Get will return the value associated with the key - the type of the key is the same type as the value
+// which would not generally be useful, but this is a specific thing for ANTLR where the key is
+// generated using the object we are going to store.
+func (s *JStore[T, C]) Get(key T) (T, bool) { //nolint:ireturn
+
+ kh := s.comparator.Hash1(key)
+
+ for _, v := range s.store[kh] {
+ if s.comparator.Equals2(key, v) {
+ return v, true
+ }
+ }
+ return key, false
+}
+
+// Contains returns true if the given key is present in the store
+func (s *JStore[T, C]) Contains(key T) bool { //nolint:ireturn
+
+ _, present := s.Get(key)
+ return present
+}
+
+func (s *JStore[T, C]) SortedSlice(less func(i, j T) bool) []T {
+ vs := make([]T, 0, len(s.store))
+ for _, v := range s.store {
+ vs = append(vs, v...)
+ }
+ sort.Slice(vs, func(i, j int) bool {
+ return less(vs[i], vs[j])
+ })
+
+ return vs
+}
+
+func (s *JStore[T, C]) Each(f func(T) bool) {
+ for _, e := range s.store {
+ for _, v := range e {
+ f(v)
+ }
+ }
+}
+
+func (s *JStore[T, C]) Len() int {
+ return s.len
+}
+
+func (s *JStore[T, C]) Values() []T {
+ vs := make([]T, 0, len(s.store))
+ for _, e := range s.store {
+ for _, v := range e {
+ vs = append(vs, v)
+ }
+ }
+ return vs
+}
+
+type entry[K, V any] struct {
+ key K
+ val V
+}
+
+type JMap[K, V any, C Comparator[K]] struct {
+ store map[int][]*entry[K, V]
+ len int
+ comparator Comparator[K]
+}
+
+func NewJMap[K, V any, C Comparator[K]](comparator Comparator[K]) *JMap[K, V, C] {
+ return &JMap[K, V, C]{
+ store: make(map[int][]*entry[K, V], 1),
+ comparator: comparator,
+ }
+}
+
+func (m *JMap[K, V, C]) Put(key K, val V) {
+ kh := m.comparator.Hash1(key)
+
+ m.store[kh] = append(m.store[kh], &entry[K, V]{key, val})
+ m.len++
+}
+
+func (m *JMap[K, V, C]) Values() []V {
+ vs := make([]V, 0, len(m.store))
+ for _, e := range m.store {
+ for _, v := range e {
+ vs = append(vs, v.val)
+ }
+ }
+ return vs
+}
+
+func (m *JMap[K, V, C]) Get(key K) (V, bool) {
+
+ var none V
+ kh := m.comparator.Hash1(key)
+ for _, e := range m.store[kh] {
+ if m.comparator.Equals2(e.key, key) {
+ return e.val, true
+ }
+ }
+ return none, false
+}
+
+func (m *JMap[K, V, C]) Len() int {
+ return len(m.store)
+}
+
+func (m *JMap[K, V, C]) Delete(key K) {
+ kh := m.comparator.Hash1(key)
+ for i, e := range m.store[kh] {
+ if m.comparator.Equals2(e.key, key) {
+ m.store[kh] = append(m.store[kh][:i], m.store[kh][i+1:]...)
+ m.len--
+ return
+ }
+ }
+}
+
+func (m *JMap[K, V, C]) Clear() {
+ m.store = make(map[int][]*entry[K, V])
+}
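
The Put/Get contract documented above can be exercised with a small test-style sketch. This is not part of the change; the `intKey` type is invented for illustration, while NewJStore, ObjEqComparator, Put, Len and SortedSlice are the identifiers introduced in the hunk above. A deliberately coarse Hash() forces several keys into one bucket, where Equals() keeps them distinct, and a duplicate Put hands back the value already stored with exists == true.

```go
package antlr

import "testing"

// intKey is a throwaway Collectable used only in this sketch.
type intKey int

func (k intKey) Hash() int { return int(k) % 4 } // deliberately coarse: forces bucket collisions

func (k intKey) Equals(other Collectable[intKey]) bool {
	o, ok := other.(intKey)
	return ok && k == o
}

func TestJStoreSketch(t *testing.T) {
	s := NewJStore[intKey, *ObjEqComparator[intKey]](&ObjEqComparator[intKey]{})

	// 1, 5 and 9 all hash to bucket 1; the store keeps them apart via Equals.
	for _, v := range []intKey{1, 5, 9} {
		if _, exists := s.Put(v); exists {
			t.Fatalf("%d unexpectedly reported as already present", v)
		}
	}

	// Putting a duplicate returns the value already in the store.
	if got, exists := s.Put(intKey(5)); !exists || got != 5 {
		t.Fatalf("duplicate Put returned %v, %v", got, exists)
	}
	if s.Len() != 3 {
		t.Fatalf("want 3 stored values, got %d", s.Len())
	}

	sorted := s.SortedSlice(func(i, j intKey) bool { return i < j })
	t.Log(sorted) // [1 5 9]
}
```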
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer.go
index b04f04572..6533f0516 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -232,8 +232,6 @@ func (b *BaseLexer) NextToken() Token {
}
return b.token
}
-
- return nil
}
// Instruct the lexer to Skip creating a token for current lexer rule
@@ -342,7 +340,7 @@ func (b *BaseLexer) GetCharIndex() int {
}
// Return the text Matched so far for the current token or any text override.
-//Set the complete text of l token it wipes any previous changes to the text.
+// Set the complete text of l token; it wipes any previous changes to the text.
func (b *BaseLexer) GetText() string {
if b.text != "" {
return b.text
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_action.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_action.go
similarity index 91%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_action.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_action.go
index 5a325be13..111656c29 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_action.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_action.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -21,8 +21,8 @@ type LexerAction interface {
getActionType() int
getIsPositionDependent() bool
execute(lexer Lexer)
- hash() int
- equals(other LexerAction) bool
+ Hash() int
+ Equals(other LexerAction) bool
}
type BaseLexerAction struct {
@@ -51,15 +51,14 @@ func (b *BaseLexerAction) getIsPositionDependent() bool {
return b.isPositionDependent
}
-func (b *BaseLexerAction) hash() int {
+func (b *BaseLexerAction) Hash() int {
return b.actionType
}
-func (b *BaseLexerAction) equals(other LexerAction) bool {
+func (b *BaseLexerAction) Equals(other LexerAction) bool {
return b == other
}
-//
// Implements the {@code Skip} lexer action by calling {@link Lexer//Skip}.
//
//
// The {@code Skip} command does not have any parameters, so l action is
@@ -85,7 +84,8 @@ func (l *LexerSkipAction) String() string {
return "skip"
}
-// Implements the {@code type} lexer action by calling {@link Lexer//setType}
+// Implements the {@code type} lexer action by calling {@link Lexer//setType}
+//
// with the assigned type.
type LexerTypeAction struct {
*BaseLexerAction
@@ -104,14 +104,14 @@ func (l *LexerTypeAction) execute(lexer Lexer) {
lexer.SetType(l.thetype)
}
-func (l *LexerTypeAction) hash() int {
+func (l *LexerTypeAction) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, l.actionType)
h = murmurUpdate(h, l.thetype)
return murmurFinish(h, 2)
}
-func (l *LexerTypeAction) equals(other LexerAction) bool {
+func (l *LexerTypeAction) Equals(other LexerAction) bool {
if l == other {
return true
} else if _, ok := other.(*LexerTypeAction); !ok {
@@ -148,14 +148,14 @@ func (l *LexerPushModeAction) execute(lexer Lexer) {
lexer.PushMode(l.mode)
}
-func (l *LexerPushModeAction) hash() int {
+func (l *LexerPushModeAction) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, l.actionType)
h = murmurUpdate(h, l.mode)
return murmurFinish(h, 2)
}
-func (l *LexerPushModeAction) equals(other LexerAction) bool {
+func (l *LexerPushModeAction) Equals(other LexerAction) bool {
if l == other {
return true
} else if _, ok := other.(*LexerPushModeAction); !ok {
@@ -245,14 +245,14 @@ func (l *LexerModeAction) execute(lexer Lexer) {
lexer.SetMode(l.mode)
}
-func (l *LexerModeAction) hash() int {
+func (l *LexerModeAction) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, l.actionType)
h = murmurUpdate(h, l.mode)
return murmurFinish(h, 2)
}
-func (l *LexerModeAction) equals(other LexerAction) bool {
+func (l *LexerModeAction) Equals(other LexerAction) bool {
if l == other {
return true
} else if _, ok := other.(*LexerModeAction); !ok {
@@ -303,7 +303,7 @@ func (l *LexerCustomAction) execute(lexer Lexer) {
lexer.Action(nil, l.ruleIndex, l.actionIndex)
}
-func (l *LexerCustomAction) hash() int {
+func (l *LexerCustomAction) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, l.actionType)
h = murmurUpdate(h, l.ruleIndex)
@@ -311,13 +311,14 @@ func (l *LexerCustomAction) hash() int {
return murmurFinish(h, 3)
}
-func (l *LexerCustomAction) equals(other LexerAction) bool {
+func (l *LexerCustomAction) Equals(other LexerAction) bool {
if l == other {
return true
} else if _, ok := other.(*LexerCustomAction); !ok {
return false
} else {
- return l.ruleIndex == other.(*LexerCustomAction).ruleIndex && l.actionIndex == other.(*LexerCustomAction).actionIndex
+ return l.ruleIndex == other.(*LexerCustomAction).ruleIndex &&
+ l.actionIndex == other.(*LexerCustomAction).actionIndex
}
}
@@ -344,14 +345,14 @@ func (l *LexerChannelAction) execute(lexer Lexer) {
lexer.SetChannel(l.channel)
}
-func (l *LexerChannelAction) hash() int {
+func (l *LexerChannelAction) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, l.actionType)
h = murmurUpdate(h, l.channel)
return murmurFinish(h, 2)
}
-func (l *LexerChannelAction) equals(other LexerAction) bool {
+func (l *LexerChannelAction) Equals(other LexerAction) bool {
if l == other {
return true
} else if _, ok := other.(*LexerChannelAction); !ok {
@@ -412,10 +413,10 @@ func (l *LexerIndexedCustomAction) execute(lexer Lexer) {
l.lexerAction.execute(lexer)
}
-func (l *LexerIndexedCustomAction) hash() int {
+func (l *LexerIndexedCustomAction) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, l.offset)
- h = murmurUpdate(h, l.lexerAction.hash())
+ h = murmurUpdate(h, l.lexerAction.Hash())
return murmurFinish(h, 2)
}
@@ -425,6 +426,7 @@ func (l *LexerIndexedCustomAction) equals(other LexerAction) bool {
} else if _, ok := other.(*LexerIndexedCustomAction); !ok {
return false
} else {
- return l.offset == other.(*LexerIndexedCustomAction).offset && l.lexerAction == other.(*LexerIndexedCustomAction).lexerAction
+ return l.offset == other.(*LexerIndexedCustomAction).offset &&
+ l.lexerAction.Equals(other.(*LexerIndexedCustomAction).lexerAction)
}
}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_action_executor.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_action_executor.go
similarity index 88%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_action_executor.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_action_executor.go
index 056941dd6..be1ba7a7e 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_action_executor.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_action_executor.go
@@ -1,9 +1,11 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
package antlr
+import "golang.org/x/exp/slices"
+
// Represents an executor for a sequence of lexer actions which traversed during
// the Matching operation of a lexer rule (token).
//
@@ -12,8 +14,8 @@ package antlr
// not cause bloating of the {@link DFA} created for the lexer.
type LexerActionExecutor struct {
- lexerActions []LexerAction
- cachedHash int
+ lexerActions []LexerAction
+ cachedHash int
}
func NewLexerActionExecutor(lexerActions []LexerAction) *LexerActionExecutor {
@@ -30,7 +32,7 @@ func NewLexerActionExecutor(lexerActions []LexerAction) *LexerActionExecutor {
// of the performance-critical {@link LexerATNConfig//hashCode} operation.
l.cachedHash = murmurInit(57)
for _, a := range lexerActions {
- l.cachedHash = murmurUpdate(l.cachedHash, a.hash())
+ l.cachedHash = murmurUpdate(l.cachedHash, a.Hash())
}
return l
@@ -151,14 +153,17 @@ func (l *LexerActionExecutor) execute(lexer Lexer, input CharStream, startIndex
}
}
-func (l *LexerActionExecutor) hash() int {
+func (l *LexerActionExecutor) Hash() int {
if l == nil {
+ // TODO: Why is this here? l should not be nil
return 61
}
+
+ // TODO: This is created from the action itself when the struct is created - will this be an issue at some point? Java uses the runtime assign hashcode
return l.cachedHash
}
-func (l *LexerActionExecutor) equals(other interface{}) bool {
+func (l *LexerActionExecutor) Equals(other interface{}) bool {
if l == other {
return true
}
@@ -169,5 +174,13 @@ func (l *LexerActionExecutor) equals(other interface{}) bool {
if othert == nil {
return false
}
- return l.cachedHash == othert.cachedHash && &l.lexerActions == &othert.lexerActions
+ if l.cachedHash != othert.cachedHash {
+ return false
+ }
+ if len(l.lexerActions) != len(othert.lexerActions) {
+ return false
+ }
+ return slices.EqualFunc(l.lexerActions, othert.lexerActions, func(i, j LexerAction) bool {
+ return i.Equals(j)
+ })
}
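
The rewritten LexerActionExecutor.Equals is worth pausing on: the old code compared slice header addresses (&l.lexerActions == &othert.lexerActions), which never matches two independently built slices, while the new code compares element by element with slices.EqualFunc. A small stand-alone illustration of that pattern, using the standard library's slices package (Go 1.21+); the vendored code imports golang.org/x/exp/slices, which provides the same function:

```go
package main

import (
	"fmt"
	"slices"
)

// action is an illustrative stand-in for a LexerAction with a value-based Equals.
type action struct{ rule, index int }

func (a action) Equals(b action) bool { return a.rule == b.rule && a.index == b.index }

func main() {
	x := []action{{1, 0}, {2, 3}}
	y := []action{{1, 0}, {2, 3}}

	// EqualFunc walks both slices and applies Equals pairwise, which is the
	// comparison the new LexerActionExecutor.Equals actually needs; comparing
	// the slice headers' addresses says nothing about the contents.
	same := slices.EqualFunc(x, y, func(i, j action) bool { return i.Equals(j) })
	fmt.Println(same) // true
}
```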
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_atn_simulator.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_atn_simulator.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_atn_simulator.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_atn_simulator.go
index dc05153ea..c573b7521 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/lexer_atn_simulator.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/lexer_atn_simulator.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -591,19 +591,24 @@ func (l *LexerATNSimulator) addDFAState(configs ATNConfigSet, suppressEdge bool)
proposed.lexerActionExecutor = firstConfigWithRuleStopState.(*LexerATNConfig).lexerActionExecutor
proposed.setPrediction(l.atn.ruleToTokenType[firstConfigWithRuleStopState.GetState().GetRuleIndex()])
}
- hash := proposed.hash()
dfa := l.decisionToDFA[l.mode]
l.atn.stateMu.Lock()
defer l.atn.stateMu.Unlock()
- existing, ok := dfa.getState(hash)
- if ok {
+ existing, present := dfa.states.Get(proposed)
+ if present {
+
+ // This state was already present, so just return it.
+ //
proposed = existing
} else {
- proposed.stateNumber = dfa.numStates()
+
+ // We need to add the new state
+ //
+ proposed.stateNumber = dfa.states.Len()
configs.SetReadOnly(true)
proposed.configs = configs
- dfa.setState(hash, proposed)
+ dfa.states.Put(proposed)
}
if !suppressEdge {
dfa.setS0(proposed)
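
addDFAState now interns states through dfa.states.Get and dfa.states.Put instead of a manually computed hash key: look the proposed state up by value, reuse an equivalent state if one is already stored, otherwise number it and insert it. The following self-contained sketch mirrors that flow under simplifying assumptions (a string key stands in for configuration-set equality):

```go
package main

import "fmt"

// state is a hypothetical stand-in for a DFAState.
type state struct {
	key    string
	number int
}

// store sketches the Get/Put interning pattern the diff switches to.
type store struct{ byKey map[string]*state }

func newStore() *store { return &store{byKey: map[string]*state{}} }

func (s *store) Get(cand *state) (*state, bool) {
	existing, ok := s.byKey[cand.key]
	return existing, ok
}

func (s *store) Put(cand *state) { s.byKey[cand.key] = cand }

func (s *store) Len() int { return len(s.byKey) }

func addState(s *store, proposed *state) *state {
	if existing, present := s.Get(proposed); present {
		return existing // already interned: reuse, do not renumber
	}
	proposed.number = s.Len() // number new states in insertion order
	s.Put(proposed)
	return proposed
}

func main() {
	s := newStore()
	a := addState(s, &state{key: "cfgset-1"})
	b := addState(s, &state{key: "cfgset-1"})
	fmt.Println(a == b, a.number) // true 0
}
```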
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/ll1_analyzer.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/ll1_analyzer.go
similarity index 87%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/ll1_analyzer.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/ll1_analyzer.go
index 6ffb37de6..76689615a 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/ll1_analyzer.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/ll1_analyzer.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -14,14 +14,15 @@ func NewLL1Analyzer(atn *ATN) *LL1Analyzer {
return la
}
-//* Special value added to the lookahead sets to indicate that we hit
-// a predicate during analysis if {@code seeThruPreds==false}.
-///
+// - Special value added to the lookahead sets to indicate that we hit
+// a predicate during analysis if {@code seeThruPreds==false}.
+//
+// /
const (
LL1AnalyzerHitPred = TokenInvalidType
)
-//*
+// *
// Calculates the SLL(1) expected lookahead set for each outgoing transition
// of an {@link ATNState}. The returned array has one element for each
// outgoing transition in {@code s}. If the closure from transition
@@ -38,7 +39,7 @@ func (la *LL1Analyzer) getDecisionLookahead(s ATNState) []*IntervalSet {
look := make([]*IntervalSet, count)
for alt := 0; alt < count; alt++ {
look[alt] = NewIntervalSet()
- lookBusy := newArray2DHashSet(nil, nil)
+ lookBusy := NewJStore[ATNConfig, Comparator[ATNConfig]](aConfEqInst)
seeThruPreds := false // fail to get lookahead upon pred
la.look1(s.GetTransitions()[alt].getTarget(), nil, BasePredictionContextEMPTY, look[alt], lookBusy, NewBitSet(), seeThruPreds, false)
// Wipe out lookahead for la alternative if we found nothing
@@ -50,7 +51,7 @@ func (la *LL1Analyzer) getDecisionLookahead(s ATNState) []*IntervalSet {
return look
}
-//*
+// *
// Compute set of tokens that can follow {@code s} in the ATN in the
// specified {@code ctx}.
//
@@ -67,7 +68,7 @@ func (la *LL1Analyzer) getDecisionLookahead(s ATNState) []*IntervalSet {
//
// @return The set of tokens that can follow {@code s} in the ATN in the
// specified {@code ctx}.
-///
+// /
func (la *LL1Analyzer) Look(s, stopState ATNState, ctx RuleContext) *IntervalSet {
r := NewIntervalSet()
seeThruPreds := true // ignore preds get all lookahead
@@ -75,7 +76,7 @@ func (la *LL1Analyzer) Look(s, stopState ATNState, ctx RuleContext) *IntervalSet
if ctx != nil {
lookContext = predictionContextFromRuleContext(s.GetATN(), ctx)
}
- la.look1(s, stopState, lookContext, r, newArray2DHashSet(nil, nil), NewBitSet(), seeThruPreds, true)
+ la.look1(s, stopState, lookContext, r, NewJStore[ATNConfig, Comparator[ATNConfig]](aConfEqInst), NewBitSet(), seeThruPreds, true)
return r
}
@@ -109,14 +110,14 @@ func (la *LL1Analyzer) Look(s, stopState ATNState, ctx RuleContext) *IntervalSet
// outermost context is reached. This parameter has no effect if {@code ctx}
// is {@code nil}.
-func (la *LL1Analyzer) look2(s, stopState ATNState, ctx PredictionContext, look *IntervalSet, lookBusy Set, calledRuleStack *BitSet, seeThruPreds, addEOF bool, i int) {
+func (la *LL1Analyzer) look2(s, stopState ATNState, ctx PredictionContext, look *IntervalSet, lookBusy *JStore[ATNConfig, Comparator[ATNConfig]], calledRuleStack *BitSet, seeThruPreds, addEOF bool, i int) {
returnState := la.atn.states[ctx.getReturnState(i)]
la.look1(returnState, stopState, ctx.GetParent(i), look, lookBusy, calledRuleStack, seeThruPreds, addEOF)
}
-func (la *LL1Analyzer) look1(s, stopState ATNState, ctx PredictionContext, look *IntervalSet, lookBusy Set, calledRuleStack *BitSet, seeThruPreds, addEOF bool) {
+func (la *LL1Analyzer) look1(s, stopState ATNState, ctx PredictionContext, look *IntervalSet, lookBusy *JStore[ATNConfig, Comparator[ATNConfig]], calledRuleStack *BitSet, seeThruPreds, addEOF bool) {
c := NewBaseATNConfig6(s, 0, ctx)
@@ -124,8 +125,11 @@ func (la *LL1Analyzer) look1(s, stopState ATNState, ctx PredictionContext, look
return
}
- lookBusy.Add(c)
+ _, present := lookBusy.Put(c)
+ if present {
+ return
+ }
if s == stopState {
if ctx == nil {
look.addOne(TokenEpsilon)
@@ -198,7 +202,7 @@ func (la *LL1Analyzer) look1(s, stopState ATNState, ctx PredictionContext, look
}
}
-func (la *LL1Analyzer) look3(stopState ATNState, ctx PredictionContext, look *IntervalSet, lookBusy Set, calledRuleStack *BitSet, seeThruPreds, addEOF bool, t1 *RuleTransition) {
+func (la *LL1Analyzer) look3(stopState ATNState, ctx PredictionContext, look *IntervalSet, lookBusy *JStore[ATNConfig, Comparator[ATNConfig]], calledRuleStack *BitSet, seeThruPreds, addEOF bool, t1 *RuleTransition) {
newContext := SingletonBasePredictionContextCreate(ctx, t1.followState.GetStateNumber())
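
Throughout the LL(1) analyzer the lookBusy set moves from newArray2DHashSet to the generic JStore[ATNConfig, Comparator[ATNConfig]], whose Put reports whether an equivalent value was already present so the closure walk can bail out early. The sketch below only approximates that shape; the real JStore and Comparator live elsewhere in the runtime and differ in naming and detail:

```go
package main

import "fmt"

// comparator abstracts the pair of operations such a store needs:
// a hash to pick a bucket and an equality check to resolve collisions.
type comparator[T any] interface {
	Hash(t T) int
	Equals(a, b T) bool
}

// jstoreSketch is an assumption-laden sketch of the store used above:
// Put returns (stored, present) so callers can stop when a value was seen.
type jstoreSketch[T any, C comparator[T]] struct {
	cmp     C
	buckets map[int][]T
}

func newJStoreSketch[T any, C comparator[T]](cmp C) *jstoreSketch[T, C] {
	return &jstoreSketch[T, C]{cmp: cmp, buckets: map[int][]T{}}
}

func (s *jstoreSketch[T, C]) Put(v T) (T, bool) {
	h := s.cmp.Hash(v)
	for _, existing := range s.buckets[h] {
		if s.cmp.Equals(existing, v) {
			return existing, true // already present
		}
	}
	s.buckets[h] = append(s.buckets[h], v)
	return v, false
}

// intCmp is a toy comparator standing in for aConfEqInst / atnConfCompInst.
type intCmp struct{}

func (intCmp) Hash(i int) int       { return i % 8 }
func (intCmp) Equals(a, b int) bool { return a == b }

func main() {
	busy := newJStoreSketch[int, intCmp](intCmp{})
	busy.Put(42)
	_, present := busy.Put(42)
	fmt.Println(present) // true: the second visit is detected and can be skipped
}
```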
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser.go
new file mode 100644
index 000000000..d26bf0639
--- /dev/null
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser.go
@@ -0,0 +1,708 @@
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
+// Use of this file is governed by the BSD 3-clause license that
+// can be found in the LICENSE.txt file in the project root.
+
+package antlr
+
+import (
+ "fmt"
+ "strconv"
+)
+
+type Parser interface {
+ Recognizer
+
+ GetInterpreter() *ParserATNSimulator
+
+ GetTokenStream() TokenStream
+ GetTokenFactory() TokenFactory
+ GetParserRuleContext() ParserRuleContext
+ SetParserRuleContext(ParserRuleContext)
+ Consume() Token
+ GetParseListeners() []ParseTreeListener
+
+ GetErrorHandler() ErrorStrategy
+ SetErrorHandler(ErrorStrategy)
+ GetInputStream() IntStream
+ GetCurrentToken() Token
+ GetExpectedTokens() *IntervalSet
+ NotifyErrorListeners(string, Token, RecognitionException)
+ IsExpectedToken(int) bool
+ GetPrecedence() int
+ GetRuleInvocationStack(ParserRuleContext) []string
+}
+
+type BaseParser struct {
+ *BaseRecognizer
+
+ Interpreter *ParserATNSimulator
+ BuildParseTrees bool
+
+ input TokenStream
+ errHandler ErrorStrategy
+ precedenceStack IntStack
+ ctx ParserRuleContext
+
+ tracer *TraceListener
+ parseListeners []ParseTreeListener
+ _SyntaxErrors int
+}
+
+// p.is all the parsing support code essentially most of it is error
+// recovery stuff.//
+func NewBaseParser(input TokenStream) *BaseParser {
+
+ p := new(BaseParser)
+
+ p.BaseRecognizer = NewBaseRecognizer()
+
+ // The input stream.
+ p.input = nil
+ // The error handling strategy for the parser. The default value is a new
+ // instance of {@link DefaultErrorStrategy}.
+ p.errHandler = NewDefaultErrorStrategy()
+ p.precedenceStack = make([]int, 0)
+ p.precedenceStack.Push(0)
+ // The {@link ParserRuleContext} object for the currently executing rule.
+ // p.is always non-nil during the parsing process.
+ p.ctx = nil
+ // Specifies whether or not the parser should construct a parse tree during
+ // the parsing process. The default value is {@code true}.
+ p.BuildParseTrees = true
+ // When {@link //setTrace}{@code (true)} is called, a reference to the
+ // {@link TraceListener} is stored here so it can be easily removed in a
+ // later call to {@link //setTrace}{@code (false)}. The listener itself is
+ // implemented as a parser listener so p.field is not directly used by
+ // other parser methods.
+ p.tracer = nil
+ // The list of {@link ParseTreeListener} listeners registered to receive
+ // events during the parse.
+ p.parseListeners = nil
+ // The number of syntax errors Reported during parsing. p.value is
+ // incremented each time {@link //NotifyErrorListeners} is called.
+ p._SyntaxErrors = 0
+ p.SetInputStream(input)
+
+ return p
+}
+
+// p.field maps from the serialized ATN string to the deserialized {@link
+// ATN} with
+// bypass alternatives.
+//
+// @see ATNDeserializationOptions//isGenerateRuleBypassTransitions()
+var bypassAltsAtnCache = make(map[string]int)
+
+// reset the parser's state//
+func (p *BaseParser) reset() {
+ if p.input != nil {
+ p.input.Seek(0)
+ }
+ p.errHandler.reset(p)
+ p.ctx = nil
+ p._SyntaxErrors = 0
+ p.SetTrace(nil)
+ p.precedenceStack = make([]int, 0)
+ p.precedenceStack.Push(0)
+ if p.Interpreter != nil {
+ p.Interpreter.reset()
+ }
+}
+
+func (p *BaseParser) GetErrorHandler() ErrorStrategy {
+ return p.errHandler
+}
+
+func (p *BaseParser) SetErrorHandler(e ErrorStrategy) {
+ p.errHandler = e
+}
+
+// Match current input symbol against {@code ttype}. If the symbol type
+// Matches, {@link ANTLRErrorStrategy//ReportMatch} and {@link //consume} are
+// called to complete the Match process.
+//
+// If the symbol type does not Match,
+// {@link ANTLRErrorStrategy//recoverInline} is called on the current error
+// strategy to attempt recovery. If {@link //getBuildParseTree} is
+// {@code true} and the token index of the symbol returned by
+// {@link ANTLRErrorStrategy//recoverInline} is -1, the symbol is added to
+// the parse tree by calling {@link ParserRuleContext//addErrorNode}.
+//
+// @param ttype the token type to Match
+// @return the Matched symbol
+// @panics RecognitionException if the current input symbol did not Match
+// {@code ttype} and the error strategy could not recover from the
+// mismatched symbol
+
+func (p *BaseParser) Match(ttype int) Token {
+
+ t := p.GetCurrentToken()
+
+ if t.GetTokenType() == ttype {
+ p.errHandler.ReportMatch(p)
+ p.Consume()
+ } else {
+ t = p.errHandler.RecoverInline(p)
+ if p.BuildParseTrees && t.GetTokenIndex() == -1 {
+ // we must have conjured up a Newtoken during single token
+ // insertion
+ // if it's not the current symbol
+ p.ctx.AddErrorNode(t)
+ }
+ }
+
+ return t
+}
+
+// Match current input symbol as a wildcard. If the symbol type Matches
+// (i.e. has a value greater than 0), {@link ANTLRErrorStrategy//ReportMatch}
+// and {@link //consume} are called to complete the Match process.
+//
+// If the symbol type does not Match,
+// {@link ANTLRErrorStrategy//recoverInline} is called on the current error
+// strategy to attempt recovery. If {@link //getBuildParseTree} is
+// {@code true} and the token index of the symbol returned by
+// {@link ANTLRErrorStrategy//recoverInline} is -1, the symbol is added to
+// the parse tree by calling {@link ParserRuleContext//addErrorNode}.
+//
+// @return the Matched symbol
+// @panics RecognitionException if the current input symbol did not Match
+// a wildcard and the error strategy could not recover from the mismatched
+// symbol
+
+func (p *BaseParser) MatchWildcard() Token {
+ t := p.GetCurrentToken()
+ if t.GetTokenType() > 0 {
+ p.errHandler.ReportMatch(p)
+ p.Consume()
+ } else {
+ t = p.errHandler.RecoverInline(p)
+ if p.BuildParseTrees && t.GetTokenIndex() == -1 {
+ // we must have conjured up a Newtoken during single token
+ // insertion
+ // if it's not the current symbol
+ p.ctx.AddErrorNode(t)
+ }
+ }
+ return t
+}
+
+func (p *BaseParser) GetParserRuleContext() ParserRuleContext {
+ return p.ctx
+}
+
+func (p *BaseParser) SetParserRuleContext(v ParserRuleContext) {
+ p.ctx = v
+}
+
+func (p *BaseParser) GetParseListeners() []ParseTreeListener {
+ if p.parseListeners == nil {
+ return make([]ParseTreeListener, 0)
+ }
+ return p.parseListeners
+}
+
+// Registers {@code listener} to receive events during the parsing process.
+//
+// To support output-preserving grammar transformations (including but not
+// limited to left-recursion removal, automated left-factoring, and
+// optimized code generation), calls to listener methods during the parse
+// may differ substantially from calls made by
+// {@link ParseTreeWalker//DEFAULT} used after the parse is complete. In
+// particular, rule entry and exit events may occur in a different order
+// during the parse than after the parser. In addition, calls to certain
+// rule entry methods may be omitted.
+//
+// With the following specific exceptions, calls to listener events are
+// deterministic, i.e. for identical input the calls to listener
+// methods will be the same.
+//
+//   - Alterations to the grammar used to generate code may change the
+//     behavior of the listener calls.
+//   - Alterations to the command line options passed to ANTLR 4 when
+//     generating the parser may change the behavior of the listener calls.
+//   - Changing the version of the ANTLR Tool used to generate the parser
+//     may change the behavior of the listener calls.
+//
+//
+// @param listener the listener to add
+//
+// @panics nilPointerException if {@code} listener is {@code nil}
+func (p *BaseParser) AddParseListener(listener ParseTreeListener) {
+ if listener == nil {
+ panic("listener")
+ }
+ if p.parseListeners == nil {
+ p.parseListeners = make([]ParseTreeListener, 0)
+ }
+ p.parseListeners = append(p.parseListeners, listener)
+}
+
+// Remove {@code listener} from the list of parse listeners.
+//
+// If {@code listener} is {@code nil} or has not been added as a parse
+// listener, p.method does nothing.
+// @param listener the listener to remove
+func (p *BaseParser) RemoveParseListener(listener ParseTreeListener) {
+
+ if p.parseListeners != nil {
+
+ idx := -1
+ for i, v := range p.parseListeners {
+ if v == listener {
+ idx = i
+ break
+ }
+ }
+
+ if idx == -1 {
+ return
+ }
+
+ // remove the listener from the slice
+ p.parseListeners = append(p.parseListeners[0:idx], p.parseListeners[idx+1:]...)
+
+ if len(p.parseListeners) == 0 {
+ p.parseListeners = nil
+ }
+ }
+}
+
+// Remove all parse listeners.
+func (p *BaseParser) removeParseListeners() {
+ p.parseListeners = nil
+}
+
+// Notify any parse listeners of an enter rule event.
+func (p *BaseParser) TriggerEnterRuleEvent() {
+ if p.parseListeners != nil {
+ ctx := p.ctx
+ for _, listener := range p.parseListeners {
+ listener.EnterEveryRule(ctx)
+ ctx.EnterRule(listener)
+ }
+ }
+}
+
+// Notify any parse listeners of an exit rule event.
+//
+// @see //addParseListener
+func (p *BaseParser) TriggerExitRuleEvent() {
+ if p.parseListeners != nil {
+ // reverse order walk of listeners
+ ctx := p.ctx
+ l := len(p.parseListeners) - 1
+
+ for i := range p.parseListeners {
+ listener := p.parseListeners[l-i]
+ ctx.ExitRule(listener)
+ listener.ExitEveryRule(ctx)
+ }
+ }
+}
+
+func (p *BaseParser) GetInterpreter() *ParserATNSimulator {
+ return p.Interpreter
+}
+
+func (p *BaseParser) GetATN() *ATN {
+ return p.Interpreter.atn
+}
+
+func (p *BaseParser) GetTokenFactory() TokenFactory {
+ return p.input.GetTokenSource().GetTokenFactory()
+}
+
+// Tell our token source and error strategy about a Newway to create tokens.//
+func (p *BaseParser) setTokenFactory(factory TokenFactory) {
+ p.input.GetTokenSource().setTokenFactory(factory)
+}
+
+// The ATN with bypass alternatives is expensive to create so we create it
+// lazily.
+//
+// @panics UnsupportedOperationException if the current parser does not
+// implement the {@link //getSerializedATN()} method.
+func (p *BaseParser) GetATNWithBypassAlts() {
+
+ // TODO
+ panic("Not implemented!")
+
+ // serializedAtn := p.getSerializedATN()
+ // if (serializedAtn == nil) {
+ // panic("The current parser does not support an ATN with bypass alternatives.")
+ // }
+ // result := p.bypassAltsAtnCache[serializedAtn]
+ // if (result == nil) {
+ // deserializationOptions := NewATNDeserializationOptions(nil)
+ // deserializationOptions.generateRuleBypassTransitions = true
+ // result = NewATNDeserializer(deserializationOptions).deserialize(serializedAtn)
+ // p.bypassAltsAtnCache[serializedAtn] = result
+ // }
+ // return result
+}
+
+// The preferred method of getting a tree pattern. For example, here's a
+// sample use:
+//
+//
+// ParseTree t = parser.expr()
+// ParseTreePattern p = parser.compileParseTreePattern("<ID>+0",
+// MyParser.RULE_expr)
+// ParseTreeMatch m = p.Match(t)
+// String id = m.Get("ID")
+//
+
+func (p *BaseParser) compileParseTreePattern(pattern, patternRuleIndex, lexer Lexer) {
+
+ panic("NewParseTreePatternMatcher not implemented!")
+ //
+ // if (lexer == nil) {
+ // if (p.GetTokenStream() != nil) {
+ // tokenSource := p.GetTokenStream().GetTokenSource()
+ // if _, ok := tokenSource.(ILexer); ok {
+ // lexer = tokenSource
+ // }
+ // }
+ // }
+ // if (lexer == nil) {
+ // panic("Parser can't discover a lexer to use")
+ // }
+
+ // m := NewParseTreePatternMatcher(lexer, p)
+ // return m.compile(pattern, patternRuleIndex)
+}
+
+func (p *BaseParser) GetInputStream() IntStream {
+ return p.GetTokenStream()
+}
+
+func (p *BaseParser) SetInputStream(input TokenStream) {
+ p.SetTokenStream(input)
+}
+
+func (p *BaseParser) GetTokenStream() TokenStream {
+ return p.input
+}
+
+// Set the token stream and reset the parser.//
+func (p *BaseParser) SetTokenStream(input TokenStream) {
+ p.input = nil
+ p.reset()
+ p.input = input
+}
+
+// Match needs to return the current input symbol, which gets put
+// into the label for the associated token ref e.g., x=ID.
+func (p *BaseParser) GetCurrentToken() Token {
+ return p.input.LT(1)
+}
+
+func (p *BaseParser) NotifyErrorListeners(msg string, offendingToken Token, err RecognitionException) {
+ if offendingToken == nil {
+ offendingToken = p.GetCurrentToken()
+ }
+ p._SyntaxErrors++
+ line := offendingToken.GetLine()
+ column := offendingToken.GetColumn()
+ listener := p.GetErrorListenerDispatch()
+ listener.SyntaxError(p, offendingToken, line, column, msg, err)
+}
+
+func (p *BaseParser) Consume() Token {
+ o := p.GetCurrentToken()
+ if o.GetTokenType() != TokenEOF {
+ p.GetInputStream().Consume()
+ }
+ hasListener := p.parseListeners != nil && len(p.parseListeners) > 0
+ if p.BuildParseTrees || hasListener {
+ if p.errHandler.InErrorRecoveryMode(p) {
+ node := p.ctx.AddErrorNode(o)
+ if p.parseListeners != nil {
+ for _, l := range p.parseListeners {
+ l.VisitErrorNode(node)
+ }
+ }
+
+ } else {
+ node := p.ctx.AddTokenNode(o)
+ if p.parseListeners != nil {
+ for _, l := range p.parseListeners {
+ l.VisitTerminal(node)
+ }
+ }
+ }
+ // node.invokingState = p.state
+ }
+
+ return o
+}
+
+func (p *BaseParser) addContextToParseTree() {
+ // add current context to parent if we have a parent
+ if p.ctx.GetParent() != nil {
+ p.ctx.GetParent().(ParserRuleContext).AddChild(p.ctx)
+ }
+}
+
+func (p *BaseParser) EnterRule(localctx ParserRuleContext, state, ruleIndex int) {
+ p.SetState(state)
+ p.ctx = localctx
+ p.ctx.SetStart(p.input.LT(1))
+ if p.BuildParseTrees {
+ p.addContextToParseTree()
+ }
+ if p.parseListeners != nil {
+ p.TriggerEnterRuleEvent()
+ }
+}
+
+func (p *BaseParser) ExitRule() {
+ p.ctx.SetStop(p.input.LT(-1))
+ // trigger event on ctx, before it reverts to parent
+ if p.parseListeners != nil {
+ p.TriggerExitRuleEvent()
+ }
+ p.SetState(p.ctx.GetInvokingState())
+ if p.ctx.GetParent() != nil {
+ p.ctx = p.ctx.GetParent().(ParserRuleContext)
+ } else {
+ p.ctx = nil
+ }
+}
+
+func (p *BaseParser) EnterOuterAlt(localctx ParserRuleContext, altNum int) {
+ localctx.SetAltNumber(altNum)
+ // if we have Newlocalctx, make sure we replace existing ctx
+ // that is previous child of parse tree
+ if p.BuildParseTrees && p.ctx != localctx {
+ if p.ctx.GetParent() != nil {
+ p.ctx.GetParent().(ParserRuleContext).RemoveLastChild()
+ p.ctx.GetParent().(ParserRuleContext).AddChild(localctx)
+ }
+ }
+ p.ctx = localctx
+}
+
+// Get the precedence level for the top-most precedence rule.
+//
+// @return The precedence level for the top-most precedence rule, or -1 if
+// the parser context is not nested within a precedence rule.
+
+func (p *BaseParser) GetPrecedence() int {
+ if len(p.precedenceStack) == 0 {
+ return -1
+ }
+
+ return p.precedenceStack[len(p.precedenceStack)-1]
+}
+
+func (p *BaseParser) EnterRecursionRule(localctx ParserRuleContext, state, ruleIndex, precedence int) {
+ p.SetState(state)
+ p.precedenceStack.Push(precedence)
+ p.ctx = localctx
+ p.ctx.SetStart(p.input.LT(1))
+ if p.parseListeners != nil {
+ p.TriggerEnterRuleEvent() // simulates rule entry for
+ // left-recursive rules
+ }
+}
+
+//
+// Like {@link //EnterRule} but for recursive rules.
+
+func (p *BaseParser) PushNewRecursionContext(localctx ParserRuleContext, state, ruleIndex int) {
+ previous := p.ctx
+ previous.SetParent(localctx)
+ previous.SetInvokingState(state)
+ previous.SetStop(p.input.LT(-1))
+
+ p.ctx = localctx
+ p.ctx.SetStart(previous.GetStart())
+ if p.BuildParseTrees {
+ p.ctx.AddChild(previous)
+ }
+ if p.parseListeners != nil {
+ p.TriggerEnterRuleEvent() // simulates rule entry for
+ // left-recursive rules
+ }
+}
+
+func (p *BaseParser) UnrollRecursionContexts(parentCtx ParserRuleContext) {
+ p.precedenceStack.Pop()
+ p.ctx.SetStop(p.input.LT(-1))
+ retCtx := p.ctx // save current ctx (return value)
+ // unroll so ctx is as it was before call to recursive method
+ if p.parseListeners != nil {
+ for p.ctx != parentCtx {
+ p.TriggerExitRuleEvent()
+ p.ctx = p.ctx.GetParent().(ParserRuleContext)
+ }
+ } else {
+ p.ctx = parentCtx
+ }
+ // hook into tree
+ retCtx.SetParent(parentCtx)
+ if p.BuildParseTrees && parentCtx != nil {
+ // add return ctx into invoking rule's tree
+ parentCtx.AddChild(retCtx)
+ }
+}
+
+func (p *BaseParser) GetInvokingContext(ruleIndex int) ParserRuleContext {
+ ctx := p.ctx
+ for ctx != nil {
+ if ctx.GetRuleIndex() == ruleIndex {
+ return ctx
+ }
+ ctx = ctx.GetParent().(ParserRuleContext)
+ }
+ return nil
+}
+
+func (p *BaseParser) Precpred(localctx RuleContext, precedence int) bool {
+ return precedence >= p.precedenceStack[len(p.precedenceStack)-1]
+}
+
+func (p *BaseParser) inContext(context ParserRuleContext) bool {
+ // TODO: useful in parser?
+ return false
+}
+
+//
+// Checks whether or not {@code symbol} can follow the current state in the
+// ATN. The behavior of p.method is equivalent to the following, but is
+// implemented such that the complete context-sensitive follow set does not
+// need to be explicitly constructed.
+//
+//
+//
+// @param symbol the symbol type to check
+// @return {@code true} if {@code symbol} can follow the current state in
+// the ATN, otherwise {@code false}.
+
+func (p *BaseParser) IsExpectedToken(symbol int) bool {
+ atn := p.Interpreter.atn
+ ctx := p.ctx
+ s := atn.states[p.state]
+ following := atn.NextTokens(s, nil)
+ if following.contains(symbol) {
+ return true
+ }
+ if !following.contains(TokenEpsilon) {
+ return false
+ }
+ for ctx != nil && ctx.GetInvokingState() >= 0 && following.contains(TokenEpsilon) {
+ invokingState := atn.states[ctx.GetInvokingState()]
+ rt := invokingState.GetTransitions()[0]
+ following = atn.NextTokens(rt.(*RuleTransition).followState, nil)
+ if following.contains(symbol) {
+ return true
+ }
+ ctx = ctx.GetParent().(ParserRuleContext)
+ }
+ if following.contains(TokenEpsilon) && symbol == TokenEOF {
+ return true
+ }
+
+ return false
+}
+
+// Computes the set of input symbols which could follow the current parser
+// state and context, as given by {@link //GetState} and {@link //GetContext},
+// respectively.
+//
+// @see ATN//getExpectedTokens(int, RuleContext)
+func (p *BaseParser) GetExpectedTokens() *IntervalSet {
+ return p.Interpreter.atn.getExpectedTokens(p.state, p.ctx)
+}
+
+func (p *BaseParser) GetExpectedTokensWithinCurrentRule() *IntervalSet {
+ atn := p.Interpreter.atn
+ s := atn.states[p.state]
+ return atn.NextTokens(s, nil)
+}
+
+// Get a rule's index (i.e., {@code RULE_ruleName} field) or -1 if not found.//
+func (p *BaseParser) GetRuleIndex(ruleName string) int {
+ var ruleIndex, ok = p.GetRuleIndexMap()[ruleName]
+ if ok {
+ return ruleIndex
+ }
+
+ return -1
+}
+
+// Return List<String> of the rule names in your parser instance
+// leading up to a call to the current rule. You could override if
+// you want more details such as the file/line info of where
+// in the ATN a rule is invoked.
+//
+// this very useful for error messages.
+
+func (p *BaseParser) GetRuleInvocationStack(c ParserRuleContext) []string {
+ if c == nil {
+ c = p.ctx
+ }
+ stack := make([]string, 0)
+ for c != nil {
+ // compute what follows who invoked us
+ ruleIndex := c.GetRuleIndex()
+ if ruleIndex < 0 {
+ stack = append(stack, "n/a")
+ } else {
+ stack = append(stack, p.GetRuleNames()[ruleIndex])
+ }
+
+ vp := c.GetParent()
+
+ if vp == nil {
+ break
+ }
+
+ c = vp.(ParserRuleContext)
+ }
+ return stack
+}
+
+// For debugging and other purposes.//
+func (p *BaseParser) GetDFAStrings() string {
+ return fmt.Sprint(p.Interpreter.decisionToDFA)
+}
+
+// For debugging and other purposes.//
+func (p *BaseParser) DumpDFA() {
+ seenOne := false
+ for _, dfa := range p.Interpreter.decisionToDFA {
+ if dfa.states.Len() > 0 {
+ if seenOne {
+ fmt.Println()
+ }
+ fmt.Println("Decision " + strconv.Itoa(dfa.decision) + ":")
+ fmt.Print(dfa.String(p.LiteralNames, p.SymbolicNames))
+ seenOne = true
+ }
+ }
+}
+
+func (p *BaseParser) GetSourceName() string {
+ return p.GrammarFileName
+}
+
+// During a parse is sometimes useful to listen in on the rule entry and exit
+// events as well as token Matches. p.is for quick and dirty debugging.
+func (p *BaseParser) SetTrace(trace *TraceListener) {
+ if trace == nil {
+ p.RemoveParseListener(p.tracer)
+ p.tracer = nil
+ } else {
+ if p.tracer != nil {
+ p.RemoveParseListener(p.tracer)
+ }
+ p.tracer = NewTraceListener(p)
+ p.AddParseListener(p.tracer)
+ }
+}
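
That closes the newly added parser.go. One detail of the listener plumbing above is easy to miss: TriggerEnterRuleEvent notifies parse listeners in registration order, while TriggerExitRuleEvent walks them in reverse, so paired enter/exit callbacks nest the same way the rule invocations do. A small stand-alone sketch of that dispatch order (listener and tagged are illustrative types, not part of the runtime):

```go
package main

import "fmt"

// listener is a pared-down stand-in for ParseTreeListener with just the two
// rule-boundary callbacks the parser dispatches.
type listener interface {
	EnterEveryRule(rule string)
	ExitEveryRule(rule string)
}

type tagged struct{ name string }

func (t tagged) EnterEveryRule(rule string) { fmt.Println(t.name, "enter", rule) }
func (t tagged) ExitEveryRule(rule string)  { fmt.Println(t.name, "exit", rule) }

// Enter events run in registration order, exit events in reverse.
func triggerEnter(ls []listener, rule string) {
	for _, l := range ls {
		l.EnterEveryRule(rule)
	}
}

func triggerExit(ls []listener, rule string) {
	for i := len(ls) - 1; i >= 0; i-- {
		ls[i].ExitEveryRule(rule)
	}
}

func main() {
	ls := []listener{tagged{"A"}, tagged{"B"}}
	triggerEnter(ls, "expr") // A enter expr, B enter expr
	triggerExit(ls, "expr")  // B exit expr, A exit expr
}
```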
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser_atn_simulator.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser_atn_simulator.go
similarity index 94%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser_atn_simulator.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser_atn_simulator.go
index 888d51297..8bcc46a0d 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser_atn_simulator.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser_atn_simulator.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -11,11 +11,11 @@ import (
)
var (
- ParserATNSimulatorDebug = false
- ParserATNSimulatorListATNDecisions = false
- ParserATNSimulatorDFADebug = false
- ParserATNSimulatorRetryDebug = false
- TurnOffLRLoopEntryBranchOpt = false
+ ParserATNSimulatorDebug = false
+ ParserATNSimulatorTraceATNSim = false
+ ParserATNSimulatorDFADebug = false
+ ParserATNSimulatorRetryDebug = false
+ TurnOffLRLoopEntryBranchOpt = false
)
type ParserATNSimulator struct {
@@ -70,8 +70,8 @@ func (p *ParserATNSimulator) reset() {
}
func (p *ParserATNSimulator) AdaptivePredict(input TokenStream, decision int, outerContext ParserRuleContext) int {
- if ParserATNSimulatorDebug || ParserATNSimulatorListATNDecisions {
- fmt.Println("AdaptivePredict decision " + strconv.Itoa(decision) +
+ if ParserATNSimulatorDebug || ParserATNSimulatorTraceATNSim {
+ fmt.Println("adaptivePredict decision " + strconv.Itoa(decision) +
" exec LA(1)==" + p.getLookaheadName(input) +
" line " + strconv.Itoa(input.LT(1).GetLine()) + ":" +
strconv.Itoa(input.LT(1).GetColumn()))
@@ -111,15 +111,15 @@ func (p *ParserATNSimulator) AdaptivePredict(input TokenStream, decision int, ou
if s0 == nil {
if outerContext == nil {
- outerContext = RuleContextEmpty
+ outerContext = ParserRuleContextEmpty
}
- if ParserATNSimulatorDebug || ParserATNSimulatorListATNDecisions {
+ if ParserATNSimulatorDebug {
fmt.Println("predictATN decision " + strconv.Itoa(dfa.decision) +
" exec LA(1)==" + p.getLookaheadName(input) +
", outerContext=" + outerContext.String(p.parser.GetRuleNames(), nil))
}
fullCtx := false
- s0Closure := p.computeStartState(dfa.atnStartState, RuleContextEmpty, fullCtx)
+ s0Closure := p.computeStartState(dfa.atnStartState, ParserRuleContextEmpty, fullCtx)
p.atn.stateMu.Lock()
if dfa.getPrecedenceDfa() {
@@ -174,17 +174,18 @@ func (p *ParserATNSimulator) AdaptivePredict(input TokenStream, decision int, ou
// Reporting insufficient predicates
// cover these cases:
-// dead end
-// single alt
-// single alt + preds
-// conflict
-// conflict + preds
//
+// dead end
+// single alt
+// single alt + preds
+// conflict
+// conflict + preds
func (p *ParserATNSimulator) execATN(dfa *DFA, s0 *DFAState, input TokenStream, startIndex int, outerContext ParserRuleContext) int {
- if ParserATNSimulatorDebug || ParserATNSimulatorListATNDecisions {
+ if ParserATNSimulatorDebug || ParserATNSimulatorTraceATNSim {
fmt.Println("execATN decision " + strconv.Itoa(dfa.decision) +
- " exec LA(1)==" + p.getLookaheadName(input) +
+ ", DFA state " + s0.String() +
+ ", LA(1)==" + p.getLookaheadName(input) +
" line " + strconv.Itoa(input.LT(1).GetLine()) + ":" + strconv.Itoa(input.LT(1).GetColumn()))
}
@@ -277,8 +278,6 @@ func (p *ParserATNSimulator) execATN(dfa *DFA, s0 *DFAState, input TokenStream,
t = input.LA(1)
}
}
-
- panic("Should not have reached p state")
}
// Get an existing target state for an edge in the DFA. If the target state
@@ -384,7 +383,7 @@ func (p *ParserATNSimulator) predicateDFAState(dfaState *DFAState, decisionState
// comes back with reach.uniqueAlt set to a valid alt
func (p *ParserATNSimulator) execATNWithFullContext(dfa *DFA, D *DFAState, s0 ATNConfigSet, input TokenStream, startIndex int, outerContext ParserRuleContext) int {
- if ParserATNSimulatorDebug || ParserATNSimulatorListATNDecisions {
+ if ParserATNSimulatorDebug || ParserATNSimulatorTraceATNSim {
fmt.Println("execATNWithFullContext " + s0.String())
}
@@ -492,9 +491,6 @@ func (p *ParserATNSimulator) execATNWithFullContext(dfa *DFA, D *DFAState, s0 AT
}
func (p *ParserATNSimulator) computeReachSet(closure ATNConfigSet, t int, fullCtx bool) ATNConfigSet {
- if ParserATNSimulatorDebug {
- fmt.Println("in computeReachSet, starting closure: " + closure.String())
- }
if p.mergeCache == nil {
p.mergeCache = NewDoubleDict()
}
@@ -570,7 +566,7 @@ func (p *ParserATNSimulator) computeReachSet(closure ATNConfigSet, t int, fullCt
//
if reach == nil {
reach = NewBaseATNConfigSet(fullCtx)
- closureBusy := newArray2DHashSet(nil, nil)
+ closureBusy := NewJStore[ATNConfig, Comparator[ATNConfig]](aConfEqInst)
treatEOFAsEpsilon := t == TokenEOF
amount := len(intermediate.configs)
for k := 0; k < amount; k++ {
@@ -610,6 +606,11 @@ func (p *ParserATNSimulator) computeReachSet(closure ATNConfigSet, t int, fullCt
reach.Add(skippedStopStates[l], p.mergeCache)
}
}
+
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("computeReachSet " + closure.String() + " -> " + reach.String())
+ }
+
if len(reach.GetItems()) == 0 {
return nil
}
@@ -617,7 +618,6 @@ func (p *ParserATNSimulator) computeReachSet(closure ATNConfigSet, t int, fullCt
return reach
}
-//
// Return a configuration set containing only the configurations from
// {@code configs} which are in a {@link RuleStopState}. If all
// configurations in {@code configs} are already in a rule stop state, p
@@ -636,7 +636,6 @@ func (p *ParserATNSimulator) computeReachSet(closure ATNConfigSet, t int, fullCt
// @return {@code configs} if all configurations in {@code configs} are in a
// rule stop state, otherwise return a Newconfiguration set containing only
// the configurations from {@code configs} which are in a rule stop state
-//
func (p *ParserATNSimulator) removeAllConfigsNotInRuleStopState(configs ATNConfigSet, lookToEndOfRule bool) ATNConfigSet {
if PredictionModeallConfigsInRuleStopStates(configs) {
return configs
@@ -662,16 +661,20 @@ func (p *ParserATNSimulator) computeStartState(a ATNState, ctx RuleContext, full
// always at least the implicit call to start rule
initialContext := predictionContextFromRuleContext(p.atn, ctx)
configs := NewBaseATNConfigSet(fullCtx)
+ if ParserATNSimulatorDebug || ParserATNSimulatorTraceATNSim {
+ fmt.Println("computeStartState from ATN state " + a.String() +
+ " initialContext=" + initialContext.String())
+ }
+
for i := 0; i < len(a.GetTransitions()); i++ {
target := a.GetTransitions()[i].getTarget()
c := NewBaseATNConfig6(target, i+1, initialContext)
- closureBusy := newArray2DHashSet(nil, nil)
+ closureBusy := NewJStore[ATNConfig, Comparator[ATNConfig]](atnConfCompInst)
p.closure(c, configs, closureBusy, true, fullCtx, false)
}
return configs
}
-//
// This method transforms the start state computed by
// {@link //computeStartState} to the special start state used by a
// precedence DFA for a particular precedence value. The transformation
@@ -726,7 +729,6 @@ func (p *ParserATNSimulator) computeStartState(a ATNState, ctx RuleContext, full
// @return The transformed configuration set representing the start state
// for a precedence DFA at a particular precedence level (determined by
// calling {@link Parser//getPrecedence}).
-//
func (p *ParserATNSimulator) applyPrecedenceFilter(configs ATNConfigSet) ATNConfigSet {
statesFromAlt1 := make(map[int]PredictionContext)
@@ -760,7 +762,7 @@ func (p *ParserATNSimulator) applyPrecedenceFilter(configs ATNConfigSet) ATNConf
// (basically a graph subtraction algorithm).
if !config.getPrecedenceFilterSuppressed() {
context := statesFromAlt1[config.GetState().GetStateNumber()]
- if context != nil && context.equals(config.GetContext()) {
+ if context != nil && context.Equals(config.GetContext()) {
// eliminated
continue
}
@@ -824,7 +826,6 @@ func (p *ParserATNSimulator) getPredicatePredictions(ambigAlts *BitSet, altToPre
return pairs
}
-//
// This method is used to improve the localization of error messages by
// choosing an alternative rather than panicing a
// {@link NoViableAltException} in particular prediction scenarios where the
@@ -869,7 +870,6 @@ func (p *ParserATNSimulator) getPredicatePredictions(ambigAlts *BitSet, altToPre
// @return The value to return from {@link //AdaptivePredict}, or
// {@link ATN//INVALID_ALT_NUMBER} if a suitable alternative was not
// identified and {@link //AdaptivePredict} should Report an error instead.
-//
func (p *ParserATNSimulator) getSynValidOrSemInvalidAltThatFinishedDecisionEntryRule(configs ATNConfigSet, outerContext ParserRuleContext) int {
cfgs := p.splitAccordingToSemanticValidity(configs, outerContext)
semValidConfigs := cfgs[0]
@@ -938,11 +938,11 @@ func (p *ParserATNSimulator) splitAccordingToSemanticValidity(configs ATNConfigS
}
// Look through a list of predicate/alt pairs, returning alts for the
-// pairs that win. A {@code NONE} predicate indicates an alt containing an
-// unpredicated config which behaves as "always true." If !complete
-// then we stop at the first predicate that evaluates to true. This
-// includes pairs with nil predicates.
//
+// pairs that win. A {@code NONE} predicate indicates an alt containing an
+// unpredicated config which behaves as "always true." If !complete
+// then we stop at the first predicate that evaluates to true. This
+// includes pairs with nil predicates.
func (p *ParserATNSimulator) evalSemanticContext(predPredictions []*PredPrediction, outerContext ParserRuleContext, complete bool) *BitSet {
predictions := NewBitSet()
for i := 0; i < len(predPredictions); i++ {
@@ -972,16 +972,16 @@ func (p *ParserATNSimulator) evalSemanticContext(predPredictions []*PredPredicti
return predictions
}
-func (p *ParserATNSimulator) closure(config ATNConfig, configs ATNConfigSet, closureBusy Set, collectPredicates, fullCtx, treatEOFAsEpsilon bool) {
+func (p *ParserATNSimulator) closure(config ATNConfig, configs ATNConfigSet, closureBusy *JStore[ATNConfig, Comparator[ATNConfig]], collectPredicates, fullCtx, treatEOFAsEpsilon bool) {
initialDepth := 0
p.closureCheckingStopState(config, configs, closureBusy, collectPredicates,
fullCtx, initialDepth, treatEOFAsEpsilon)
}
-func (p *ParserATNSimulator) closureCheckingStopState(config ATNConfig, configs ATNConfigSet, closureBusy Set, collectPredicates, fullCtx bool, depth int, treatEOFAsEpsilon bool) {
- if ParserATNSimulatorDebug {
+func (p *ParserATNSimulator) closureCheckingStopState(config ATNConfig, configs ATNConfigSet, closureBusy *JStore[ATNConfig, Comparator[ATNConfig]], collectPredicates, fullCtx bool, depth int, treatEOFAsEpsilon bool) {
+ if ParserATNSimulatorTraceATNSim {
fmt.Println("closure(" + config.String() + ")")
- fmt.Println("configs(" + configs.String() + ")")
+ //fmt.Println("configs(" + configs.String() + ")")
if config.GetReachesIntoOuterContext() > 50 {
panic("problem")
}
@@ -1031,7 +1031,7 @@ func (p *ParserATNSimulator) closureCheckingStopState(config ATNConfig, configs
}
// Do the actual work of walking epsilon edges//
-func (p *ParserATNSimulator) closureWork(config ATNConfig, configs ATNConfigSet, closureBusy Set, collectPredicates, fullCtx bool, depth int, treatEOFAsEpsilon bool) {
+func (p *ParserATNSimulator) closureWork(config ATNConfig, configs ATNConfigSet, closureBusy *JStore[ATNConfig, Comparator[ATNConfig]], collectPredicates, fullCtx bool, depth int, treatEOFAsEpsilon bool) {
state := config.GetState()
// optimization
if !state.GetEpsilonOnlyTransitions() {
@@ -1066,7 +1066,8 @@ func (p *ParserATNSimulator) closureWork(config ATNConfig, configs ATNConfigSet,
c.SetReachesIntoOuterContext(c.GetReachesIntoOuterContext() + 1)
- if closureBusy.Add(c) != c {
+ _, present := closureBusy.Put(c)
+ if present {
// avoid infinite recursion for right-recursive rules
continue
}
@@ -1077,9 +1078,13 @@ func (p *ParserATNSimulator) closureWork(config ATNConfig, configs ATNConfigSet,
fmt.Println("dips into outer ctx: " + c.String())
}
} else {
- if !t.getIsEpsilon() && closureBusy.Add(c) != c {
- // avoid infinite recursion for EOF* and EOF+
- continue
+
+ if !t.getIsEpsilon() {
+ _, present := closureBusy.Put(c)
+ if present {
+ // avoid infinite recursion for EOF* and EOF+
+ continue
+ }
}
if _, ok := t.(*RuleTransition); ok {
// latch when newDepth goes negative - once we step out of the entry context we can't return
@@ -1104,7 +1109,16 @@ func (p *ParserATNSimulator) canDropLoopEntryEdgeInLeftRecursiveRule(config ATNC
// left-recursion elimination. For efficiency, also check if
// the context has an empty stack case. If so, it would mean
// global FOLLOW so we can't perform optimization
- if startLoop, ok := _p.(StarLoopEntryState); !ok || !startLoop.precedenceRuleDecision || config.GetContext().isEmpty() || config.GetContext().hasEmptyPath() {
+ if _p.GetStateType() != ATNStateStarLoopEntry {
+ return false
+ }
+ startLoop, ok := _p.(*StarLoopEntryState)
+ if !ok {
+ return false
+ }
+ if !startLoop.precedenceRuleDecision ||
+ config.GetContext().isEmpty() ||
+ config.GetContext().hasEmptyPath() {
return false
}
@@ -1117,8 +1131,8 @@ func (p *ParserATNSimulator) canDropLoopEntryEdgeInLeftRecursiveRule(config ATNC
return false
}
}
-
- decisionStartState := _p.(BlockStartState).GetTransitions()[0].getTarget().(BlockStartState)
+ x := _p.GetTransitions()[0].getTarget()
+ decisionStartState := x.(BlockStartState)
blockEndStateNum := decisionStartState.getEndState().stateNumber
blockEndState := p.atn.states[blockEndStateNum].(*BlockEndState)
@@ -1355,13 +1369,12 @@ func (p *ParserATNSimulator) GetTokenName(t int) string {
return "EOF"
}
- if p.parser != nil && p.parser.GetLiteralNames() != nil {
- if t >= len(p.parser.GetLiteralNames()) {
- fmt.Println(strconv.Itoa(t) + " ttype out of range: " + strings.Join(p.parser.GetLiteralNames(), ","))
- // fmt.Println(p.parser.GetInputStream().(TokenStream).GetAllText()) // p seems incorrect
- } else {
- return p.parser.GetLiteralNames()[t] + "<" + strconv.Itoa(t) + ">"
- }
+ if p.parser != nil && p.parser.GetLiteralNames() != nil && t < len(p.parser.GetLiteralNames()) {
+ return p.parser.GetLiteralNames()[t] + "<" + strconv.Itoa(t) + ">"
+ }
+
+ if p.parser != nil && p.parser.GetLiteralNames() != nil && t < len(p.parser.GetSymbolicNames()) {
+ return p.parser.GetSymbolicNames()[t] + "<" + strconv.Itoa(t) + ">"
}
return strconv.Itoa(t)
@@ -1372,9 +1385,9 @@ func (p *ParserATNSimulator) getLookaheadName(input TokenStream) string {
}
// Used for debugging in AdaptivePredict around execATN but I cut
-// it out for clarity now that alg. works well. We can leave p
-// "dead" code for a bit.
//
+// it out for clarity now that alg. works well. We can leave p
+// "dead" code for a bit.
func (p *ParserATNSimulator) dumpDeadEndConfigs(nvae *NoViableAltException) {
panic("Not implemented")
@@ -1421,7 +1434,6 @@ func (p *ParserATNSimulator) getUniqueAlt(configs ATNConfigSet) int {
return alt
}
-//
// Add an edge to the DFA, if possible. This method calls
// {@link //addDFAState} to ensure the {@code to} state is present in the
// DFA. If {@code from} is {@code nil}, or if {@code t} is outside the
@@ -1440,7 +1452,6 @@ func (p *ParserATNSimulator) getUniqueAlt(configs ATNConfigSet) int {
// @return If {@code to} is {@code nil}, p method returns {@code nil}
// otherwise p method returns the result of calling {@link //addDFAState}
// on {@code to}
-//
func (p *ParserATNSimulator) addDFAEdge(dfa *DFA, from *DFAState, t int, to *DFAState) *DFAState {
if ParserATNSimulatorDebug {
fmt.Println("EDGE " + from.String() + " -> " + to.String() + " upon " + p.GetTokenName(t))
@@ -1472,7 +1483,6 @@ func (p *ParserATNSimulator) addDFAEdge(dfa *DFA, from *DFAState, t int, to *DFA
return to
}
-//
// Add state {@code D} to the DFA if it is not already present, and return
// the actual instance stored in the DFA. If a state equivalent to {@code D}
// is already in the DFA, the existing state is returned. Otherwise p
@@ -1486,25 +1496,30 @@ func (p *ParserATNSimulator) addDFAEdge(dfa *DFA, from *DFAState, t int, to *DFA
// @return The state stored in the DFA. This will be either the existing
// state if {@code D} is already in the DFA, or {@code D} itself if the
// state was not already present.
-//
func (p *ParserATNSimulator) addDFAState(dfa *DFA, d *DFAState) *DFAState {
if d == ATNSimulatorError {
return d
}
- hash := d.hash()
- existing, ok := dfa.getState(hash)
- if ok {
+ existing, present := dfa.states.Get(d)
+ if present {
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Print("addDFAState " + d.String() + " exists")
+ }
return existing
}
- d.stateNumber = dfa.numStates()
+
+ // The state was not present, so update it with configs
+ //
+ d.stateNumber = dfa.states.Len()
if !d.configs.ReadOnly() {
d.configs.OptimizeConfigs(p.BaseATNSimulator)
d.configs.SetReadOnly(true)
}
- dfa.setState(hash, d)
- if ParserATNSimulatorDebug {
- fmt.Println("adding NewDFA state: " + d.String())
+ dfa.states.Put(d)
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("addDFAState new " + d.String())
}
+
return d
}
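
Among the simulator changes, GetTokenName now falls back from literal names to symbolic names before printing the bare token type. A hedged sketch of that intended lookup order, with hypothetical name tables (a generated parser supplies the real ones):

```go
package main

import (
	"fmt"
	"strconv"
)

// tokenName follows the lookup order sketched above: prefer a literal name,
// fall back to a symbolic name, and finally print the raw token type.
func tokenName(t int, literals, symbolics []string) string {
	const eof = -1
	if t == eof {
		return "EOF"
	}
	if t >= 0 && t < len(literals) && literals[t] != "" {
		return literals[t] + "<" + strconv.Itoa(t) + ">"
	}
	if t >= 0 && t < len(symbolics) && symbolics[t] != "" {
		return symbolics[t] + "<" + strconv.Itoa(t) + ">"
	}
	return strconv.Itoa(t)
}

func main() {
	literals := []string{"", "'+'", ""}
	symbolics := []string{"", "PLUS", "ID"}
	fmt.Println(tokenName(1, literals, symbolics)) // '+'<1>
	fmt.Println(tokenName(2, literals, symbolics)) // ID<2>
	fmt.Println(tokenName(7, literals, symbolics)) // 7
}
```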
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser_rule_context.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser_rule_context.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser_rule_context.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser_rule_context.go
index 49cd10c5f..1c8cee747 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/parser_rule_context.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/parser_rule_context.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -340,7 +340,7 @@ func (prc *BaseParserRuleContext) String(ruleNames []string, stop RuleContext) s
return s
}
-var RuleContextEmpty = NewBaseParserRuleContext(nil, -1)
+var ParserRuleContextEmpty = NewBaseParserRuleContext(nil, -1)
type InterpreterRuleContext interface {
ParserRuleContext
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/prediction_context.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/prediction_context.go
similarity index 81%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/prediction_context.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/prediction_context.go
index 9fdfd52b2..ba62af361 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/prediction_context.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/prediction_context.go
@@ -1,10 +1,12 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
package antlr
import (
+ "fmt"
+ "golang.org/x/exp/slices"
"strconv"
)
@@ -26,10 +28,10 @@ var (
)
type PredictionContext interface {
- hash() int
+ Hash() int
+ Equals(interface{}) bool
GetParent(int) PredictionContext
getReturnState(int) int
- equals(PredictionContext) bool
length() int
isEmpty() bool
hasEmptyPath() bool
@@ -53,7 +55,7 @@ func (b *BasePredictionContext) isEmpty() bool {
func calculateHash(parent PredictionContext, returnState int) int {
h := murmurInit(1)
- h = murmurUpdate(h, parent.hash())
+ h = murmurUpdate(h, parent.Hash())
h = murmurUpdate(h, returnState)
return murmurFinish(h, 2)
}
@@ -86,7 +88,6 @@ func NewPredictionContextCache() *PredictionContextCache {
// Add a context to the cache and return it. If the context already exists,
// return that one instead and do not add a Newcontext to the cache.
// Protect shared cache from unsafe thread access.
-//
func (p *PredictionContextCache) add(ctx PredictionContext) PredictionContext {
if ctx == BasePredictionContextEMPTY {
return BasePredictionContextEMPTY
@@ -160,28 +161,28 @@ func (b *BaseSingletonPredictionContext) hasEmptyPath() bool {
return b.returnState == BasePredictionContextEmptyReturnState
}
-func (b *BaseSingletonPredictionContext) equals(other PredictionContext) bool {
+func (b *BaseSingletonPredictionContext) Hash() int {
+ return b.cachedHash
+}
+
+func (b *BaseSingletonPredictionContext) Equals(other interface{}) bool {
if b == other {
return true
- } else if _, ok := other.(*BaseSingletonPredictionContext); !ok {
+ }
+ if _, ok := other.(*BaseSingletonPredictionContext); !ok {
return false
- } else if b.hash() != other.hash() {
- return false // can't be same if hash is different
}
otherP := other.(*BaseSingletonPredictionContext)
- if b.returnState != other.getReturnState(0) {
+ if b.returnState != otherP.getReturnState(0) {
return false
- } else if b.parentCtx == nil {
+ }
+ if b.parentCtx == nil {
return otherP.parentCtx == nil
}
- return b.parentCtx.equals(otherP.parentCtx)
-}
-
-func (b *BaseSingletonPredictionContext) hash() int {
- return b.cachedHash
+ return b.parentCtx.Equals(otherP.parentCtx)
}
func (b *BaseSingletonPredictionContext) String() string {
@@ -215,7 +216,7 @@ func NewEmptyPredictionContext() *EmptyPredictionContext {
p := new(EmptyPredictionContext)
p.BaseSingletonPredictionContext = NewBaseSingletonPredictionContext(nil, BasePredictionContextEmptyReturnState)
-
+ p.cachedHash = calculateEmptyHash()
return p
}
@@ -231,7 +232,11 @@ func (e *EmptyPredictionContext) getReturnState(index int) int {
return e.returnState
}
-func (e *EmptyPredictionContext) equals(other PredictionContext) bool {
+func (e *EmptyPredictionContext) Hash() int {
+ return e.cachedHash
+}
+
+func (e *EmptyPredictionContext) Equals(other interface{}) bool {
return e == other
}
@@ -254,7 +259,7 @@ func NewArrayPredictionContext(parents []PredictionContext, returnStates []int)
hash := murmurInit(1)
for _, parent := range parents {
- hash = murmurUpdate(hash, parent.hash())
+ hash = murmurUpdate(hash, parent.Hash())
}
for _, returnState := range returnStates {
@@ -298,18 +303,31 @@ func (a *ArrayPredictionContext) getReturnState(index int) int {
return a.returnStates[index]
}
-func (a *ArrayPredictionContext) equals(other PredictionContext) bool {
- if _, ok := other.(*ArrayPredictionContext); !ok {
+// Equals is the default comparison function for ArrayPredictionContext when no specialized
+// implementation is needed for a collection
+func (a *ArrayPredictionContext) Equals(o interface{}) bool {
+ if a == o {
+ return true
+ }
+ other, ok := o.(*ArrayPredictionContext)
+ if !ok {
return false
- } else if a.cachedHash != other.hash() {
+ }
+ if a.cachedHash != other.Hash() {
return false // can't be same if hash is different
- } else {
- otherP := other.(*ArrayPredictionContext)
- return &a.returnStates == &otherP.returnStates && &a.parents == &otherP.parents
}
+
+ // Must compare the actual array elements and not just the array address
+ //
+ return slices.Equal(a.returnStates, other.returnStates) &&
+ slices.EqualFunc(a.parents, other.parents, func(x, y PredictionContext) bool {
+ return x.Equals(y)
+ })
}
-func (a *ArrayPredictionContext) hash() int {
+// Hash is the default hash function for ArrayPredictionContext when no specialized
+// implementation is needed for a collection
+func (a *ArrayPredictionContext) Hash() int {
return a.BasePredictionContext.cachedHash
}
@@ -343,11 +361,11 @@ func (a *ArrayPredictionContext) String() string {
// /
func predictionContextFromRuleContext(a *ATN, outerContext RuleContext) PredictionContext {
if outerContext == nil {
- outerContext = RuleContextEmpty
+ outerContext = ParserRuleContextEmpty
}
// if we are in RuleContext of start rule, s, then BasePredictionContext
// is EMPTY. Nobody called us. (if we are empty, return empty)
- if outerContext.GetParent() == nil || outerContext == RuleContextEmpty {
+ if outerContext.GetParent() == nil || outerContext == ParserRuleContextEmpty {
return BasePredictionContextEMPTY
}
// If we have a parent, convert it to a BasePredictionContext graph
@@ -359,11 +377,20 @@ func predictionContextFromRuleContext(a *ATN, outerContext RuleContext) Predicti
}
func merge(a, b PredictionContext, rootIsWildcard bool, mergeCache *DoubleDict) PredictionContext {
- // share same graph if both same
- if a == b {
+
+ // Share same graph if both same
+ //
+ if a == b || a.Equals(b) {
return a
}
+ // In Java, EmptyPredictionContext inherits from SingletonPredictionContext, and so the test
+ // in java for SingletonPredictionContext will succeed and a new ArrayPredictionContext will be created
+ // from it.
+ // In go, EmptyPredictionContext does not equate to SingletonPredictionContext and so that conversion
+ // will fail. We need to test for both Empty and Singleton and create an ArrayPredictionContext from
+ // either of them.
+
ac, ok1 := a.(*BaseSingletonPredictionContext)
bc, ok2 := b.(*BaseSingletonPredictionContext)
@@ -380,17 +407,32 @@ func merge(a, b PredictionContext, rootIsWildcard bool, mergeCache *DoubleDict)
return b
}
}
- // convert singleton so both are arrays to normalize
- if _, ok := a.(*BaseSingletonPredictionContext); ok {
- a = NewArrayPredictionContext([]PredictionContext{a.GetParent(0)}, []int{a.getReturnState(0)})
+
+ // Convert Singleton or Empty so both are arrays to normalize - We should not use the existing parameters
+ // here.
+ //
+ // TODO: I think that maybe the Prediction Context structs should be redone as there is a chance we will see this mess again - maybe redo the logic here
+
+ var arp, arb *ArrayPredictionContext
+ var ok bool
+ if arp, ok = a.(*ArrayPredictionContext); ok {
+ } else if _, ok = a.(*BaseSingletonPredictionContext); ok {
+ arp = NewArrayPredictionContext([]PredictionContext{a.GetParent(0)}, []int{a.getReturnState(0)})
+ } else if _, ok = a.(*EmptyPredictionContext); ok {
+ arp = NewArrayPredictionContext([]PredictionContext{}, []int{})
}
- if _, ok := b.(*BaseSingletonPredictionContext); ok {
- b = NewArrayPredictionContext([]PredictionContext{b.GetParent(0)}, []int{b.getReturnState(0)})
+
+ if arb, ok = b.(*ArrayPredictionContext); ok {
+ } else if _, ok = b.(*BaseSingletonPredictionContext); ok {
+ arb = NewArrayPredictionContext([]PredictionContext{b.GetParent(0)}, []int{b.getReturnState(0)})
+ } else if _, ok = b.(*EmptyPredictionContext); ok {
+ arb = NewArrayPredictionContext([]PredictionContext{}, []int{})
}
- return mergeArrays(a.(*ArrayPredictionContext), b.(*ArrayPredictionContext), rootIsWildcard, mergeCache)
+
+ // Both arp and arb
+ return mergeArrays(arp, arb, rootIsWildcard, mergeCache)
}
-//
// Merge two {@link SingletonBasePredictionContext} instances.
//
// Stack tops equal, parents merge is same return left graph.
@@ -423,11 +465,11 @@ func merge(a, b PredictionContext, rootIsWildcard bool, mergeCache *DoubleDict)
// /
func mergeSingletons(a, b *BaseSingletonPredictionContext, rootIsWildcard bool, mergeCache *DoubleDict) PredictionContext {
if mergeCache != nil {
- previous := mergeCache.Get(a.hash(), b.hash())
+ previous := mergeCache.Get(a.Hash(), b.Hash())
if previous != nil {
return previous.(PredictionContext)
}
- previous = mergeCache.Get(b.hash(), a.hash())
+ previous = mergeCache.Get(b.Hash(), a.Hash())
if previous != nil {
return previous.(PredictionContext)
}
@@ -436,7 +478,7 @@ func mergeSingletons(a, b *BaseSingletonPredictionContext, rootIsWildcard bool,
rootMerge := mergeRoot(a, b, rootIsWildcard)
if rootMerge != nil {
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), rootMerge)
+ mergeCache.set(a.Hash(), b.Hash(), rootMerge)
}
return rootMerge
}
@@ -456,7 +498,7 @@ func mergeSingletons(a, b *BaseSingletonPredictionContext, rootIsWildcard bool,
// Newjoined parent so create Newsingleton pointing to it, a'
spc := SingletonBasePredictionContextCreate(parent, a.returnState)
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), spc)
+ mergeCache.set(a.Hash(), b.Hash(), spc)
}
return spc
}
@@ -478,7 +520,7 @@ func mergeSingletons(a, b *BaseSingletonPredictionContext, rootIsWildcard bool,
parents := []PredictionContext{singleParent, singleParent}
apc := NewArrayPredictionContext(parents, payloads)
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), apc)
+ mergeCache.set(a.Hash(), b.Hash(), apc)
}
return apc
}
@@ -494,12 +536,11 @@ func mergeSingletons(a, b *BaseSingletonPredictionContext, rootIsWildcard bool,
}
apc := NewArrayPredictionContext(parents, payloads)
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), apc)
+ mergeCache.set(a.Hash(), b.Hash(), apc)
}
return apc
}
-//
// Handle case where at least one of {@code a} or {@code b} is
// {@link //EMPTY}. In the following diagrams, the symbol {@code $} is used
// to represent {@link //EMPTY}.
@@ -561,7 +602,6 @@ func mergeRoot(a, b SingletonPredictionContext, rootIsWildcard bool) PredictionC
return nil
}
-//
// Merge two {@link ArrayBasePredictionContext} instances.
//
// Different tops, different parents.
@@ -583,12 +623,18 @@ func mergeRoot(a, b SingletonPredictionContext, rootIsWildcard bool) PredictionC
// /
func mergeArrays(a, b *ArrayPredictionContext, rootIsWildcard bool, mergeCache *DoubleDict) PredictionContext {
if mergeCache != nil {
- previous := mergeCache.Get(a.hash(), b.hash())
+ previous := mergeCache.Get(a.Hash(), b.Hash())
if previous != nil {
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("mergeArrays a=" + a.String() + ",b=" + b.String() + " -> previous")
+ }
return previous.(PredictionContext)
}
- previous = mergeCache.Get(b.hash(), a.hash())
+ previous = mergeCache.Get(b.Hash(), a.Hash())
if previous != nil {
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("mergeArrays a=" + a.String() + ",b=" + b.String() + " -> previous")
+ }
return previous.(PredictionContext)
}
}
@@ -608,7 +654,7 @@ func mergeArrays(a, b *ArrayPredictionContext, rootIsWildcard bool, mergeCache *
payload := a.returnStates[i]
// $+$ = $
bothDollars := payload == BasePredictionContextEmptyReturnState && aParent == nil && bParent == nil
- axAX := (aParent != nil && bParent != nil && aParent == bParent) // ax+ax
+ axAX := aParent != nil && bParent != nil && aParent == bParent // ax+ax
// ->
// ax
if bothDollars || axAX {
@@ -651,7 +697,7 @@ func mergeArrays(a, b *ArrayPredictionContext, rootIsWildcard bool, mergeCache *
if k == 1 { // for just one merged element, return singleton top
pc := SingletonBasePredictionContextCreate(mergedParents[0], mergedReturnStates[0])
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), pc)
+ mergeCache.set(a.Hash(), b.Hash(), pc)
}
return pc
}
@@ -663,27 +709,36 @@ func mergeArrays(a, b *ArrayPredictionContext, rootIsWildcard bool, mergeCache *
// if we created same array as a or b, return that instead
// TODO: track whether this is possible above during merge sort for speed
+ // TODO: In go, I do not think we can just do M == xx as M is a brand new allocation. This could be causing allocation problems
if M == a {
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), a)
+ mergeCache.set(a.Hash(), b.Hash(), a)
+ }
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("mergeArrays a=" + a.String() + ",b=" + b.String() + " -> a")
}
return a
}
if M == b {
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), b)
+ mergeCache.set(a.Hash(), b.Hash(), b)
+ }
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("mergeArrays a=" + a.String() + ",b=" + b.String() + " -> b")
}
return b
}
combineCommonParents(mergedParents)
if mergeCache != nil {
- mergeCache.set(a.hash(), b.hash(), M)
+ mergeCache.set(a.Hash(), b.Hash(), M)
+ }
+ if ParserATNSimulatorTraceATNSim {
+ fmt.Println("mergeArrays a=" + a.String() + ",b=" + b.String() + " -> " + M.String())
}
return M
}
-//
// Make pass over all M {@code parents} merge any {@code equals()}
// ones.
// /
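The added comment in merge() above explains why the Go port has to normalize both EmptyPredictionContext and BaseSingletonPredictionContext into an ArrayPredictionContext before handing off to mergeArrays. The standalone sketch below mirrors only that normalization step with toy types (ctx, emptyCtx, singletonCtx, arrayCtx are illustrative names, not the runtime's), so the control flow is easier to see in isolation.

```go
package main

import "fmt"

// Toy stand-ins for the runtime's prediction-context types; these names are
// illustrative only and do not exist in the ANTLR package.
type ctx interface{ isCtx() }

type emptyCtx struct{}
type singletonCtx struct {
	parent      ctx
	returnState int
}
type arrayCtx struct {
	parents      []ctx
	returnStates []int
}

func (emptyCtx) isCtx()     {}
func (singletonCtx) isCtx() {}
func (arrayCtx) isCtx()     {}

// toArray mirrors the shape of the normalization in merge(): an array is used
// as-is, a singleton becomes a one-element array, and empty becomes a
// zero-element array.
func toArray(c ctx) arrayCtx {
	switch v := c.(type) {
	case arrayCtx:
		return v
	case singletonCtx:
		return arrayCtx{parents: []ctx{v.parent}, returnStates: []int{v.returnState}}
	case emptyCtx:
		return arrayCtx{parents: []ctx{}, returnStates: []int{}}
	}
	return arrayCtx{}
}

func main() {
	s := singletonCtx{parent: emptyCtx{}, returnState: 7}
	fmt.Printf("singleton -> %+v\n", toArray(s))
	fmt.Printf("empty     -> %+v\n", toArray(emptyCtx{}))
}
```

The real merge path additionally consults the merge cache symmetrically, checking both (a.Hash(), b.Hash()) and (b.Hash(), a.Hash()) before doing any work, as the mergeSingletons and mergeArrays hunks above show.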
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/prediction_mode.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/prediction_mode.go
similarity index 95%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/prediction_mode.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/prediction_mode.go
index 15718f912..7b9b72fab 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/prediction_mode.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/prediction_mode.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -70,7 +70,6 @@ const (
PredictionModeLLExactAmbigDetection = 2
)
-//
// Computes the SLL prediction termination condition.
//
//
@@ -108,9 +107,9 @@ const (
// The single-alt-state thing lets prediction continue upon rules like
// (otherwise, it would admit defeat too soon):
//
-// {@code [12|1|[], 6|2|[], 12|2|[]]. s : (ID | ID ID?) '' }
+// {@code [12|1|[], 6|2|[], 12|2|[]]. s : (ID | ID ID?) ';' }
//
-// When the ATN simulation reaches the state before {@code ''}, it has a
+// When the ATN simulation reaches the state before {@code ';'}, it has a
// DFA state that looks like: {@code [12|1|[], 6|2|[], 12|2|[]]}. Naturally
// {@code 12|1|[]} and {@code 12|2|[]} conflict, but we cannot stop
// processing this node because alternative to has another way to continue,
@@ -152,16 +151,15 @@ const (
//
// Before testing these configurations against others, we have to merge
// {@code x} and {@code x'} (without modifying the existing configurations).
-// For example, we test {@code (x+x')==x''} when looking for conflicts in
+// For example, we test {@code (x+x')==x''} when looking for conflicts in
// the following configurations.
// If the configuration set has predicates (as indicated by
// {@link ATNConfigSet//hasSemanticContext}), this algorithm makes a copy of
// the configurations to strip out all of the predicates so that a standard
// {@link ATNConfigSet} will merge everything ignoring predicates.
-//
func PredictionModehasSLLConflictTerminatingPrediction(mode int, configs ATNConfigSet) bool {
// Configs in rule stop states indicate reaching the end of the decision
// rule (local context) or end of start rule (full context). If all
@@ -229,7 +227,6 @@ func PredictionModeallConfigsInRuleStopStates(configs ATNConfigSet) bool {
return true
}
-//
// Full LL prediction termination.
//
// Can we stop looking ahead during ATN simulation or is there some
@@ -334,7 +331,7 @@ func PredictionModeallConfigsInRuleStopStates(configs ATNConfigSet) bool {
//
//
//
//
@@ -369,31 +366,26 @@ func PredictionModeallConfigsInRuleStopStates(configs ATNConfigSet) bool {
// two or one and three so we keep going. We can only stop prediction when
// we need exact ambiguity detection when the sets look like
// {@code A={{1,2}}} or {@code {{1,2},{1,2}}}, etc...
-//
func PredictionModeresolvesToJustOneViableAlt(altsets []*BitSet) int {
return PredictionModegetSingleViableAlt(altsets)
}
-//
// Determines if every alternative subset in {@code altsets} contains more
// than one alternative.
//
// @param altsets a collection of alternative subsets
// @return {@code true} if every {@link BitSet} in {@code altsets} has
// {@link BitSet//cardinality cardinality} > 1, otherwise {@code false}
-//
func PredictionModeallSubsetsConflict(altsets []*BitSet) bool {
return !PredictionModehasNonConflictingAltSet(altsets)
}
-//
// Determines if any single alternative subset in {@code altsets} contains
// exactly one alternative.
//
// @param altsets a collection of alternative subsets
// @return {@code true} if {@code altsets} contains a {@link BitSet} with
// {@link BitSet//cardinality cardinality} 1, otherwise {@code false}
-//
func PredictionModehasNonConflictingAltSet(altsets []*BitSet) bool {
for i := 0; i < len(altsets); i++ {
alts := altsets[i]
@@ -404,14 +396,12 @@ func PredictionModehasNonConflictingAltSet(altsets []*BitSet) bool {
return false
}
-//
// Determines if any single alternative subset in {@code altsets} contains
// more than one alternative.
//
// @param altsets a collection of alternative subsets
// @return {@code true} if {@code altsets} contains a {@link BitSet} with
// {@link BitSet//cardinality cardinality} > 1, otherwise {@code false}
-//
func PredictionModehasConflictingAltSet(altsets []*BitSet) bool {
for i := 0; i < len(altsets); i++ {
alts := altsets[i]
@@ -422,13 +412,11 @@ func PredictionModehasConflictingAltSet(altsets []*BitSet) bool {
return false
}
-//
// Determines if every alternative subset in {@code altsets} is equivalent.
//
// @param altsets a collection of alternative subsets
// @return {@code true} if every member of {@code altsets} is equal to the
// others, otherwise {@code false}
-//
func PredictionModeallSubsetsEqual(altsets []*BitSet) bool {
var first *BitSet
@@ -444,13 +432,11 @@ func PredictionModeallSubsetsEqual(altsets []*BitSet) bool {
return true
}
-//
// Returns the unique alternative predicted by all alternative subsets in
// {@code altsets}. If no such alternative exists, this method returns
// {@link ATN//INVALID_ALT_NUMBER}.
//
// @param altsets a collection of alternative subsets
-//
func PredictionModegetUniqueAlt(altsets []*BitSet) int {
all := PredictionModeGetAlts(altsets)
if all.length() == 1 {
@@ -466,7 +452,6 @@ func PredictionModegetUniqueAlt(altsets []*BitSet) int {
//
// @param altsets a collection of alternative subsets
// @return the set of represented alternatives in {@code altsets}
-//
func PredictionModeGetAlts(altsets []*BitSet) *BitSet {
all := NewBitSet()
for _, alts := range altsets {
@@ -475,44 +460,35 @@ func PredictionModeGetAlts(altsets []*BitSet) *BitSet {
return all
}
-//
-// This func gets the conflicting alt subsets from a configuration set.
+// PredictionModegetConflictingAltSubsets gets the conflicting alt subsets from a configuration set.
// For each configuration {@code c} in {@code configs}:
//
//
// map[c] U= c.{@link ATNConfig//alt alt} // map hash/equals uses s and x, not
// alt and not pred
//
-//
func PredictionModegetConflictingAltSubsets(configs ATNConfigSet) []*BitSet {
- configToAlts := make(map[int]*BitSet)
+ configToAlts := NewJMap[ATNConfig, *BitSet, *ATNAltConfigComparator[ATNConfig]](atnAltCfgEqInst)
for _, c := range configs.GetItems() {
- key := 31 * c.GetState().GetStateNumber() + c.GetContext().hash()
- alts, ok := configToAlts[key]
+ alts, ok := configToAlts.Get(c)
if !ok {
alts = NewBitSet()
- configToAlts[key] = alts
+ configToAlts.Put(c, alts)
}
alts.add(c.GetAlt())
}
- values := make([]*BitSet, 0, 10)
- for _, v := range configToAlts {
- values = append(values, v)
- }
- return values
+ return configToAlts.Values()
}
-//
-// Get a map from state to alt subset from a configuration set. For each
+// PredictionModeGetStateToAltMap gets a map from state to alt subset from a configuration set. For each
// configuration {@code c} in {@code configs}:
//
//
-//
func PredictionModeGetStateToAltMap(configs ATNConfigSet) *AltDict {
m := NewAltDict()
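The getConflictingAltSubsets change above swaps the hand-rolled int key (31*state + context hash) for a JMap keyed by the configuration itself. The sketch below reproduces the grouping and the all-subsets-conflict test over toy data; config and its string key are stand-ins for ATNConfig and its (state, context) identity, and none of these names exist in the runtime.

```go
package main

import "fmt"

// Toy configuration: the runtime keys on (ATN state, prediction context);
// here a string key stands in for that pair.
type config struct {
	key string
	alt int
}

// conflictingAltSubsets groups alternatives that share a key, mirroring what
// PredictionModegetConflictingAltSubsets does with its JMap.
func conflictingAltSubsets(configs []config) []map[int]bool {
	byKey := map[string]map[int]bool{}
	for _, c := range configs {
		if byKey[c.key] == nil {
			byKey[c.key] = map[int]bool{}
		}
		byKey[c.key][c.alt] = true
	}
	out := make([]map[int]bool, 0, len(byKey))
	for _, alts := range byKey {
		out = append(out, alts)
	}
	return out
}

// allSubsetsConflict is true when every subset has more than one alternative,
// the termination condition the comments above describe.
func allSubsetsConflict(subsets []map[int]bool) bool {
	for _, s := range subsets {
		if len(s) < 2 {
			return false
		}
	}
	return true
}

func main() {
	cfgs := []config{{"s12|[]", 1}, {"s12|[]", 2}, {"s6|[]", 2}}
	subsets := conflictingAltSubsets(cfgs)
	fmt.Println("subsets:", subsets, "all conflict:", allSubsetsConflict(subsets))
}
```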
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/recognizer.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/recognizer.go
similarity index 92%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/recognizer.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/recognizer.go
index 93efcf355..bfe542d09 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/recognizer.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/recognizer.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -49,7 +49,7 @@ var tokenTypeMapCache = make(map[string]int)
var ruleIndexMapCache = make(map[string]int)
func (b *BaseRecognizer) checkVersion(toolVersion string) {
- runtimeVersion := "4.10.1"
+ runtimeVersion := "4.12.0"
if runtimeVersion != toolVersion {
fmt.Println("ANTLR runtime and generated code versions disagree: " + runtimeVersion + "!=" + toolVersion)
}
@@ -108,7 +108,6 @@ func (b *BaseRecognizer) SetState(v int) {
// Get a map from rule names to rule indexes.
//
// Used for XPath and tree pattern compilation.
-//
func (b *BaseRecognizer) GetRuleIndexMap() map[string]int {
panic("Method not defined!")
@@ -171,18 +170,18 @@ func (b *BaseRecognizer) GetErrorHeader(e RecognitionException) string {
}
// How should a token be displayed in an error message? The default
-// is to display just the text, but during development you might
-// want to have a lot of information spit out. Override in that case
-// to use t.String() (which, for CommonToken, dumps everything about
-// the token). This is better than forcing you to override a method in
-// your token objects because you don't have to go modify your lexer
-// so that it creates a NewJava type.
+//
+// is to display just the text, but during development you might
+// want to have a lot of information spit out. Override in that case
+// to use t.String() (which, for CommonToken, dumps everything about
+// the token). This is better than forcing you to override a method in
+// your token objects because you don't have to go modify your lexer
+// so that it creates a NewJava type.
//
// @deprecated This method is not called by the ANTLR 4 Runtime. Specific
// implementations of {@link ANTLRErrorStrategy} may provide a similar
// feature when necessary. For example, see
// {@link DefaultErrorStrategy//GetTokenErrorDisplay}.
-//
func (b *BaseRecognizer) GetTokenErrorDisplay(t Token) string {
if t == nil {
return ""
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/rule_context.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/rule_context.go
similarity index 97%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/rule_context.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/rule_context.go
index 600cf8c06..210699ba2 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/rule_context.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/rule_context.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/semantic_context.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/semantic_context.go
similarity index 85%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/semantic_context.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/semantic_context.go
index 9ada43077..a702e99de 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/semantic_context.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/semantic_context.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -18,12 +18,12 @@ import (
//
type SemanticContext interface {
- comparable
+ Equals(other Collectable[SemanticContext]) bool
+ Hash() int
evaluate(parser Recognizer, outerContext RuleContext) bool
evalPrecedence(parser Recognizer, outerContext RuleContext) SemanticContext
- hash() int
String() string
}
@@ -78,7 +78,7 @@ func NewPredicate(ruleIndex, predIndex int, isCtxDependent bool) *Predicate {
//The default {@link SemanticContext}, which is semantically equivalent to
//a predicate of the form {@code {true}?}.
-var SemanticContextNone SemanticContext = NewPredicate(-1, -1, false)
+var SemanticContextNone = NewPredicate(-1, -1, false)
func (p *Predicate) evalPrecedence(parser Recognizer, outerContext RuleContext) SemanticContext {
return p
@@ -95,7 +95,7 @@ func (p *Predicate) evaluate(parser Recognizer, outerContext RuleContext) bool {
return parser.Sempred(localctx, p.ruleIndex, p.predIndex)
}
-func (p *Predicate) equals(other interface{}) bool {
+func (p *Predicate) Equals(other Collectable[SemanticContext]) bool {
if p == other {
return true
} else if _, ok := other.(*Predicate); !ok {
@@ -107,7 +107,7 @@ func (p *Predicate) equals(other interface{}) bool {
}
}
-func (p *Predicate) hash() int {
+func (p *Predicate) Hash() int {
h := murmurInit(0)
h = murmurUpdate(h, p.ruleIndex)
h = murmurUpdate(h, p.predIndex)
@@ -151,17 +151,22 @@ func (p *PrecedencePredicate) compareTo(other *PrecedencePredicate) int {
return p.precedence - other.precedence
}
-func (p *PrecedencePredicate) equals(other interface{}) bool {
- if p == other {
- return true
- } else if _, ok := other.(*PrecedencePredicate); !ok {
+func (p *PrecedencePredicate) Equals(other Collectable[SemanticContext]) bool {
+
+ var op *PrecedencePredicate
+ var ok bool
+ if op, ok = other.(*PrecedencePredicate); !ok {
return false
- } else {
- return p.precedence == other.(*PrecedencePredicate).precedence
}
+
+ if p == op {
+ return true
+ }
+
+ return p.precedence == other.(*PrecedencePredicate).precedence
}
-func (p *PrecedencePredicate) hash() int {
+func (p *PrecedencePredicate) Hash() int {
h := uint32(1)
h = 31*h + uint32(p.precedence)
return int(h)
@@ -171,10 +176,10 @@ func (p *PrecedencePredicate) String() string {
return "{" + strconv.Itoa(p.precedence) + ">=prec}?"
}
-func PrecedencePredicatefilterPrecedencePredicates(set Set) []*PrecedencePredicate {
+func PrecedencePredicatefilterPrecedencePredicates(set *JStore[SemanticContext, Comparator[SemanticContext]]) []*PrecedencePredicate {
result := make([]*PrecedencePredicate, 0)
- set.Each(func(v interface{}) bool {
+ set.Each(func(v SemanticContext) bool {
if c2, ok := v.(*PrecedencePredicate); ok {
result = append(result, c2)
}
@@ -193,21 +198,21 @@ type AND struct {
func NewAND(a, b SemanticContext) *AND {
- operands := newArray2DHashSet(nil, nil)
+ operands := NewJStore[SemanticContext, Comparator[SemanticContext]](semctxEqInst)
if aa, ok := a.(*AND); ok {
for _, o := range aa.opnds {
- operands.Add(o)
+ operands.Put(o)
}
} else {
- operands.Add(a)
+ operands.Put(a)
}
if ba, ok := b.(*AND); ok {
for _, o := range ba.opnds {
- operands.Add(o)
+ operands.Put(o)
}
} else {
- operands.Add(b)
+ operands.Put(b)
}
precedencePredicates := PrecedencePredicatefilterPrecedencePredicates(operands)
if len(precedencePredicates) > 0 {
@@ -220,7 +225,7 @@ func NewAND(a, b SemanticContext) *AND {
}
}
- operands.Add(reduced)
+ operands.Put(reduced)
}
vs := operands.Values()
@@ -235,14 +240,15 @@ func NewAND(a, b SemanticContext) *AND {
return and
}
-func (a *AND) equals(other interface{}) bool {
+func (a *AND) Equals(other Collectable[SemanticContext]) bool {
if a == other {
return true
- } else if _, ok := other.(*AND); !ok {
+ }
+ if _, ok := other.(*AND); !ok {
return false
} else {
for i, v := range other.(*AND).opnds {
- if !a.opnds[i].equals(v) {
+ if !a.opnds[i].Equals(v) {
return false
}
}
@@ -250,13 +256,11 @@ func (a *AND) equals(other interface{}) bool {
}
}
-//
// {@inheritDoc}
//
//
// The evaluation of predicates by a context is short-circuiting, but
// unordered.
-//
func (a *AND) evaluate(parser Recognizer, outerContext RuleContext) bool {
for i := 0; i < len(a.opnds); i++ {
if !a.opnds[i].evaluate(parser, outerContext) {
@@ -304,18 +308,18 @@ func (a *AND) evalPrecedence(parser Recognizer, outerContext RuleContext) Semant
return result
}
-func (a *AND) hash() int {
+func (a *AND) Hash() int {
h := murmurInit(37) // Init with a value different from OR
for _, op := range a.opnds {
- h = murmurUpdate(h, op.hash())
+ h = murmurUpdate(h, op.Hash())
}
return murmurFinish(h, len(a.opnds))
}
-func (a *OR) hash() int {
+func (a *OR) Hash() int {
h := murmurInit(41) // Init with a value different from AND
for _, op := range a.opnds {
- h = murmurUpdate(h, op.hash())
+ h = murmurUpdate(h, op.Hash())
}
return murmurFinish(h, len(a.opnds))
}
@@ -345,21 +349,21 @@ type OR struct {
func NewOR(a, b SemanticContext) *OR {
- operands := newArray2DHashSet(nil, nil)
+ operands := NewJStore[SemanticContext, Comparator[SemanticContext]](semctxEqInst)
if aa, ok := a.(*OR); ok {
for _, o := range aa.opnds {
- operands.Add(o)
+ operands.Put(o)
}
} else {
- operands.Add(a)
+ operands.Put(a)
}
if ba, ok := b.(*OR); ok {
for _, o := range ba.opnds {
- operands.Add(o)
+ operands.Put(o)
}
} else {
- operands.Add(b)
+ operands.Put(b)
}
precedencePredicates := PrecedencePredicatefilterPrecedencePredicates(operands)
if len(precedencePredicates) > 0 {
@@ -372,7 +376,7 @@ func NewOR(a, b SemanticContext) *OR {
}
}
- operands.Add(reduced)
+ operands.Put(reduced)
}
vs := operands.Values()
@@ -388,14 +392,14 @@ func NewOR(a, b SemanticContext) *OR {
return o
}
-func (o *OR) equals(other interface{}) bool {
+func (o *OR) Equals(other Collectable[SemanticContext]) bool {
if o == other {
return true
} else if _, ok := other.(*OR); !ok {
return false
} else {
for i, v := range other.(*OR).opnds {
- if !o.opnds[i].equals(v) {
+ if !o.opnds[i].Equals(v) {
return false
}
}
@@ -406,7 +410,6 @@ func (o *OR) equals(other interface{}) bool {
//
// The evaluation of predicates by o context is short-circuiting, but
// unordered.
-//
func (o *OR) evaluate(parser Recognizer, outerContext RuleContext) bool {
for i := 0; i < len(o.opnds); i++ {
if o.opnds[i].evaluate(parser, outerContext) {
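SemanticContext now exposes exported Hash() and Equals(Collectable[SemanticContext]) so predicates can be stored in the generic JStore. A minimal sketch of a type satisfying that style of contract follows; the Collectable interface here is a local stand-in for the runtime's, and the simple 31-based hash replaces the murmur functions the real Predicate uses.

```go
package main

import "fmt"

// Local stand-in for the generic collectable contract the runtime now relies
// on; the real interface lives in the ANTLR package.
type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}

// pred is a toy predicate keyed by rule and predicate index, mirroring the
// shape of Predicate.Hash/Equals in the diff above.
type pred struct {
	ruleIndex, predIndex int
	isCtxDependent       bool
}

func (p *pred) Hash() int {
	h := 7
	h = 31*h + p.ruleIndex
	h = 31*h + p.predIndex
	if p.isCtxDependent {
		h = 31*h + 1
	}
	return h
}

func (p *pred) Equals(other Collectable[*pred]) bool {
	o, ok := other.(*pred)
	if !ok {
		return false
	}
	return p.ruleIndex == o.ruleIndex &&
		p.predIndex == o.predIndex &&
		p.isCtxDependent == o.isCtxDependent
}

func main() {
	a := &pred{ruleIndex: 3, predIndex: 1}
	b := &pred{ruleIndex: 3, predIndex: 1}
	fmt.Println(a.Equals(b), a.Hash() == b.Hash())
}
```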
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token.go
new file mode 100644
index 000000000..f73b06bc6
--- /dev/null
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token.go
@@ -0,0 +1,209 @@
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
+// Use of this file is governed by the BSD 3-clause license that
+// can be found in the LICENSE.txt file in the project root.
+
+package antlr
+
+import (
+ "strconv"
+ "strings"
+)
+
+type TokenSourceCharStreamPair struct {
+ tokenSource TokenSource
+ charStream CharStream
+}
+
+// A token has properties: text, type, line, character position in the line
+// (so we can ignore tabs), token channel, index, and source from which
+// we obtained this token.
+
+type Token interface {
+ GetSource() *TokenSourceCharStreamPair
+ GetTokenType() int
+ GetChannel() int
+ GetStart() int
+ GetStop() int
+ GetLine() int
+ GetColumn() int
+
+ GetText() string
+ SetText(s string)
+
+ GetTokenIndex() int
+ SetTokenIndex(v int)
+
+ GetTokenSource() TokenSource
+ GetInputStream() CharStream
+}
+
+type BaseToken struct {
+ source *TokenSourceCharStreamPair
+ tokenType int // token type of the token
+ channel int // The parser ignores everything not on DEFAULT_CHANNEL
+ start int // optional return -1 if not implemented.
+ stop int // optional return -1 if not implemented.
+ tokenIndex int // from 0..n-1 of the token object in the input stream
+ line int // line=1..n of the 1st character
+ column int // beginning of the line at which it occurs, 0..n-1
+ text string // text of the token.
+ readOnly bool
+}
+
+const (
+ TokenInvalidType = 0
+
+ // During lookahead operations, this "token" signifies we hit rule end ATN state
+ // and did not follow it despite needing to.
+ TokenEpsilon = -2
+
+ TokenMinUserTokenType = 1
+
+ TokenEOF = -1
+
+ // All tokens go to the parser (unless Skip() is called in that rule)
+ // on a particular "channel". The parser tunes to a particular channel
+ // so that whitespace etc... can go to the parser on a "hidden" channel.
+
+ TokenDefaultChannel = 0
+
+ // Anything on different channel than DEFAULT_CHANNEL is not parsed
+ // by parser.
+
+ TokenHiddenChannel = 1
+)
+
+func (b *BaseToken) GetChannel() int {
+ return b.channel
+}
+
+func (b *BaseToken) GetStart() int {
+ return b.start
+}
+
+func (b *BaseToken) GetStop() int {
+ return b.stop
+}
+
+func (b *BaseToken) GetLine() int {
+ return b.line
+}
+
+func (b *BaseToken) GetColumn() int {
+ return b.column
+}
+
+func (b *BaseToken) GetTokenType() int {
+ return b.tokenType
+}
+
+func (b *BaseToken) GetSource() *TokenSourceCharStreamPair {
+ return b.source
+}
+
+func (b *BaseToken) GetTokenIndex() int {
+ return b.tokenIndex
+}
+
+func (b *BaseToken) SetTokenIndex(v int) {
+ b.tokenIndex = v
+}
+
+func (b *BaseToken) GetTokenSource() TokenSource {
+ return b.source.tokenSource
+}
+
+func (b *BaseToken) GetInputStream() CharStream {
+ return b.source.charStream
+}
+
+type CommonToken struct {
+ *BaseToken
+}
+
+func NewCommonToken(source *TokenSourceCharStreamPair, tokenType, channel, start, stop int) *CommonToken {
+
+ t := new(CommonToken)
+
+ t.BaseToken = new(BaseToken)
+
+ t.source = source
+ t.tokenType = tokenType
+ t.channel = channel
+ t.start = start
+ t.stop = stop
+ t.tokenIndex = -1
+ if t.source.tokenSource != nil {
+ t.line = source.tokenSource.GetLine()
+ t.column = source.tokenSource.GetCharPositionInLine()
+ } else {
+ t.column = -1
+ }
+ return t
+}
+
+// An empty {@link Pair} which is used as the default value of
+// {@link //source} for tokens that do not have a source.
+
+//CommonToken.EMPTY_SOURCE = [ nil, nil ]
+
+// Constructs a New{@link CommonToken} as a copy of another {@link Token}.
+//
+//
+// If {@code oldToken} is also a {@link CommonToken} instance, the newly
+// constructed token will share a reference to the {@link //text} field and
+// the {@link Pair} stored in {@link //source}. Otherwise, {@link //text} will
+// be assigned the result of calling {@link //GetText}, and {@link //source}
+// will be constructed from the result of {@link Token//GetTokenSource} and
+// {@link Token//GetInputStream}.
+//
+// @param oldToken The token to copy.
+func (c *CommonToken) clone() *CommonToken {
+ t := NewCommonToken(c.source, c.tokenType, c.channel, c.start, c.stop)
+ t.tokenIndex = c.GetTokenIndex()
+ t.line = c.GetLine()
+ t.column = c.GetColumn()
+ t.text = c.GetText()
+ return t
+}
+
+func (c *CommonToken) GetText() string {
+ if c.text != "" {
+ return c.text
+ }
+ input := c.GetInputStream()
+ if input == nil {
+ return ""
+ }
+ n := input.Size()
+ if c.start < n && c.stop < n {
+ return input.GetTextFromInterval(NewInterval(c.start, c.stop))
+ }
+ return ""
+}
+
+func (c *CommonToken) SetText(text string) {
+ c.text = text
+}
+
+func (c *CommonToken) String() string {
+ txt := c.GetText()
+ if txt != "" {
+ txt = strings.Replace(txt, "\n", "\\n", -1)
+ txt = strings.Replace(txt, "\r", "\\r", -1)
+ txt = strings.Replace(txt, "\t", "\\t", -1)
+ } else {
+ txt = ""
+ }
+
+ var ch string
+ if c.channel > 0 {
+ ch = ",channel=" + strconv.Itoa(c.channel)
+ } else {
+ ch = ""
+ }
+
+ return "[@" + strconv.Itoa(c.tokenIndex) + "," + strconv.Itoa(c.start) + ":" + strconv.Itoa(c.stop) + "='" +
+ txt + "',<" + strconv.Itoa(c.tokenType) + ">" +
+ ch + "," + strconv.Itoa(c.line) + ":" + strconv.Itoa(c.column) + "]"
+}
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token_source.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token_source.go
similarity index 85%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token_source.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token_source.go
index e023978fe..a3f36eaa6 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token_source.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token_source.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token_stream.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token_stream.go
similarity index 87%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token_stream.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token_stream.go
index df92c8147..1527d43f6 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/token_stream.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/token_stream.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/tokenstream_rewriter.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/tokenstream_rewriter.go
new file mode 100644
index 000000000..b3e38af34
--- /dev/null
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/tokenstream_rewriter.go
@@ -0,0 +1,659 @@
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
+// Use of this file is governed by the BSD 3-clause license that
+// can be found in the LICENSE.txt file in the project root.
+
+package antlr
+
+import (
+ "bytes"
+ "fmt"
+)
+
+//
+// Useful for rewriting out a buffered input token stream after doing some
+// augmentation or other manipulations on it.
+
+//
+// You can insert stuff, replace, and delete chunks. Note that the operations
+// are done lazily--only if you convert the buffer to a {@link String} with
+// {@link TokenStream#getText()}. This is very efficient because you are not
+// moving data around all the time. As the buffer of tokens is converted to
+// strings, the {@link #getText()} method(s) scan the input token stream and
+// check to see if there is an operation at the current index. If so, the
+// operation is done and then normal {@link String} rendering continues on the
+// buffer. This is like having multiple Turing machine instruction streams
+// (programs) operating on a single input tape. :)
+//
+
+// This rewriter makes no modifications to the token stream. It does not ask the
+// stream to fill itself up nor does it advance the input cursor. The token
+// stream {@link TokenStream#index()} will return the same value before and
+// after any {@link #getText()} call.
+
+//
+// The rewriter only works on tokens that you have in the buffer and ignores the
+// current input cursor. If you are buffering tokens on-demand, calling
+// {@link #getText()} halfway through the input will only do rewrites for those
+// tokens in the first half of the file.
+
+//
+// Since the operations are done lazily at {@link #getText}-time, operations do
+// not screw up the token index values. That is, an insert operation at token
+// index {@code i} does not change the index values for tokens
+// {@code i}+1..n-1.
+
+//
+// Because operations never actually alter the buffer, you may always get the
+// original token stream back without undoing anything. Since the instructions
+// are queued up, you can easily simulate transactions and roll back any changes
+// if there is an error just by removing instructions. For example,
+
+//
+// CharStream input = new ANTLRFileStream("input");
+// TLexer lex = new TLexer(input);
+// CommonTokenStream tokens = new CommonTokenStream(lex);
+// T parser = new T(tokens);
+// TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
+// parser.startRule();
+//
+
+//
+// Then in the rules, you can execute (assuming rewriter is visible):
+
+//
+// Token t,u;
+// ...
+// rewriter.insertAfter(t, "text to put after t");}
+// rewriter.insertAfter(u, "text after u");}
+// System.out.println(rewriter.getText());
+//
+
+//
+// You can also have multiple "instruction streams" and get multiple rewrites
+// from a single pass over the input. Just name the instruction streams and use
+// that name again when printing the buffer. This could be useful for generating
+// a C file and also its header file--all from the same buffer:
+
+//
+// rewriter.insertAfter("pass1", t, "text to put after t");}
+// rewriter.insertAfter("pass2", u, "text after u");}
+// System.out.println(rewriter.getText("pass1"));
+// System.out.println(rewriter.getText("pass2"));
+//
+
+//
+// If you don't use named rewrite streams, a "default" stream is used as the
+// first example shows.
+
+const (
+ Default_Program_Name = "default"
+ Program_Init_Size = 100
+ Min_Token_Index = 0
+)
+
+// Define the rewrite operation hierarchy
+
+type RewriteOperation interface {
+ // Execute the rewrite operation by possibly adding to the buffer.
+ // Return the index of the next token to operate on.
+ Execute(buffer *bytes.Buffer) int
+ String() string
+ GetInstructionIndex() int
+ GetIndex() int
+ GetText() string
+ GetOpName() string
+ GetTokens() TokenStream
+ SetInstructionIndex(val int)
+ SetIndex(int)
+ SetText(string)
+ SetOpName(string)
+ SetTokens(TokenStream)
+}
+
+type BaseRewriteOperation struct {
+ //Current index of rewrites list
+ instruction_index int
+ //Token buffer index
+ index int
+ //Substitution text
+ text string
+ //Actual operation name
+ op_name string
+ //Pointer to token steam
+ tokens TokenStream
+}
+
+func (op *BaseRewriteOperation) GetInstructionIndex() int {
+ return op.instruction_index
+}
+
+func (op *BaseRewriteOperation) GetIndex() int {
+ return op.index
+}
+
+func (op *BaseRewriteOperation) GetText() string {
+ return op.text
+}
+
+func (op *BaseRewriteOperation) GetOpName() string {
+ return op.op_name
+}
+
+func (op *BaseRewriteOperation) GetTokens() TokenStream {
+ return op.tokens
+}
+
+func (op *BaseRewriteOperation) SetInstructionIndex(val int) {
+ op.instruction_index = val
+}
+
+func (op *BaseRewriteOperation) SetIndex(val int) {
+ op.index = val
+}
+
+func (op *BaseRewriteOperation) SetText(val string) {
+ op.text = val
+}
+
+func (op *BaseRewriteOperation) SetOpName(val string) {
+ op.op_name = val
+}
+
+func (op *BaseRewriteOperation) SetTokens(val TokenStream) {
+ op.tokens = val
+}
+
+func (op *BaseRewriteOperation) Execute(buffer *bytes.Buffer) int {
+ return op.index
+}
+
+func (op *BaseRewriteOperation) String() string {
+ return fmt.Sprintf("<%s@%d:\"%s\">",
+ op.op_name,
+ op.tokens.Get(op.GetIndex()),
+ op.text,
+ )
+
+}
+
+type InsertBeforeOp struct {
+ BaseRewriteOperation
+}
+
+func NewInsertBeforeOp(index int, text string, stream TokenStream) *InsertBeforeOp {
+ return &InsertBeforeOp{BaseRewriteOperation: BaseRewriteOperation{
+ index: index,
+ text: text,
+ op_name: "InsertBeforeOp",
+ tokens: stream,
+ }}
+}
+
+func (op *InsertBeforeOp) Execute(buffer *bytes.Buffer) int {
+ buffer.WriteString(op.text)
+ if op.tokens.Get(op.index).GetTokenType() != TokenEOF {
+ buffer.WriteString(op.tokens.Get(op.index).GetText())
+ }
+ return op.index + 1
+}
+
+func (op *InsertBeforeOp) String() string {
+ return op.BaseRewriteOperation.String()
+}
+
+// Distinguish between insert after/before to do the "insert afters"
+// first and then the "insert befores" at same index. Implementation
+// of "insert after" is "insert before index+1".
+
+type InsertAfterOp struct {
+ BaseRewriteOperation
+}
+
+func NewInsertAfterOp(index int, text string, stream TokenStream) *InsertAfterOp {
+ return &InsertAfterOp{BaseRewriteOperation: BaseRewriteOperation{
+ index: index + 1,
+ text: text,
+ tokens: stream,
+ }}
+}
+
+func (op *InsertAfterOp) Execute(buffer *bytes.Buffer) int {
+ buffer.WriteString(op.text)
+ if op.tokens.Get(op.index).GetTokenType() != TokenEOF {
+ buffer.WriteString(op.tokens.Get(op.index).GetText())
+ }
+ return op.index + 1
+}
+
+func (op *InsertAfterOp) String() string {
+ return op.BaseRewriteOperation.String()
+}
+
+// I'm going to try replacing range from x..y with (y-x)+1 ReplaceOp
+// instructions.
+type ReplaceOp struct {
+ BaseRewriteOperation
+ LastIndex int
+}
+
+func NewReplaceOp(from, to int, text string, stream TokenStream) *ReplaceOp {
+ return &ReplaceOp{
+ BaseRewriteOperation: BaseRewriteOperation{
+ index: from,
+ text: text,
+ op_name: "ReplaceOp",
+ tokens: stream,
+ },
+ LastIndex: to,
+ }
+}
+
+func (op *ReplaceOp) Execute(buffer *bytes.Buffer) int {
+ if op.text != "" {
+ buffer.WriteString(op.text)
+ }
+ return op.LastIndex + 1
+}
+
+func (op *ReplaceOp) String() string {
+ if op.text == "" {
+ return fmt.Sprintf("",
+ op.tokens.Get(op.index), op.tokens.Get(op.LastIndex))
+ }
+ return fmt.Sprintf("",
+ op.tokens.Get(op.index), op.tokens.Get(op.LastIndex), op.text)
+}
+
+type TokenStreamRewriter struct {
+ //Our source stream
+ tokens TokenStream
+ // You may have multiple, named streams of rewrite operations.
+ // I'm calling these things "programs."
+ // Maps String (name) → rewrite (List)
+ programs map[string][]RewriteOperation
+ last_rewrite_token_indexes map[string]int
+}
+
+func NewTokenStreamRewriter(tokens TokenStream) *TokenStreamRewriter {
+ return &TokenStreamRewriter{
+ tokens: tokens,
+ programs: map[string][]RewriteOperation{
+ Default_Program_Name: make([]RewriteOperation, 0, Program_Init_Size),
+ },
+ last_rewrite_token_indexes: map[string]int{},
+ }
+}
+
+func (tsr *TokenStreamRewriter) GetTokenStream() TokenStream {
+ return tsr.tokens
+}
+
+// Rollback the instruction stream for a program so that
+// the indicated instruction (via instructionIndex) is no
+// longer in the stream. UNTESTED!
+func (tsr *TokenStreamRewriter) Rollback(program_name string, instruction_index int) {
+ is, ok := tsr.programs[program_name]
+ if ok {
+ tsr.programs[program_name] = is[Min_Token_Index:instruction_index]
+ }
+}
+
+func (tsr *TokenStreamRewriter) RollbackDefault(instruction_index int) {
+ tsr.Rollback(Default_Program_Name, instruction_index)
+}
+
+// Reset the program so that no instructions exist
+func (tsr *TokenStreamRewriter) DeleteProgram(program_name string) {
+ tsr.Rollback(program_name, Min_Token_Index) //TODO: double test on that cause lower bound is not included
+}
+
+func (tsr *TokenStreamRewriter) DeleteProgramDefault() {
+ tsr.DeleteProgram(Default_Program_Name)
+}
+
+func (tsr *TokenStreamRewriter) InsertAfter(program_name string, index int, text string) {
+ // to insert after, just insert before next index (even if past end)
+ var op RewriteOperation = NewInsertAfterOp(index, text, tsr.tokens)
+ rewrites := tsr.GetProgram(program_name)
+ op.SetInstructionIndex(len(rewrites))
+ tsr.AddToProgram(program_name, op)
+}
+
+func (tsr *TokenStreamRewriter) InsertAfterDefault(index int, text string) {
+ tsr.InsertAfter(Default_Program_Name, index, text)
+}
+
+func (tsr *TokenStreamRewriter) InsertAfterToken(program_name string, token Token, text string) {
+ tsr.InsertAfter(program_name, token.GetTokenIndex(), text)
+}
+
+func (tsr *TokenStreamRewriter) InsertBefore(program_name string, index int, text string) {
+ var op RewriteOperation = NewInsertBeforeOp(index, text, tsr.tokens)
+ rewrites := tsr.GetProgram(program_name)
+ op.SetInstructionIndex(len(rewrites))
+ tsr.AddToProgram(program_name, op)
+}
+
+func (tsr *TokenStreamRewriter) InsertBeforeDefault(index int, text string) {
+ tsr.InsertBefore(Default_Program_Name, index, text)
+}
+
+func (tsr *TokenStreamRewriter) InsertBeforeToken(program_name string, token Token, text string) {
+ tsr.InsertBefore(program_name, token.GetTokenIndex(), text)
+}
+
+func (tsr *TokenStreamRewriter) Replace(program_name string, from, to int, text string) {
+ if from > to || from < 0 || to < 0 || to >= tsr.tokens.Size() {
+ panic(fmt.Sprintf("replace: range invalid: %d..%d(size=%d)",
+ from, to, tsr.tokens.Size()))
+ }
+ var op RewriteOperation = NewReplaceOp(from, to, text, tsr.tokens)
+ rewrites := tsr.GetProgram(program_name)
+ op.SetInstructionIndex(len(rewrites))
+ tsr.AddToProgram(program_name, op)
+}
+
+func (tsr *TokenStreamRewriter) ReplaceDefault(from, to int, text string) {
+ tsr.Replace(Default_Program_Name, from, to, text)
+}
+
+func (tsr *TokenStreamRewriter) ReplaceDefaultPos(index int, text string) {
+ tsr.ReplaceDefault(index, index, text)
+}
+
+func (tsr *TokenStreamRewriter) ReplaceToken(program_name string, from, to Token, text string) {
+ tsr.Replace(program_name, from.GetTokenIndex(), to.GetTokenIndex(), text)
+}
+
+func (tsr *TokenStreamRewriter) ReplaceTokenDefault(from, to Token, text string) {
+ tsr.ReplaceToken(Default_Program_Name, from, to, text)
+}
+
+func (tsr *TokenStreamRewriter) ReplaceTokenDefaultPos(index Token, text string) {
+ tsr.ReplaceTokenDefault(index, index, text)
+}
+
+func (tsr *TokenStreamRewriter) Delete(program_name string, from, to int) {
+ tsr.Replace(program_name, from, to, "")
+}
+
+func (tsr *TokenStreamRewriter) DeleteDefault(from, to int) {
+ tsr.Delete(Default_Program_Name, from, to)
+}
+
+func (tsr *TokenStreamRewriter) DeleteDefaultPos(index int) {
+ tsr.DeleteDefault(index, index)
+}
+
+func (tsr *TokenStreamRewriter) DeleteToken(program_name string, from, to Token) {
+ tsr.ReplaceToken(program_name, from, to, "")
+}
+
+func (tsr *TokenStreamRewriter) DeleteTokenDefault(from, to Token) {
+ tsr.DeleteToken(Default_Program_Name, from, to)
+}
+
+func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndex(program_name string) int {
+ i, ok := tsr.last_rewrite_token_indexes[program_name]
+ if !ok {
+ return -1
+ }
+ return i
+}
+
+func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndexDefault() int {
+ return tsr.GetLastRewriteTokenIndex(Default_Program_Name)
+}
+
+func (tsr *TokenStreamRewriter) SetLastRewriteTokenIndex(program_name string, i int) {
+ tsr.last_rewrite_token_indexes[program_name] = i
+}
+
+func (tsr *TokenStreamRewriter) InitializeProgram(name string) []RewriteOperation {
+ is := make([]RewriteOperation, 0, Program_Init_Size)
+ tsr.programs[name] = is
+ return is
+}
+
+func (tsr *TokenStreamRewriter) AddToProgram(name string, op RewriteOperation) {
+ is := tsr.GetProgram(name)
+ is = append(is, op)
+ tsr.programs[name] = is
+}
+
+func (tsr *TokenStreamRewriter) GetProgram(name string) []RewriteOperation {
+ is, ok := tsr.programs[name]
+ if !ok {
+ is = tsr.InitializeProgram(name)
+ }
+ return is
+}
+
+// Return the text from the original tokens altered per the
+// instructions given to this rewriter.
+func (tsr *TokenStreamRewriter) GetTextDefault() string {
+ return tsr.GetText(
+ Default_Program_Name,
+ NewInterval(0, tsr.tokens.Size()-1))
+}
+
+// Return the text from the original tokens altered per the
+// instructions given to this rewriter.
+func (tsr *TokenStreamRewriter) GetText(program_name string, interval *Interval) string {
+ rewrites := tsr.programs[program_name]
+ start := interval.Start
+ stop := interval.Stop
+ // ensure start/end are in range
+ stop = min(stop, tsr.tokens.Size()-1)
+ start = max(start, 0)
+ if rewrites == nil || len(rewrites) == 0 {
+ return tsr.tokens.GetTextFromInterval(interval) // no instructions to execute
+ }
+ buf := bytes.Buffer{}
+ // First, optimize instruction stream
+ indexToOp := reduceToSingleOperationPerIndex(rewrites)
+ // Walk buffer, executing instructions and emitting tokens
+ for i := start; i <= stop && i < tsr.tokens.Size(); {
+ op := indexToOp[i]
+ delete(indexToOp, i) // remove so any left have index size-1
+ t := tsr.tokens.Get(i)
+ if op == nil {
+ // no operation at that index, just dump token
+ if t.GetTokenType() != TokenEOF {
+ buf.WriteString(t.GetText())
+ }
+ i++ // move to next token
+ } else {
+ i = op.Execute(&buf) // execute operation and skip
+ }
+ }
+ // include stuff after end if it's last index in buffer
+ // So, if they did an insertAfter(lastValidIndex, "foo"), include
+ // foo if end==lastValidIndex.
+ if stop == tsr.tokens.Size()-1 {
+ // Scan any remaining operations after last token
+ // should be included (they will be inserts).
+ for _, op := range indexToOp {
+ if op.GetIndex() >= tsr.tokens.Size()-1 {
+ buf.WriteString(op.GetText())
+ }
+ }
+ }
+ return buf.String()
+}
+
+// We need to combine operations and report invalid operations (like
+// overlapping replaces that are not completed nested). Inserts to
+// same index need to be combined etc... Here are the cases:
+//
+// I.i.u I.j.v leave alone, nonoverlapping
+// I.i.u I.i.v combine: Iivu
+//
+// R.i-j.u R.x-y.v | i-j in x-y delete first R
+// R.i-j.u R.i-j.v delete first R
+// R.i-j.u R.x-y.v | x-y in i-j ERROR
+// R.i-j.u R.x-y.v | boundaries overlap ERROR
+//
+// Delete special case of replace (text==null):
+// D.i-j.u D.x-y.v | boundaries overlap combine to max(min)..max(right)
+//
+// I.i.u R.x-y.v | i in (x+1)-y delete I (since insert before
+// we're not deleting i)
+// I.i.u R.x-y.v | i not in (x+1)-y leave alone, nonoverlapping
+// R.x-y.v I.i.u | i in x-y ERROR
+// R.x-y.v I.x.u R.x-y.uv (combine, delete I)
+// R.x-y.v I.i.u | i not in x-y leave alone, nonoverlapping
+//
+// I.i.u = insert u before op @ index i
+// R.x-y.u = replace x-y indexed tokens with u
+//
+// First we need to examine replaces. For any replace op:
+//
+// 1. wipe out any insertions before op within that range.
+// 2. Drop any replace op before that is contained completely within
+// that range.
+// 3. Throw exception upon boundary overlap with any previous replace.
+//
+// Then we can deal with inserts:
+//
+// 1. for any inserts to same index, combine even if not adjacent.
+// 2. for any prior replace with same left boundary, combine this
+// insert with replace and delete this replace.
+// 3. throw exception if index in same range as previous replace
+//
+// Don't actually delete; make op null in list. Easier to walk list.
+// Later we can throw as we add to index → op map.
+//
+// Note that I.2 R.2-2 will wipe out I.2 even though, technically, the
+// inserted stuff would be before the replace range. But, if you
+// add tokens in front of a method body '{' and then delete the method
+// body, I think the stuff before the '{' you added should disappear too.
+//
+// Return a map from token index to operation.
+func reduceToSingleOperationPerIndex(rewrites []RewriteOperation) map[int]RewriteOperation {
+ // WALK REPLACES
+ for i := 0; i < len(rewrites); i++ {
+ op := rewrites[i]
+ if op == nil {
+ continue
+ }
+ rop, ok := op.(*ReplaceOp)
+ if !ok {
+ continue
+ }
+ // Wipe prior inserts within range
+ for j := 0; j < i && j < len(rewrites); j++ {
+ if iop, ok := rewrites[j].(*InsertBeforeOp); ok {
+ if iop.index == rop.index {
+ // E.g., insert before 2, delete 2..2; update replace
+ // text to include insert before, kill insert
+ rewrites[iop.instruction_index] = nil
+ if rop.text != "" {
+ rop.text = iop.text + rop.text
+ } else {
+ rop.text = iop.text
+ }
+ } else if iop.index > rop.index && iop.index <= rop.LastIndex {
+ // delete insert as it's a no-op.
+ rewrites[iop.instruction_index] = nil
+ }
+ }
+ }
+ // Drop any prior replaces contained within
+ for j := 0; j < i && j < len(rewrites); j++ {
+ if prevop, ok := rewrites[j].(*ReplaceOp); ok {
+ if prevop.index >= rop.index && prevop.LastIndex <= rop.LastIndex {
+ // delete replace as it's a no-op.
+ rewrites[prevop.instruction_index] = nil
+ continue
+ }
+ // throw exception unless disjoint or identical
+ disjoint := prevop.LastIndex < rop.index || prevop.index > rop.LastIndex
+ // Delete special case of replace (text==null):
+ // D.i-j.u D.x-y.v | boundaries overlap combine to max(min)..max(right)
+ if prevop.text == "" && rop.text == "" && !disjoint {
+ rewrites[prevop.instruction_index] = nil
+ rop.index = min(prevop.index, rop.index)
+ rop.LastIndex = max(prevop.LastIndex, rop.LastIndex)
+ println("new rop" + rop.String()) //TODO: remove console write, taken from Java version
+ } else if !disjoint {
+ panic("replace op boundaries of " + rop.String() + " overlap with previous " + prevop.String())
+ }
+ }
+ }
+ }
+ // WALK INSERTS
+ for i := 0; i < len(rewrites); i++ {
+ op := rewrites[i]
+ if op == nil {
+ continue
+ }
+ //hack to replicate inheritance in composition
+ _, iok := rewrites[i].(*InsertBeforeOp)
+ _, aok := rewrites[i].(*InsertAfterOp)
+ if !iok && !aok {
+ continue
+ }
+ iop := rewrites[i]
+ // combine current insert with prior if any at same index
+ // deviating a bit from TokenStreamRewriter.java - hard to incorporate inheritance logic
+ for j := 0; j < i && j < len(rewrites); j++ {
+ if nextIop, ok := rewrites[j].(*InsertAfterOp); ok {
+ if nextIop.index == iop.GetIndex() {
+ iop.SetText(nextIop.text + iop.GetText())
+ rewrites[j] = nil
+ }
+ }
+ if prevIop, ok := rewrites[j].(*InsertBeforeOp); ok {
+ if prevIop.index == iop.GetIndex() {
+ iop.SetText(iop.GetText() + prevIop.text)
+ rewrites[prevIop.instruction_index] = nil
+ }
+ }
+ }
+ // look for replaces where iop.index is in range; error
+ for j := 0; j < i && j < len(rewrites); j++ {
+ if rop, ok := rewrites[j].(*ReplaceOp); ok {
+ if iop.GetIndex() == rop.index {
+ rop.text = iop.GetText() + rop.text
+ rewrites[i] = nil
+ continue
+ }
+ if iop.GetIndex() >= rop.index && iop.GetIndex() <= rop.LastIndex {
+ panic("insert op " + iop.String() + " within boundaries of previous " + rop.String())
+ }
+ }
+ }
+ }
+ m := map[int]RewriteOperation{}
+ for i := 0; i < len(rewrites); i++ {
+ op := rewrites[i]
+ if op == nil {
+ continue
+ }
+ if _, ok := m[op.GetIndex()]; ok {
+ panic("should only be one op per index")
+ }
+ m[op.GetIndex()] = op
+ }
+ return m
+}
+
+/*
+ Quick fixing Go lack of overloads
+*/
+
+func max(a, b int) int {
+ if a > b {
+ return a
+ } else {
+ return b
+ }
+}
+func min(a, b int) int {
+ if a < b {
+ return a
+ } else {
+ return b
+ }
+}
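The doc comment at the top of this file shows the intended usage in Java. A rough Go equivalent is sketched below; it assumes a lexer generated from some grammar (the parser package and NewMyLexer are hypothetical), while the rewriter calls themselves (NewTokenStreamRewriter, InsertBeforeDefault, ReplaceDefault, InsertAfterDefault, GetTextDefault) are the ones defined above.

```go
package main

import (
	"fmt"

	"github.com/antlr/antlr4/runtime/Go/antlr/v4"

	"example.com/myproject/parser" // hypothetical generated package
)

func main() {
	// Lex the input once; the rewriter never mutates this stream.
	input := antlr.NewInputStream("int x = 1;")
	lexer := parser.NewMyLexer(input) // hypothetical generated lexer
	tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
	tokens.Fill()

	rewriter := antlr.NewTokenStreamRewriter(tokens)

	// Queue lazy edits against token indexes; nothing is applied until GetText.
	rewriter.InsertBeforeDefault(0, "/* generated */ ")
	rewriter.ReplaceDefault(2, 2, "y") // hypothetical index of the identifier token
	rewriter.InsertAfterDefault(tokens.Size()-1, "\n")

	fmt.Println(rewriter.GetTextDefault())
}
```

Edits are queued per named program and only applied when GetText walks the token buffer, so the underlying CommonTokenStream is never modified.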
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/trace_listener.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/trace_listener.go
similarity index 93%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/trace_listener.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/trace_listener.go
index e6fff992f..7b663bf84 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/trace_listener.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/trace_listener.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/transition.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/transition.go
similarity index 99%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/transition.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/transition.go
index 53056bd74..36be4f733 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/transition.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/transition.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/tree.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/tree.go
similarity index 98%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/tree.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/tree.go
index 08ce22bba..85b4f137b 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/tree.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/tree.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -234,10 +234,8 @@ func (p *ParseTreeWalker) Walk(listener ParseTreeListener, t Tree) {
}
}
-//
// Enters a grammar rule by first triggering the generic event {@link ParseTreeListener//EnterEveryRule}
// then by triggering the event specific to the given parse tree node
-//
func (p *ParseTreeWalker) EnterRule(listener ParseTreeListener, r RuleNode) {
ctx := r.GetRuleContext().(ParserRuleContext)
listener.EnterEveryRule(ctx)
@@ -246,7 +244,6 @@ func (p *ParseTreeWalker) EnterRule(listener ParseTreeListener, r RuleNode) {
// Exits a grammar rule by first triggering the event specific to the given parse tree node
// then by triggering the generic event {@link ParseTreeListener//ExitEveryRule}
-//
func (p *ParseTreeWalker) ExitRule(listener ParseTreeListener, r RuleNode) {
ctx := r.GetRuleContext().(ParserRuleContext)
ctx.ExitRule(listener)
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/trees.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/trees.go
similarity index 93%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/trees.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/trees.go
index 80144ecad..d7dbb0322 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/trees.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/trees.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -9,8 +9,9 @@ import "fmt"
/** A set of utility routines useful for all kinds of ANTLR trees. */
// Print out a whole tree in LISP form. {@link //getNodeText} is used on the
-// node payloads to get the text for the nodes. Detect
-// parse trees and extract data appropriately.
+//
+// node payloads to get the text for the nodes. Detect
+// parse trees and extract data appropriately.
func TreesStringTree(tree Tree, ruleNames []string, recog Recognizer) string {
if recog != nil {
@@ -80,8 +81,8 @@ func TreesGetChildren(t Tree) []Tree {
}
// Return a list of all ancestors of this node. The first node of
-// list is the root and the last is the parent of this node.
//
+// list is the root and the last is the parent of this node.
func TreesgetAncestors(t Tree) []Tree {
ancestors := make([]Tree, 0)
t = t.GetParent()
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/utils.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/utils.go
similarity index 94%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/utils.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/utils.go
index ec219df98..9fad5d916 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/utils.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/utils.go
@@ -1,4 +1,4 @@
-// Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
+// Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.
// Use of this file is governed by the BSD 3-clause license that
// can be found in the LICENSE.txt file in the project root.
@@ -47,28 +47,25 @@ func (s *IntStack) Push(e int) {
*s = append(*s, e)
}
-func standardEqualsFunction(a interface{}, b interface{}) bool {
-
- ac, oka := a.(comparable)
- bc, okb := b.(comparable)
+type comparable interface {
+ Equals(other Collectable[any]) bool
+}
- if !oka || !okb {
- panic("Not Comparable")
- }
+func standardEqualsFunction(a Collectable[any], b Collectable[any]) bool {
- return ac.equals(bc)
+ return a.Equals(b)
}
func standardHashFunction(a interface{}) int {
if h, ok := a.(hasher); ok {
- return h.hash()
+ return h.Hash()
}
panic("Not Hasher")
}
type hasher interface {
- hash() int
+ Hash() int
}
const bitsPerWord = 64
@@ -171,7 +168,7 @@ func (b *BitSet) equals(other interface{}) bool {
// We only compare set bits, so we cannot rely on the two slices having the same size. Its
// possible for two BitSets to have different slice lengths but the same set bits. So we only
- // compare the relavent words and ignore the trailing zeros.
+ // compare the relevant words and ignore the trailing zeros.
bLen := b.minLen()
otherLen := otherBitSet.minLen()
diff --git a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/utils_set.go b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/utils_set.go
similarity index 80%
rename from tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/utils_set.go
rename to tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/utils_set.go
index 0d4eac698..c9bd6751e 100644
--- a/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/utils_set.go
+++ b/tools/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/utils_set.go
@@ -8,8 +8,6 @@ const (
_loadFactor = 0.75
)
-var _ Set = (*array2DHashSet)(nil)
-
type Set interface {
Add(value interface{}) (added interface{})
Len() int
@@ -20,9 +18,9 @@ type Set interface {
}
type array2DHashSet struct {
- buckets [][]interface{}
+ buckets [][]Collectable[any]
hashcodeFunction func(interface{}) int
- equalsFunction func(interface{}, interface{}) bool
+ equalsFunction func(Collectable[any], Collectable[any]) bool
n int // How many elements in set
threshold int // when to expand
@@ -61,11 +59,11 @@ func (as *array2DHashSet) Values() []interface{} {
return values
}
-func (as *array2DHashSet) Contains(value interface{}) bool {
+func (as *array2DHashSet) Contains(value Collectable[any]) bool {
return as.Get(value) != nil
}
-func (as *array2DHashSet) Add(value interface{}) interface{} {
+func (as *array2DHashSet) Add(value Collectable[any]) interface{} {
if as.n > as.threshold {
as.expand()
}
@@ -98,7 +96,7 @@ func (as *array2DHashSet) expand() {
b := as.getBuckets(o)
bucketLength := newBucketLengths[b]
- var newBucket []interface{}
+ var newBucket []Collectable[any]
if bucketLength == 0 {
// new bucket
newBucket = as.createBucket(as.initialBucketCapacity)
@@ -107,7 +105,7 @@ func (as *array2DHashSet) expand() {
newBucket = newTable[b]
if bucketLength == len(newBucket) {
// expand
- newBucketCopy := make([]interface{}, len(newBucket)<<1)
+ newBucketCopy := make([]Collectable[any], len(newBucket)<<1)
copy(newBucketCopy[:bucketLength], newBucket)
newBucket = newBucketCopy
newTable[b] = newBucket
@@ -124,7 +122,7 @@ func (as *array2DHashSet) Len() int {
return as.n
}
-func (as *array2DHashSet) Get(o interface{}) interface{} {
+func (as *array2DHashSet) Get(o Collectable[any]) interface{} {
if o == nil {
return nil
}
@@ -147,7 +145,7 @@ func (as *array2DHashSet) Get(o interface{}) interface{} {
return nil
}
-func (as *array2DHashSet) innerAdd(o interface{}) interface{} {
+func (as *array2DHashSet) innerAdd(o Collectable[any]) interface{} {
b := as.getBuckets(o)
bucket := as.buckets[b]
@@ -178,7 +176,7 @@ func (as *array2DHashSet) innerAdd(o interface{}) interface{} {
// full bucket, expand and add to end
oldLength := len(bucket)
- bucketCopy := make([]interface{}, oldLength<<1)
+ bucketCopy := make([]Collectable[any], oldLength<<1)
copy(bucketCopy[:oldLength], bucket)
bucket = bucketCopy
as.buckets[b] = bucket
@@ -187,22 +185,22 @@ func (as *array2DHashSet) innerAdd(o interface{}) interface{} {
return o
}
-func (as *array2DHashSet) getBuckets(value interface{}) int {
+func (as *array2DHashSet) getBuckets(value Collectable[any]) int {
hash := as.hashcodeFunction(value)
return hash & (len(as.buckets) - 1)
}
-func (as *array2DHashSet) createBuckets(cap int) [][]interface{} {
- return make([][]interface{}, cap)
+func (as *array2DHashSet) createBuckets(cap int) [][]Collectable[any] {
+ return make([][]Collectable[any], cap)
}
-func (as *array2DHashSet) createBucket(cap int) []interface{} {
- return make([]interface{}, cap)
+func (as *array2DHashSet) createBucket(cap int) []Collectable[any] {
+ return make([]Collectable[any], cap)
}
func newArray2DHashSetWithCap(
hashcodeFunction func(interface{}) int,
- equalsFunction func(interface{}, interface{}) bool,
+ equalsFunction func(Collectable[any], Collectable[any]) bool,
initCap int,
initBucketCap int,
) *array2DHashSet {
@@ -231,7 +229,7 @@ func newArray2DHashSetWithCap(
func newArray2DHashSet(
hashcodeFunction func(interface{}) int,
- equalsFunction func(interface{}, interface{}) bool,
+ equalsFunction func(Collectable[any], Collectable[any]) bool,
) *array2DHashSet {
return newArray2DHashSetWithCap(hashcodeFunction, equalsFunction, _initalCapacity, _initalBucketCapacity)
}
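
The `getBuckets` method above selects a bucket with `hash & (len(as.buckets) - 1)`, which matches `hash % len(as.buckets)` only when the bucket count is a power of two. A tiny self-contained sketch of that indexing trick, using made-up values rather than the vendored set:

```go
package main

import "fmt"

// bucketIndex mirrors the masking used by getBuckets above: for a power-of-two
// bucket count n, h & (n-1) is equivalent to h % n but avoids a division.
func bucketIndex(hash, numBuckets int) int {
	return hash & (numBuckets - 1)
}

func main() {
	const buckets = 16 // must be a power of two for the mask to be valid
	for _, h := range []int{5, 21, 37, 1024} {
		fmt.Printf("hash %4d -> bucket %2d (mod check: %2d)\n",
			h, bucketIndex(h, buckets), h%buckets)
	}
}
```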
diff --git a/tools/vendor/github.com/asaskevich/govalidator/validator.go b/tools/vendor/github.com/asaskevich/govalidator/validator.go
index 46ecfc84a..c9c4fac06 100644
--- a/tools/vendor/github.com/asaskevich/govalidator/validator.go
+++ b/tools/vendor/github.com/asaskevich/govalidator/validator.go
@@ -454,27 +454,26 @@ func IsCreditCard(str string) bool {
if !rxCreditCard.MatchString(sanitized) {
return false
}
+
+ number, _ := ToInt(sanitized)
+ number, lastDigit := number / 10, number % 10
+
var sum int64
- var digit string
- var tmpNum int64
- var shouldDouble bool
- for i := len(sanitized) - 1; i >= 0; i-- {
- digit = sanitized[i:(i + 1)]
- tmpNum, _ = ToInt(digit)
- if shouldDouble {
- tmpNum *= 2
- if tmpNum >= 10 {
- sum += (tmpNum % 10) + 1
- } else {
- sum += tmpNum
+ for i:=0; number > 0; i++ {
+ digit := number % 10
+
+ if i % 2 == 0 {
+ digit *= 2
+ if digit > 9 {
+ digit -= 9
}
- } else {
- sum += tmpNum
}
- shouldDouble = !shouldDouble
+
+ sum += digit
+ number = number / 10
}
-
- return sum%10 == 0
+
+ return (sum + lastDigit) % 10 == 0
}
// IsISBN10 checks if the string is an ISBN version 10.
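
The rewritten `IsCreditCard` body above computes the Luhn checksum numerically instead of walking the string: it splits off the trailing check digit, doubles every second digit from the right (subtracting 9 when the doubled value exceeds 9), and verifies that the total plus the check digit is divisible by 10. A standalone sketch of the same check, independent of govalidator and limited to plain digit strings:

```go
package main

import "fmt"

// luhnValid reports whether a string of ASCII digits passes the Luhn check,
// following the same split-off-the-check-digit approach as the change above.
// Like that change, it parses the whole number into an int64, so it only
// handles inputs that fit in 19 digits or fewer.
func luhnValid(digits string) bool {
	var number, lastDigit int64
	for _, r := range digits {
		if r < '0' || r > '9' {
			return false
		}
		number = number*10 + int64(r-'0')
	}
	number, lastDigit = number/10, number%10

	var sum int64
	for i := 0; number > 0; i++ {
		digit := number % 10
		if i%2 == 0 {
			digit *= 2
			if digit > 9 {
				digit -= 9
			}
		}
		sum += digit
		number /= 10
	}
	return (sum+lastDigit)%10 == 0
}

func main() {
	fmt.Println(luhnValid("79927398713")) // classic Luhn test number: true
	fmt.Println(luhnValid("79927398710")) // altered check digit: false
}
```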
diff --git a/tools/vendor/github.com/cenkalti/backoff/v4/.travis.yml b/tools/vendor/github.com/cenkalti/backoff/v4/.travis.yml
deleted file mode 100644
index c79105c2f..000000000
--- a/tools/vendor/github.com/cenkalti/backoff/v4/.travis.yml
+++ /dev/null
@@ -1,10 +0,0 @@
-language: go
-go:
- - 1.13
- - 1.x
- - tip
-before_install:
- - go get github.com/mattn/goveralls
- - go get golang.org/x/tools/cmd/cover
-script:
- - $HOME/gopath/bin/goveralls -service=travis-ci
diff --git a/tools/vendor/github.com/cenkalti/backoff/v4/retry.go b/tools/vendor/github.com/cenkalti/backoff/v4/retry.go
index 1ce2507eb..b9c0c51cd 100644
--- a/tools/vendor/github.com/cenkalti/backoff/v4/retry.go
+++ b/tools/vendor/github.com/cenkalti/backoff/v4/retry.go
@@ -5,10 +5,20 @@ import (
"time"
)
+// An OperationWithData is executed by RetryWithData() or RetryNotifyWithData().
+// The operation will be retried using a backoff policy if it returns an error.
+type OperationWithData[T any] func() (T, error)
+
// An Operation is executing by Retry() or RetryNotify().
// The operation will be retried using a backoff policy if it returns an error.
type Operation func() error
+func (o Operation) withEmptyData() OperationWithData[struct{}] {
+ return func() (struct{}, error) {
+ return struct{}{}, o()
+ }
+}
+
// Notify is a notify-on-error function. It receives an operation error and
// backoff delay if the operation failed (with an error).
//
@@ -28,18 +38,41 @@ func Retry(o Operation, b BackOff) error {
return RetryNotify(o, b, nil)
}
+// RetryWithData is like Retry but returns data in the response too.
+func RetryWithData[T any](o OperationWithData[T], b BackOff) (T, error) {
+ return RetryNotifyWithData(o, b, nil)
+}
+
// RetryNotify calls notify function with the error and wait duration
// for each failed attempt before sleep.
func RetryNotify(operation Operation, b BackOff, notify Notify) error {
return RetryNotifyWithTimer(operation, b, notify, nil)
}
+// RetryNotifyWithData is like RetryNotify but returns data in the response too.
+func RetryNotifyWithData[T any](operation OperationWithData[T], b BackOff, notify Notify) (T, error) {
+ return doRetryNotify(operation, b, notify, nil)
+}
+
// RetryNotifyWithTimer calls notify function with the error and wait duration using the given Timer
// for each failed attempt before sleep.
// A default timer that uses system timer is used when nil is passed.
func RetryNotifyWithTimer(operation Operation, b BackOff, notify Notify, t Timer) error {
- var err error
- var next time.Duration
+ _, err := doRetryNotify(operation.withEmptyData(), b, notify, t)
+ return err
+}
+
+// RetryNotifyWithTimerAndData is like RetryNotifyWithTimer but returns data in the response too.
+func RetryNotifyWithTimerAndData[T any](operation OperationWithData[T], b BackOff, notify Notify, t Timer) (T, error) {
+ return doRetryNotify(operation, b, notify, t)
+}
+
+func doRetryNotify[T any](operation OperationWithData[T], b BackOff, notify Notify, t Timer) (T, error) {
+ var (
+ err error
+ next time.Duration
+ res T
+ )
if t == nil {
t = &defaultTimer{}
}
@@ -52,21 +85,22 @@ func RetryNotifyWithTimer(operation Operation, b BackOff, notify Notify, t Timer
b.Reset()
for {
- if err = operation(); err == nil {
- return nil
+ res, err = operation()
+ if err == nil {
+ return res, nil
}
var permanent *PermanentError
if errors.As(err, &permanent) {
- return permanent.Err
+ return res, permanent.Err
}
if next = b.NextBackOff(); next == Stop {
if cerr := ctx.Err(); cerr != nil {
- return cerr
+ return res, cerr
}
- return err
+ return res, err
}
if notify != nil {
@@ -77,7 +111,7 @@ func RetryNotifyWithTimer(operation Operation, b BackOff, notify Notify, t Timer
select {
case <-ctx.Done():
- return ctx.Err()
+ return res, ctx.Err()
case <-t.C():
}
}
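
The `retry.go` changes add generic `*WithData` variants so an operation can return a value alongside its error, with the old `Operation` path reimplemented on top of them via `withEmptyData()`. A hedged usage sketch of `RetryWithData` (the fetch function and URL are hypothetical, not part of the library):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/cenkalti/backoff/v4"
)

func main() {
	// fetchStatus is a hypothetical operation: it returns data (a status code)
	// plus an error, so it fits the new OperationWithData[T] shape.
	fetchStatus := func() (int, error) {
		resp, err := http.Get("https://example.com/") // placeholder URL
		if err != nil {
			return 0, err
		}
		defer resp.Body.Close()
		return resp.StatusCode, nil
	}

	status, err := backoff.RetryWithData(fetchStatus, backoff.NewExponentialBackOff())
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("status:", status)
}
```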
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/README.md b/tools/vendor/github.com/cespare/xxhash/v2/README.md
index 792b4a60b..8bf0e5b78 100644
--- a/tools/vendor/github.com/cespare/xxhash/v2/README.md
+++ b/tools/vendor/github.com/cespare/xxhash/v2/README.md
@@ -3,8 +3,7 @@
[![Go Reference](https://pkg.go.dev/badge/github.com/cespare/xxhash/v2.svg)](https://pkg.go.dev/github.com/cespare/xxhash/v2)
[![Test](https://github.com/cespare/xxhash/actions/workflows/test.yml/badge.svg)](https://github.com/cespare/xxhash/actions/workflows/test.yml)
-xxhash is a Go implementation of the 64-bit
-[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a
+xxhash is a Go implementation of the 64-bit [xxHash] algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
@@ -25,8 +24,11 @@ func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
-This implementation provides a fast pure-Go implementation and an even faster
-assembly implementation for amd64.
+The package is written with optimized pure Go and also contains even faster
+assembly implementations for amd64 and arm64. If desired, the `purego` build tag
+opts into using the Go code even on those architectures.
+
+[xxHash]: http://cyan4973.github.io/xxHash/
## Compatibility
@@ -45,19 +47,20 @@ I recommend using the latest release of Go.
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
-| input size | purego | asm |
-| --- | --- | --- |
-| 5 B | 979.66 MB/s | 1291.17 MB/s |
-| 100 B | 7475.26 MB/s | 7973.40 MB/s |
-| 4 KB | 17573.46 MB/s | 17602.65 MB/s |
-| 10 MB | 17131.46 MB/s | 17142.16 MB/s |
+| input size | purego | asm |
+| ---------- | --------- | --------- |
+| 4 B | 1.3 GB/s | 1.2 GB/s |
+| 16 B | 2.9 GB/s | 3.5 GB/s |
+| 100 B | 6.9 GB/s | 8.1 GB/s |
+| 4 KB | 11.7 GB/s | 16.7 GB/s |
+| 10 MB | 12.0 GB/s | 17.3 GB/s |
-These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using
-the following commands under Go 1.11.2:
+These numbers were generated on Ubuntu 20.04 with an Intel Xeon Platinum 8252C
+CPU using the following commands under Go 1.19.2:
```
-$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'
-$ go test -benchtime 10s -bench '/xxhash,direct,bytes'
+benchstat <(go test -tags purego -benchtime 500ms -count 15 -bench 'Sum64$')
+benchstat <(go test -benchtime 500ms -count 15 -bench 'Sum64$')
```
## Projects using this package
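
For orientation, a minimal usage sketch of the API listed in the README above, showing both the one-shot `Sum64` and the streaming `Digest`:

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	data := []byte("hello, xxhash")

	// One-shot hashing.
	fmt.Printf("Sum64:  %016x\n", xxhash.Sum64(data))

	// Streaming hashing via the Digest type from the README's API listing;
	// both calls should print the same value.
	d := xxhash.New()
	_, _ = d.Write(data)
	fmt.Printf("Digest: %016x\n", d.Sum64())
}
```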
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/testall.sh b/tools/vendor/github.com/cespare/xxhash/v2/testall.sh
new file mode 100644
index 000000000..94b9c4439
--- /dev/null
+++ b/tools/vendor/github.com/cespare/xxhash/v2/testall.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+set -eu -o pipefail
+
+# Small convenience script for running the tests with various combinations of
+# arch/tags. This assumes we're running on amd64 and have qemu available.
+
+go test ./...
+go test -tags purego ./...
+GOARCH=arm64 go test
+GOARCH=arm64 go test -tags purego
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash.go b/tools/vendor/github.com/cespare/xxhash/v2/xxhash.go
index 15c835d54..a9e0d45c9 100644
--- a/tools/vendor/github.com/cespare/xxhash/v2/xxhash.go
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash.go
@@ -16,19 +16,11 @@ const (
prime5 uint64 = 2870177450012600261
)
-// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where
-// possible in the Go code is worth a small (but measurable) performance boost
-// by avoiding some MOVQs. Vars are needed for the asm and also are useful for
-// convenience in the Go code in a few places where we need to intentionally
-// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the
-// result overflows a uint64).
-var (
- prime1v = prime1
- prime2v = prime2
- prime3v = prime3
- prime4v = prime4
- prime5v = prime5
-)
+// Store the primes in an array as well.
+//
+// The consts are used when possible in Go code to avoid MOVs but we need a
+// contiguous array for the assembly code.
+var primes = [...]uint64{prime1, prime2, prime3, prime4, prime5}
// Digest implements hash.Hash64.
type Digest struct {
@@ -50,10 +42,10 @@ func New() *Digest {
// Reset clears the Digest's state so that it can be reused.
func (d *Digest) Reset() {
- d.v1 = prime1v + prime2
+ d.v1 = primes[0] + prime2
d.v2 = prime2
d.v3 = 0
- d.v4 = -prime1v
+ d.v4 = -primes[0]
d.total = 0
d.n = 0
}
@@ -69,21 +61,23 @@ func (d *Digest) Write(b []byte) (n int, err error) {
n = len(b)
d.total += uint64(n)
+ memleft := d.mem[d.n&(len(d.mem)-1):]
+
if d.n+n < 32 {
// This new data doesn't even fill the current block.
- copy(d.mem[d.n:], b)
+ copy(memleft, b)
d.n += n
return
}
if d.n > 0 {
// Finish off the partial block.
- copy(d.mem[d.n:], b)
+ c := copy(memleft, b)
d.v1 = round(d.v1, u64(d.mem[0:8]))
d.v2 = round(d.v2, u64(d.mem[8:16]))
d.v3 = round(d.v3, u64(d.mem[16:24]))
d.v4 = round(d.v4, u64(d.mem[24:32]))
- b = b[32-d.n:]
+ b = b[c:]
d.n = 0
}
@@ -133,21 +127,20 @@ func (d *Digest) Sum64() uint64 {
h += d.total
- i, end := 0, d.n
- for ; i+8 <= end; i += 8 {
- k1 := round(0, u64(d.mem[i:i+8]))
+ b := d.mem[:d.n&(len(d.mem)-1)]
+ for ; len(b) >= 8; b = b[8:] {
+ k1 := round(0, u64(b[:8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
- if i+4 <= end {
- h ^= uint64(u32(d.mem[i:i+4])) * prime1
+ if len(b) >= 4 {
+ h ^= uint64(u32(b[:4])) * prime1
h = rol23(h)*prime2 + prime3
- i += 4
+ b = b[4:]
}
- for i < end {
- h ^= uint64(d.mem[i]) * prime5
+ for ; len(b) > 0; b = b[1:] {
+ h ^= uint64(b[0]) * prime5
h = rol11(h) * prime1
- i++
}
h ^= h >> 33
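
The `Digest.Write` and `Sum64` changes above index the internal buffer with `d.n & (len(d.mem)-1)` and consume exactly the bytes copied into the partial block (`b = b[c:]`), so hashing data in arbitrary chunk sizes should still match the one-shot result. A small sketch checking that property:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// 160 bytes: several full 32-byte blocks plus stragglers.
	data := bytes.Repeat([]byte("0123456789abcdef"), 10)

	// Feed the digest in 7-byte chunks that straddle the 32-byte block boundary.
	d := xxhash.New()
	for i := 0; i < len(data); i += 7 {
		end := i + 7
		if end > len(data) {
			end = len(data)
		}
		_, _ = d.Write(data[i:end])
	}

	fmt.Println(d.Sum64() == xxhash.Sum64(data)) // expected: true
}
```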
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go
deleted file mode 100644
index ad14b807f..000000000
--- a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go
+++ /dev/null
@@ -1,13 +0,0 @@
-// +build !appengine
-// +build gc
-// +build !purego
-
-package xxhash
-
-// Sum64 computes the 64-bit xxHash digest of b.
-//
-//go:noescape
-func Sum64(b []byte) uint64
-
-//go:noescape
-func writeBlocks(d *Digest, b []byte) int
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
index be8db5bf7..3e8b13257 100644
--- a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
@@ -1,215 +1,209 @@
+//go:build !appengine && gc && !purego
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
-// Register allocation:
-// AX h
-// SI pointer to advance through b
-// DX n
-// BX loop end
-// R8 v1, k1
-// R9 v2
-// R10 v3
-// R11 v4
-// R12 tmp
-// R13 prime1v
-// R14 prime2v
-// DI prime4v
-
-// round reads from and advances the buffer pointer in SI.
-// It assumes that R13 has prime1v and R14 has prime2v.
-#define round(r) \
- MOVQ (SI), R12 \
- ADDQ $8, SI \
- IMULQ R14, R12 \
- ADDQ R12, r \
- ROLQ $31, r \
- IMULQ R13, r
-
-// mergeRound applies a merge round on the two registers acc and val.
-// It assumes that R13 has prime1v, R14 has prime2v, and DI has prime4v.
-#define mergeRound(acc, val) \
- IMULQ R14, val \
- ROLQ $31, val \
- IMULQ R13, val \
- XORQ val, acc \
- IMULQ R13, acc \
- ADDQ DI, acc
+// Registers:
+#define h AX
+#define d AX
+#define p SI // pointer to advance through b
+#define n DX
+#define end BX // loop end
+#define v1 R8
+#define v2 R9
+#define v3 R10
+#define v4 R11
+#define x R12
+#define prime1 R13
+#define prime2 R14
+#define prime4 DI
+
+#define round(acc, x) \
+ IMULQ prime2, x \
+ ADDQ x, acc \
+ ROLQ $31, acc \
+ IMULQ prime1, acc
+
+// round0 performs the operation x = round(0, x).
+#define round0(x) \
+ IMULQ prime2, x \
+ ROLQ $31, x \
+ IMULQ prime1, x
+
+// mergeRound applies a merge round on the two registers acc and x.
+// It assumes that prime1, prime2, and prime4 have been loaded.
+#define mergeRound(acc, x) \
+ round0(x) \
+ XORQ x, acc \
+ IMULQ prime1, acc \
+ ADDQ prime4, acc
+
+// blockLoop processes as many 32-byte blocks as possible,
+// updating v1, v2, v3, and v4. It assumes that there is at least one block
+// to process.
+#define blockLoop() \
+loop: \
+ MOVQ +0(p), x \
+ round(v1, x) \
+ MOVQ +8(p), x \
+ round(v2, x) \
+ MOVQ +16(p), x \
+ round(v3, x) \
+ MOVQ +24(p), x \
+ round(v4, x) \
+ ADDQ $32, p \
+ CMPQ p, end \
+ JLE loop
// func Sum64(b []byte) uint64
-TEXT ·Sum64(SB), NOSPLIT, $0-32
+TEXT ·Sum64(SB), NOSPLIT|NOFRAME, $0-32
// Load fixed primes.
- MOVQ ·prime1v(SB), R13
- MOVQ ·prime2v(SB), R14
- MOVQ ·prime4v(SB), DI
+ MOVQ ·primes+0(SB), prime1
+ MOVQ ·primes+8(SB), prime2
+ MOVQ ·primes+24(SB), prime4
// Load slice.
- MOVQ b_base+0(FP), SI
- MOVQ b_len+8(FP), DX
- LEAQ (SI)(DX*1), BX
+ MOVQ b_base+0(FP), p
+ MOVQ b_len+8(FP), n
+ LEAQ (p)(n*1), end
// The first loop limit will be len(b)-32.
- SUBQ $32, BX
+ SUBQ $32, end
// Check whether we have at least one block.
- CMPQ DX, $32
+ CMPQ n, $32
JLT noBlocks
// Set up initial state (v1, v2, v3, v4).
- MOVQ R13, R8
- ADDQ R14, R8
- MOVQ R14, R9
- XORQ R10, R10
- XORQ R11, R11
- SUBQ R13, R11
-
- // Loop until SI > BX.
-blockLoop:
- round(R8)
- round(R9)
- round(R10)
- round(R11)
-
- CMPQ SI, BX
- JLE blockLoop
-
- MOVQ R8, AX
- ROLQ $1, AX
- MOVQ R9, R12
- ROLQ $7, R12
- ADDQ R12, AX
- MOVQ R10, R12
- ROLQ $12, R12
- ADDQ R12, AX
- MOVQ R11, R12
- ROLQ $18, R12
- ADDQ R12, AX
-
- mergeRound(AX, R8)
- mergeRound(AX, R9)
- mergeRound(AX, R10)
- mergeRound(AX, R11)
+ MOVQ prime1, v1
+ ADDQ prime2, v1
+ MOVQ prime2, v2
+ XORQ v3, v3
+ XORQ v4, v4
+ SUBQ prime1, v4
+
+ blockLoop()
+
+ MOVQ v1, h
+ ROLQ $1, h
+ MOVQ v2, x
+ ROLQ $7, x
+ ADDQ x, h
+ MOVQ v3, x
+ ROLQ $12, x
+ ADDQ x, h
+ MOVQ v4, x
+ ROLQ $18, x
+ ADDQ x, h
+
+ mergeRound(h, v1)
+ mergeRound(h, v2)
+ mergeRound(h, v3)
+ mergeRound(h, v4)
JMP afterBlocks
noBlocks:
- MOVQ ·prime5v(SB), AX
+ MOVQ ·primes+32(SB), h
afterBlocks:
- ADDQ DX, AX
-
- // Right now BX has len(b)-32, and we want to loop until SI > len(b)-8.
- ADDQ $24, BX
-
- CMPQ SI, BX
- JG fourByte
-
-wordLoop:
- // Calculate k1.
- MOVQ (SI), R8
- ADDQ $8, SI
- IMULQ R14, R8
- ROLQ $31, R8
- IMULQ R13, R8
-
- XORQ R8, AX
- ROLQ $27, AX
- IMULQ R13, AX
- ADDQ DI, AX
-
- CMPQ SI, BX
- JLE wordLoop
-
-fourByte:
- ADDQ $4, BX
- CMPQ SI, BX
- JG singles
-
- MOVL (SI), R8
- ADDQ $4, SI
- IMULQ R13, R8
- XORQ R8, AX
-
- ROLQ $23, AX
- IMULQ R14, AX
- ADDQ ·prime3v(SB), AX
-
-singles:
- ADDQ $4, BX
- CMPQ SI, BX
+ ADDQ n, h
+
+ ADDQ $24, end
+ CMPQ p, end
+ JG try4
+
+loop8:
+ MOVQ (p), x
+ ADDQ $8, p
+ round0(x)
+ XORQ x, h
+ ROLQ $27, h
+ IMULQ prime1, h
+ ADDQ prime4, h
+
+ CMPQ p, end
+ JLE loop8
+
+try4:
+ ADDQ $4, end
+ CMPQ p, end
+ JG try1
+
+ MOVL (p), x
+ ADDQ $4, p
+ IMULQ prime1, x
+ XORQ x, h
+
+ ROLQ $23, h
+ IMULQ prime2, h
+ ADDQ ·primes+16(SB), h
+
+try1:
+ ADDQ $4, end
+ CMPQ p, end
JGE finalize
-singlesLoop:
- MOVBQZX (SI), R12
- ADDQ $1, SI
- IMULQ ·prime5v(SB), R12
- XORQ R12, AX
+loop1:
+ MOVBQZX (p), x
+ ADDQ $1, p
+ IMULQ ·primes+32(SB), x
+ XORQ x, h
+ ROLQ $11, h
+ IMULQ prime1, h
- ROLQ $11, AX
- IMULQ R13, AX
-
- CMPQ SI, BX
- JL singlesLoop
+ CMPQ p, end
+ JL loop1
finalize:
- MOVQ AX, R12
- SHRQ $33, R12
- XORQ R12, AX
- IMULQ R14, AX
- MOVQ AX, R12
- SHRQ $29, R12
- XORQ R12, AX
- IMULQ ·prime3v(SB), AX
- MOVQ AX, R12
- SHRQ $32, R12
- XORQ R12, AX
-
- MOVQ AX, ret+24(FP)
+ MOVQ h, x
+ SHRQ $33, x
+ XORQ x, h
+ IMULQ prime2, h
+ MOVQ h, x
+ SHRQ $29, x
+ XORQ x, h
+ IMULQ ·primes+16(SB), h
+ MOVQ h, x
+ SHRQ $32, x
+ XORQ x, h
+
+ MOVQ h, ret+24(FP)
RET
-// writeBlocks uses the same registers as above except that it uses AX to store
-// the d pointer.
-
// func writeBlocks(d *Digest, b []byte) int
-TEXT ·writeBlocks(SB), NOSPLIT, $0-40
+TEXT ·writeBlocks(SB), NOSPLIT|NOFRAME, $0-40
// Load fixed primes needed for round.
- MOVQ ·prime1v(SB), R13
- MOVQ ·prime2v(SB), R14
+ MOVQ ·primes+0(SB), prime1
+ MOVQ ·primes+8(SB), prime2
// Load slice.
- MOVQ b_base+8(FP), SI
- MOVQ b_len+16(FP), DX
- LEAQ (SI)(DX*1), BX
- SUBQ $32, BX
+ MOVQ b_base+8(FP), p
+ MOVQ b_len+16(FP), n
+ LEAQ (p)(n*1), end
+ SUBQ $32, end
// Load vN from d.
- MOVQ d+0(FP), AX
- MOVQ 0(AX), R8 // v1
- MOVQ 8(AX), R9 // v2
- MOVQ 16(AX), R10 // v3
- MOVQ 24(AX), R11 // v4
+ MOVQ s+0(FP), d
+ MOVQ 0(d), v1
+ MOVQ 8(d), v2
+ MOVQ 16(d), v3
+ MOVQ 24(d), v4
// We don't need to check the loop condition here; this function is
// always called with at least one block of data to process.
-blockLoop:
- round(R8)
- round(R9)
- round(R10)
- round(R11)
-
- CMPQ SI, BX
- JLE blockLoop
+ blockLoop()
// Copy vN back to d.
- MOVQ R8, 0(AX)
- MOVQ R9, 8(AX)
- MOVQ R10, 16(AX)
- MOVQ R11, 24(AX)
-
- // The number of bytes written is SI minus the old base pointer.
- SUBQ b_base+8(FP), SI
- MOVQ SI, ret+32(FP)
+ MOVQ v1, 0(d)
+ MOVQ v2, 8(d)
+ MOVQ v3, 16(d)
+ MOVQ v4, 24(d)
+
+ // The number of bytes written is p minus the old base pointer.
+ SUBQ b_base+8(FP), p
+ MOVQ p, ret+32(FP)
RET
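
The rewritten amd64 assembly names registers after the algorithm's variables (`v1`–`v4`, `prime1`, `prime2`, `prime4`, ...) and factors the hot path into `round`, `round0`, and `mergeRound` macros. As a rough Go-level restatement of what those macros compute (a readability sketch of the XXH64 mixing step, not the vendored code itself):

```go
package main

import (
	"fmt"
	"math/bits"
)

// Standard XXH64 primes; prime5 appears earlier in xxhash.go as 2870177450012600261.
const (
	prime1 uint64 = 11400714785074694791
	prime2 uint64 = 14029467366897019727
	prime4 uint64 = 9650029242287828579
)

// round corresponds to the round(acc, x) macro: fold one 8-byte lane into acc.
func round(acc, x uint64) uint64 {
	acc += x * prime2
	acc = bits.RotateLeft64(acc, 31)
	return acc * prime1
}

// mergeRound corresponds to the mergeRound(acc, x) macro run after the block loop.
func mergeRound(acc, x uint64) uint64 {
	x = round(0, x) // round0(x)
	acc ^= x
	return acc*prime1 + prime4
}

func main() {
	v1 := prime1 + prime2 // the initial v1 state, as set up before blockLoop
	v1 = round(v1, 0x0123456789abcdef)
	fmt.Printf("v1 after one round: %016x\n", v1)
	fmt.Printf("after mergeRound:   %016x\n", mergeRound(42, v1))
}
```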
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s
new file mode 100644
index 000000000..7e3145a22
--- /dev/null
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s
@@ -0,0 +1,183 @@
+//go:build !appengine && gc && !purego
+// +build !appengine
+// +build gc
+// +build !purego
+
+#include "textflag.h"
+
+// Registers:
+#define digest R1
+#define h R2 // return value
+#define p R3 // input pointer
+#define n R4 // input length
+#define nblocks R5 // n / 32
+#define prime1 R7
+#define prime2 R8
+#define prime3 R9
+#define prime4 R10
+#define prime5 R11
+#define v1 R12
+#define v2 R13
+#define v3 R14
+#define v4 R15
+#define x1 R20
+#define x2 R21
+#define x3 R22
+#define x4 R23
+
+#define round(acc, x) \
+ MADD prime2, acc, x, acc \
+ ROR $64-31, acc \
+ MUL prime1, acc
+
+// round0 performs the operation x = round(0, x).
+#define round0(x) \
+ MUL prime2, x \
+ ROR $64-31, x \
+ MUL prime1, x
+
+#define mergeRound(acc, x) \
+ round0(x) \
+ EOR x, acc \
+ MADD acc, prime4, prime1, acc
+
+// blockLoop processes as many 32-byte blocks as possible,
+// updating v1, v2, v3, and v4. It assumes that n >= 32.
+#define blockLoop() \
+ LSR $5, n, nblocks \
+ PCALIGN $16 \
+ loop: \
+ LDP.P 16(p), (x1, x2) \
+ LDP.P 16(p), (x3, x4) \
+ round(v1, x1) \
+ round(v2, x2) \
+ round(v3, x3) \
+ round(v4, x4) \
+ SUB $1, nblocks \
+ CBNZ nblocks, loop
+
+// func Sum64(b []byte) uint64
+TEXT ·Sum64(SB), NOSPLIT|NOFRAME, $0-32
+ LDP b_base+0(FP), (p, n)
+
+ LDP ·primes+0(SB), (prime1, prime2)
+ LDP ·primes+16(SB), (prime3, prime4)
+ MOVD ·primes+32(SB), prime5
+
+ CMP $32, n
+ CSEL LT, prime5, ZR, h // if n < 32 { h = prime5 } else { h = 0 }
+ BLT afterLoop
+
+ ADD prime1, prime2, v1
+ MOVD prime2, v2
+ MOVD $0, v3
+ NEG prime1, v4
+
+ blockLoop()
+
+ ROR $64-1, v1, x1
+ ROR $64-7, v2, x2
+ ADD x1, x2
+ ROR $64-12, v3, x3
+ ROR $64-18, v4, x4
+ ADD x3, x4
+ ADD x2, x4, h
+
+ mergeRound(h, v1)
+ mergeRound(h, v2)
+ mergeRound(h, v3)
+ mergeRound(h, v4)
+
+afterLoop:
+ ADD n, h
+
+ TBZ $4, n, try8
+ LDP.P 16(p), (x1, x2)
+
+ round0(x1)
+
+ // NOTE: here and below, sequencing the EOR after the ROR (using a
+ // rotated register) is worth a small but measurable speedup for small
+ // inputs.
+ ROR $64-27, h
+ EOR x1 @> 64-27, h, h
+ MADD h, prime4, prime1, h
+
+ round0(x2)
+ ROR $64-27, h
+ EOR x2 @> 64-27, h, h
+ MADD h, prime4, prime1, h
+
+try8:
+ TBZ $3, n, try4
+ MOVD.P 8(p), x1
+
+ round0(x1)
+ ROR $64-27, h
+ EOR x1 @> 64-27, h, h
+ MADD h, prime4, prime1, h
+
+try4:
+ TBZ $2, n, try2
+ MOVWU.P 4(p), x2
+
+ MUL prime1, x2
+ ROR $64-23, h
+ EOR x2 @> 64-23, h, h
+ MADD h, prime3, prime2, h
+
+try2:
+ TBZ $1, n, try1
+ MOVHU.P 2(p), x3
+ AND $255, x3, x1
+ LSR $8, x3, x2
+
+ MUL prime5, x1
+ ROR $64-11, h
+ EOR x1 @> 64-11, h, h
+ MUL prime1, h
+
+ MUL prime5, x2
+ ROR $64-11, h
+ EOR x2 @> 64-11, h, h
+ MUL prime1, h
+
+try1:
+ TBZ $0, n, finalize
+ MOVBU (p), x4
+
+ MUL prime5, x4
+ ROR $64-11, h
+ EOR x4 @> 64-11, h, h
+ MUL prime1, h
+
+finalize:
+ EOR h >> 33, h
+ MUL prime2, h
+ EOR h >> 29, h
+ MUL prime3, h
+ EOR h >> 32, h
+
+ MOVD h, ret+24(FP)
+ RET
+
+// func writeBlocks(d *Digest, b []byte) int
+TEXT ·writeBlocks(SB), NOSPLIT|NOFRAME, $0-40
+ LDP ·primes+0(SB), (prime1, prime2)
+
+ // Load state. Assume v[1-4] are stored contiguously.
+ MOVD d+0(FP), digest
+ LDP 0(digest), (v1, v2)
+ LDP 16(digest), (v3, v4)
+
+ LDP b_base+8(FP), (p, n)
+
+ blockLoop()
+
+ // Store updated state.
+ STP (v1, v2), 0(digest)
+ STP (v3, v4), 16(digest)
+
+ BIC $31, n
+ MOVD n, ret+32(FP)
+ RET
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_asm.go b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_asm.go
new file mode 100644
index 000000000..9216e0a40
--- /dev/null
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_asm.go
@@ -0,0 +1,15 @@
+//go:build (amd64 || arm64) && !appengine && gc && !purego
+// +build amd64 arm64
+// +build !appengine
+// +build gc
+// +build !purego
+
+package xxhash
+
+// Sum64 computes the 64-bit xxHash digest of b.
+//
+//go:noescape
+func Sum64(b []byte) uint64
+
+//go:noescape
+func writeBlocks(d *Digest, b []byte) int
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_other.go b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
index 4a5a82160..26df13bba 100644
--- a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
@@ -1,4 +1,5 @@
-// +build !amd64 appengine !gc purego
+//go:build (!amd64 && !arm64) || appengine || !gc || purego
+// +build !amd64,!arm64 appengine !gc purego
package xxhash
@@ -14,10 +15,10 @@ func Sum64(b []byte) uint64 {
var h uint64
if n >= 32 {
- v1 := prime1v + prime2
+ v1 := primes[0] + prime2
v2 := prime2
v3 := uint64(0)
- v4 := -prime1v
+ v4 := -primes[0]
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
@@ -36,19 +37,18 @@ func Sum64(b []byte) uint64 {
h += uint64(n)
- i, end := 0, len(b)
- for ; i+8 <= end; i += 8 {
- k1 := round(0, u64(b[i:i+8:len(b)]))
+ for ; len(b) >= 8; b = b[8:] {
+ k1 := round(0, u64(b[:8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
- if i+4 <= end {
- h ^= uint64(u32(b[i:i+4:len(b)])) * prime1
+ if len(b) >= 4 {
+ h ^= uint64(u32(b[:4])) * prime1
h = rol23(h)*prime2 + prime3
- i += 4
+ b = b[4:]
}
- for ; i < end; i++ {
- h ^= uint64(b[i]) * prime5
+ for ; len(b) > 0; b = b[1:] {
+ h ^= uint64(b[0]) * prime5
h = rol11(h) * prime1
}
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
index fc9bea7a3..e86f1b5fd 100644
--- a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
@@ -1,3 +1,4 @@
+//go:build appengine
// +build appengine
// This file contains the safe implementations of otherwise unsafe-using code.
diff --git a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
index 376e0ca2e..1c1638fd8 100644
--- a/tools/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
+++ b/tools/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
@@ -1,3 +1,4 @@
+//go:build !appengine
// +build !appengine
// This file encapsulates usage of unsafe.
@@ -11,7 +12,7 @@ import (
// In the future it's possible that compiler optimizations will make these
// XxxString functions unnecessary by realizing that calls such as
-// Sum64([]byte(s)) don't need to copy s. See https://golang.org/issue/2205.
+// Sum64([]byte(s)) don't need to copy s. See https://go.dev/issue/2205.
// If that happens, even if we keep these functions they can be replaced with
// the trivial safe code.
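
`xxhash_safe.go` and `xxhash_unsafe.go` above are the two build-tag-selected implementations of the string helpers: on App Engine the safe `[]byte(s)` copy is used, everywhere else the zero-copy `unsafe` path. A brief usage sketch of those helpers:

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	s := "hashed without copying on non-appengine builds"

	// Sum64String and Digest.WriteString come in safe and unsafe variants,
	// chosen at build time by the appengine tag; the results are identical.
	fmt.Printf("%016x\n", xxhash.Sum64String(s))

	d := xxhash.New()
	_, _ = d.WriteString(s)
	fmt.Println(d.Sum64() == xxhash.Sum64String(s)) // true
}
```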
diff --git a/tools/vendor/github.com/containerd/cgroups/stats/v1/doc.go b/tools/vendor/github.com/containerd/cgroups/stats/v1/doc.go
deleted file mode 100644
index 23f3cdd4b..000000000
--- a/tools/vendor/github.com/containerd/cgroups/stats/v1/doc.go
+++ /dev/null
@@ -1,17 +0,0 @@
-/*
- Copyright The containerd Authors.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/
-
-package v1
diff --git a/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.go b/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.go
deleted file mode 100644
index 6d2d41770..000000000
--- a/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.go
+++ /dev/null
@@ -1,6125 +0,0 @@
-// Code generated by protoc-gen-gogo. DO NOT EDIT.
-// source: github.com/containerd/cgroups/stats/v1/metrics.proto
-
-package v1
-
-import (
- fmt "fmt"
- _ "github.com/gogo/protobuf/gogoproto"
- proto "github.com/gogo/protobuf/proto"
- io "io"
- math "math"
- math_bits "math/bits"
- reflect "reflect"
- strings "strings"
-)
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ = proto.Marshal
-var _ = fmt.Errorf
-var _ = math.Inf
-
-// This is a compile-time assertion to ensure that this generated file
-// is compatible with the proto package it is being compiled against.
-// A compilation error at this line likely means your copy of the
-// proto package needs to be updated.
-const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
-
-type Metrics struct {
- Hugetlb []*HugetlbStat `protobuf:"bytes,1,rep,name=hugetlb,proto3" json:"hugetlb,omitempty"`
- Pids *PidsStat `protobuf:"bytes,2,opt,name=pids,proto3" json:"pids,omitempty"`
- CPU *CPUStat `protobuf:"bytes,3,opt,name=cpu,proto3" json:"cpu,omitempty"`
- Memory *MemoryStat `protobuf:"bytes,4,opt,name=memory,proto3" json:"memory,omitempty"`
- Blkio *BlkIOStat `protobuf:"bytes,5,opt,name=blkio,proto3" json:"blkio,omitempty"`
- Rdma *RdmaStat `protobuf:"bytes,6,opt,name=rdma,proto3" json:"rdma,omitempty"`
- Network []*NetworkStat `protobuf:"bytes,7,rep,name=network,proto3" json:"network,omitempty"`
- CgroupStats *CgroupStats `protobuf:"bytes,8,opt,name=cgroup_stats,json=cgroupStats,proto3" json:"cgroup_stats,omitempty"`
- MemoryOomControl *MemoryOomControl `protobuf:"bytes,9,opt,name=memory_oom_control,json=memoryOomControl,proto3" json:"memory_oom_control,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *Metrics) Reset() { *m = Metrics{} }
-func (*Metrics) ProtoMessage() {}
-func (*Metrics) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{0}
-}
-func (m *Metrics) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *Metrics) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_Metrics.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *Metrics) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Metrics.Merge(m, src)
-}
-func (m *Metrics) XXX_Size() int {
- return m.Size()
-}
-func (m *Metrics) XXX_DiscardUnknown() {
- xxx_messageInfo_Metrics.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_Metrics proto.InternalMessageInfo
-
-type HugetlbStat struct {
- Usage uint64 `protobuf:"varint,1,opt,name=usage,proto3" json:"usage,omitempty"`
- Max uint64 `protobuf:"varint,2,opt,name=max,proto3" json:"max,omitempty"`
- Failcnt uint64 `protobuf:"varint,3,opt,name=failcnt,proto3" json:"failcnt,omitempty"`
- Pagesize string `protobuf:"bytes,4,opt,name=pagesize,proto3" json:"pagesize,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *HugetlbStat) Reset() { *m = HugetlbStat{} }
-func (*HugetlbStat) ProtoMessage() {}
-func (*HugetlbStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{1}
-}
-func (m *HugetlbStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *HugetlbStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_HugetlbStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *HugetlbStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_HugetlbStat.Merge(m, src)
-}
-func (m *HugetlbStat) XXX_Size() int {
- return m.Size()
-}
-func (m *HugetlbStat) XXX_DiscardUnknown() {
- xxx_messageInfo_HugetlbStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_HugetlbStat proto.InternalMessageInfo
-
-type PidsStat struct {
- Current uint64 `protobuf:"varint,1,opt,name=current,proto3" json:"current,omitempty"`
- Limit uint64 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *PidsStat) Reset() { *m = PidsStat{} }
-func (*PidsStat) ProtoMessage() {}
-func (*PidsStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{2}
-}
-func (m *PidsStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *PidsStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_PidsStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *PidsStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_PidsStat.Merge(m, src)
-}
-func (m *PidsStat) XXX_Size() int {
- return m.Size()
-}
-func (m *PidsStat) XXX_DiscardUnknown() {
- xxx_messageInfo_PidsStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_PidsStat proto.InternalMessageInfo
-
-type CPUStat struct {
- Usage *CPUUsage `protobuf:"bytes,1,opt,name=usage,proto3" json:"usage,omitempty"`
- Throttling *Throttle `protobuf:"bytes,2,opt,name=throttling,proto3" json:"throttling,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *CPUStat) Reset() { *m = CPUStat{} }
-func (*CPUStat) ProtoMessage() {}
-func (*CPUStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{3}
-}
-func (m *CPUStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *CPUStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_CPUStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *CPUStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_CPUStat.Merge(m, src)
-}
-func (m *CPUStat) XXX_Size() int {
- return m.Size()
-}
-func (m *CPUStat) XXX_DiscardUnknown() {
- xxx_messageInfo_CPUStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_CPUStat proto.InternalMessageInfo
-
-type CPUUsage struct {
- // values in nanoseconds
- Total uint64 `protobuf:"varint,1,opt,name=total,proto3" json:"total,omitempty"`
- Kernel uint64 `protobuf:"varint,2,opt,name=kernel,proto3" json:"kernel,omitempty"`
- User uint64 `protobuf:"varint,3,opt,name=user,proto3" json:"user,omitempty"`
- PerCPU []uint64 `protobuf:"varint,4,rep,packed,name=per_cpu,json=perCpu,proto3" json:"per_cpu,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *CPUUsage) Reset() { *m = CPUUsage{} }
-func (*CPUUsage) ProtoMessage() {}
-func (*CPUUsage) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{4}
-}
-func (m *CPUUsage) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *CPUUsage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_CPUUsage.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *CPUUsage) XXX_Merge(src proto.Message) {
- xxx_messageInfo_CPUUsage.Merge(m, src)
-}
-func (m *CPUUsage) XXX_Size() int {
- return m.Size()
-}
-func (m *CPUUsage) XXX_DiscardUnknown() {
- xxx_messageInfo_CPUUsage.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_CPUUsage proto.InternalMessageInfo
-
-type Throttle struct {
- Periods uint64 `protobuf:"varint,1,opt,name=periods,proto3" json:"periods,omitempty"`
- ThrottledPeriods uint64 `protobuf:"varint,2,opt,name=throttled_periods,json=throttledPeriods,proto3" json:"throttled_periods,omitempty"`
- ThrottledTime uint64 `protobuf:"varint,3,opt,name=throttled_time,json=throttledTime,proto3" json:"throttled_time,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *Throttle) Reset() { *m = Throttle{} }
-func (*Throttle) ProtoMessage() {}
-func (*Throttle) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{5}
-}
-func (m *Throttle) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *Throttle) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_Throttle.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *Throttle) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Throttle.Merge(m, src)
-}
-func (m *Throttle) XXX_Size() int {
- return m.Size()
-}
-func (m *Throttle) XXX_DiscardUnknown() {
- xxx_messageInfo_Throttle.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_Throttle proto.InternalMessageInfo
-
-type MemoryStat struct {
- Cache uint64 `protobuf:"varint,1,opt,name=cache,proto3" json:"cache,omitempty"`
- RSS uint64 `protobuf:"varint,2,opt,name=rss,proto3" json:"rss,omitempty"`
- RSSHuge uint64 `protobuf:"varint,3,opt,name=rss_huge,json=rssHuge,proto3" json:"rss_huge,omitempty"`
- MappedFile uint64 `protobuf:"varint,4,opt,name=mapped_file,json=mappedFile,proto3" json:"mapped_file,omitempty"`
- Dirty uint64 `protobuf:"varint,5,opt,name=dirty,proto3" json:"dirty,omitempty"`
- Writeback uint64 `protobuf:"varint,6,opt,name=writeback,proto3" json:"writeback,omitempty"`
- PgPgIn uint64 `protobuf:"varint,7,opt,name=pg_pg_in,json=pgPgIn,proto3" json:"pg_pg_in,omitempty"`
- PgPgOut uint64 `protobuf:"varint,8,opt,name=pg_pg_out,json=pgPgOut,proto3" json:"pg_pg_out,omitempty"`
- PgFault uint64 `protobuf:"varint,9,opt,name=pg_fault,json=pgFault,proto3" json:"pg_fault,omitempty"`
- PgMajFault uint64 `protobuf:"varint,10,opt,name=pg_maj_fault,json=pgMajFault,proto3" json:"pg_maj_fault,omitempty"`
- InactiveAnon uint64 `protobuf:"varint,11,opt,name=inactive_anon,json=inactiveAnon,proto3" json:"inactive_anon,omitempty"`
- ActiveAnon uint64 `protobuf:"varint,12,opt,name=active_anon,json=activeAnon,proto3" json:"active_anon,omitempty"`
- InactiveFile uint64 `protobuf:"varint,13,opt,name=inactive_file,json=inactiveFile,proto3" json:"inactive_file,omitempty"`
- ActiveFile uint64 `protobuf:"varint,14,opt,name=active_file,json=activeFile,proto3" json:"active_file,omitempty"`
- Unevictable uint64 `protobuf:"varint,15,opt,name=unevictable,proto3" json:"unevictable,omitempty"`
- HierarchicalMemoryLimit uint64 `protobuf:"varint,16,opt,name=hierarchical_memory_limit,json=hierarchicalMemoryLimit,proto3" json:"hierarchical_memory_limit,omitempty"`
- HierarchicalSwapLimit uint64 `protobuf:"varint,17,opt,name=hierarchical_swap_limit,json=hierarchicalSwapLimit,proto3" json:"hierarchical_swap_limit,omitempty"`
- TotalCache uint64 `protobuf:"varint,18,opt,name=total_cache,json=totalCache,proto3" json:"total_cache,omitempty"`
- TotalRSS uint64 `protobuf:"varint,19,opt,name=total_rss,json=totalRss,proto3" json:"total_rss,omitempty"`
- TotalRSSHuge uint64 `protobuf:"varint,20,opt,name=total_rss_huge,json=totalRssHuge,proto3" json:"total_rss_huge,omitempty"`
- TotalMappedFile uint64 `protobuf:"varint,21,opt,name=total_mapped_file,json=totalMappedFile,proto3" json:"total_mapped_file,omitempty"`
- TotalDirty uint64 `protobuf:"varint,22,opt,name=total_dirty,json=totalDirty,proto3" json:"total_dirty,omitempty"`
- TotalWriteback uint64 `protobuf:"varint,23,opt,name=total_writeback,json=totalWriteback,proto3" json:"total_writeback,omitempty"`
- TotalPgPgIn uint64 `protobuf:"varint,24,opt,name=total_pg_pg_in,json=totalPgPgIn,proto3" json:"total_pg_pg_in,omitempty"`
- TotalPgPgOut uint64 `protobuf:"varint,25,opt,name=total_pg_pg_out,json=totalPgPgOut,proto3" json:"total_pg_pg_out,omitempty"`
- TotalPgFault uint64 `protobuf:"varint,26,opt,name=total_pg_fault,json=totalPgFault,proto3" json:"total_pg_fault,omitempty"`
- TotalPgMajFault uint64 `protobuf:"varint,27,opt,name=total_pg_maj_fault,json=totalPgMajFault,proto3" json:"total_pg_maj_fault,omitempty"`
- TotalInactiveAnon uint64 `protobuf:"varint,28,opt,name=total_inactive_anon,json=totalInactiveAnon,proto3" json:"total_inactive_anon,omitempty"`
- TotalActiveAnon uint64 `protobuf:"varint,29,opt,name=total_active_anon,json=totalActiveAnon,proto3" json:"total_active_anon,omitempty"`
- TotalInactiveFile uint64 `protobuf:"varint,30,opt,name=total_inactive_file,json=totalInactiveFile,proto3" json:"total_inactive_file,omitempty"`
- TotalActiveFile uint64 `protobuf:"varint,31,opt,name=total_active_file,json=totalActiveFile,proto3" json:"total_active_file,omitempty"`
- TotalUnevictable uint64 `protobuf:"varint,32,opt,name=total_unevictable,json=totalUnevictable,proto3" json:"total_unevictable,omitempty"`
- Usage *MemoryEntry `protobuf:"bytes,33,opt,name=usage,proto3" json:"usage,omitempty"`
- Swap *MemoryEntry `protobuf:"bytes,34,opt,name=swap,proto3" json:"swap,omitempty"`
- Kernel *MemoryEntry `protobuf:"bytes,35,opt,name=kernel,proto3" json:"kernel,omitempty"`
- KernelTCP *MemoryEntry `protobuf:"bytes,36,opt,name=kernel_tcp,json=kernelTcp,proto3" json:"kernel_tcp,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *MemoryStat) Reset() { *m = MemoryStat{} }
-func (*MemoryStat) ProtoMessage() {}
-func (*MemoryStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{6}
-}
-func (m *MemoryStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *MemoryStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_MemoryStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *MemoryStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_MemoryStat.Merge(m, src)
-}
-func (m *MemoryStat) XXX_Size() int {
- return m.Size()
-}
-func (m *MemoryStat) XXX_DiscardUnknown() {
- xxx_messageInfo_MemoryStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_MemoryStat proto.InternalMessageInfo
-
-type MemoryEntry struct {
- Limit uint64 `protobuf:"varint,1,opt,name=limit,proto3" json:"limit,omitempty"`
- Usage uint64 `protobuf:"varint,2,opt,name=usage,proto3" json:"usage,omitempty"`
- Max uint64 `protobuf:"varint,3,opt,name=max,proto3" json:"max,omitempty"`
- Failcnt uint64 `protobuf:"varint,4,opt,name=failcnt,proto3" json:"failcnt,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *MemoryEntry) Reset() { *m = MemoryEntry{} }
-func (*MemoryEntry) ProtoMessage() {}
-func (*MemoryEntry) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{7}
-}
-func (m *MemoryEntry) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *MemoryEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_MemoryEntry.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *MemoryEntry) XXX_Merge(src proto.Message) {
- xxx_messageInfo_MemoryEntry.Merge(m, src)
-}
-func (m *MemoryEntry) XXX_Size() int {
- return m.Size()
-}
-func (m *MemoryEntry) XXX_DiscardUnknown() {
- xxx_messageInfo_MemoryEntry.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_MemoryEntry proto.InternalMessageInfo
-
-type MemoryOomControl struct {
- OomKillDisable uint64 `protobuf:"varint,1,opt,name=oom_kill_disable,json=oomKillDisable,proto3" json:"oom_kill_disable,omitempty"`
- UnderOom uint64 `protobuf:"varint,2,opt,name=under_oom,json=underOom,proto3" json:"under_oom,omitempty"`
- OomKill uint64 `protobuf:"varint,3,opt,name=oom_kill,json=oomKill,proto3" json:"oom_kill,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *MemoryOomControl) Reset() { *m = MemoryOomControl{} }
-func (*MemoryOomControl) ProtoMessage() {}
-func (*MemoryOomControl) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{8}
-}
-func (m *MemoryOomControl) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *MemoryOomControl) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_MemoryOomControl.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *MemoryOomControl) XXX_Merge(src proto.Message) {
- xxx_messageInfo_MemoryOomControl.Merge(m, src)
-}
-func (m *MemoryOomControl) XXX_Size() int {
- return m.Size()
-}
-func (m *MemoryOomControl) XXX_DiscardUnknown() {
- xxx_messageInfo_MemoryOomControl.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_MemoryOomControl proto.InternalMessageInfo
-
-type BlkIOStat struct {
- IoServiceBytesRecursive []*BlkIOEntry `protobuf:"bytes,1,rep,name=io_service_bytes_recursive,json=ioServiceBytesRecursive,proto3" json:"io_service_bytes_recursive,omitempty"`
- IoServicedRecursive []*BlkIOEntry `protobuf:"bytes,2,rep,name=io_serviced_recursive,json=ioServicedRecursive,proto3" json:"io_serviced_recursive,omitempty"`
- IoQueuedRecursive []*BlkIOEntry `protobuf:"bytes,3,rep,name=io_queued_recursive,json=ioQueuedRecursive,proto3" json:"io_queued_recursive,omitempty"`
- IoServiceTimeRecursive []*BlkIOEntry `protobuf:"bytes,4,rep,name=io_service_time_recursive,json=ioServiceTimeRecursive,proto3" json:"io_service_time_recursive,omitempty"`
- IoWaitTimeRecursive []*BlkIOEntry `protobuf:"bytes,5,rep,name=io_wait_time_recursive,json=ioWaitTimeRecursive,proto3" json:"io_wait_time_recursive,omitempty"`
- IoMergedRecursive []*BlkIOEntry `protobuf:"bytes,6,rep,name=io_merged_recursive,json=ioMergedRecursive,proto3" json:"io_merged_recursive,omitempty"`
- IoTimeRecursive []*BlkIOEntry `protobuf:"bytes,7,rep,name=io_time_recursive,json=ioTimeRecursive,proto3" json:"io_time_recursive,omitempty"`
- SectorsRecursive []*BlkIOEntry `protobuf:"bytes,8,rep,name=sectors_recursive,json=sectorsRecursive,proto3" json:"sectors_recursive,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *BlkIOStat) Reset() { *m = BlkIOStat{} }
-func (*BlkIOStat) ProtoMessage() {}
-func (*BlkIOStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{9}
-}
-func (m *BlkIOStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *BlkIOStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_BlkIOStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *BlkIOStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_BlkIOStat.Merge(m, src)
-}
-func (m *BlkIOStat) XXX_Size() int {
- return m.Size()
-}
-func (m *BlkIOStat) XXX_DiscardUnknown() {
- xxx_messageInfo_BlkIOStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_BlkIOStat proto.InternalMessageInfo
-
-type BlkIOEntry struct {
- Op string `protobuf:"bytes,1,opt,name=op,proto3" json:"op,omitempty"`
- Device string `protobuf:"bytes,2,opt,name=device,proto3" json:"device,omitempty"`
- Major uint64 `protobuf:"varint,3,opt,name=major,proto3" json:"major,omitempty"`
- Minor uint64 `protobuf:"varint,4,opt,name=minor,proto3" json:"minor,omitempty"`
- Value uint64 `protobuf:"varint,5,opt,name=value,proto3" json:"value,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *BlkIOEntry) Reset() { *m = BlkIOEntry{} }
-func (*BlkIOEntry) ProtoMessage() {}
-func (*BlkIOEntry) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{10}
-}
-func (m *BlkIOEntry) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *BlkIOEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_BlkIOEntry.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *BlkIOEntry) XXX_Merge(src proto.Message) {
- xxx_messageInfo_BlkIOEntry.Merge(m, src)
-}
-func (m *BlkIOEntry) XXX_Size() int {
- return m.Size()
-}
-func (m *BlkIOEntry) XXX_DiscardUnknown() {
- xxx_messageInfo_BlkIOEntry.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_BlkIOEntry proto.InternalMessageInfo
-
-type RdmaStat struct {
- Current []*RdmaEntry `protobuf:"bytes,1,rep,name=current,proto3" json:"current,omitempty"`
- Limit []*RdmaEntry `protobuf:"bytes,2,rep,name=limit,proto3" json:"limit,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *RdmaStat) Reset() { *m = RdmaStat{} }
-func (*RdmaStat) ProtoMessage() {}
-func (*RdmaStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{11}
-}
-func (m *RdmaStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *RdmaStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_RdmaStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *RdmaStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_RdmaStat.Merge(m, src)
-}
-func (m *RdmaStat) XXX_Size() int {
- return m.Size()
-}
-func (m *RdmaStat) XXX_DiscardUnknown() {
- xxx_messageInfo_RdmaStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_RdmaStat proto.InternalMessageInfo
-
-type RdmaEntry struct {
- Device string `protobuf:"bytes,1,opt,name=device,proto3" json:"device,omitempty"`
- HcaHandles uint32 `protobuf:"varint,2,opt,name=hca_handles,json=hcaHandles,proto3" json:"hca_handles,omitempty"`
- HcaObjects uint32 `protobuf:"varint,3,opt,name=hca_objects,json=hcaObjects,proto3" json:"hca_objects,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *RdmaEntry) Reset() { *m = RdmaEntry{} }
-func (*RdmaEntry) ProtoMessage() {}
-func (*RdmaEntry) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{12}
-}
-func (m *RdmaEntry) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *RdmaEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_RdmaEntry.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *RdmaEntry) XXX_Merge(src proto.Message) {
- xxx_messageInfo_RdmaEntry.Merge(m, src)
-}
-func (m *RdmaEntry) XXX_Size() int {
- return m.Size()
-}
-func (m *RdmaEntry) XXX_DiscardUnknown() {
- xxx_messageInfo_RdmaEntry.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_RdmaEntry proto.InternalMessageInfo
-
-type NetworkStat struct {
- Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
- RxBytes uint64 `protobuf:"varint,2,opt,name=rx_bytes,json=rxBytes,proto3" json:"rx_bytes,omitempty"`
- RxPackets uint64 `protobuf:"varint,3,opt,name=rx_packets,json=rxPackets,proto3" json:"rx_packets,omitempty"`
- RxErrors uint64 `protobuf:"varint,4,opt,name=rx_errors,json=rxErrors,proto3" json:"rx_errors,omitempty"`
- RxDropped uint64 `protobuf:"varint,5,opt,name=rx_dropped,json=rxDropped,proto3" json:"rx_dropped,omitempty"`
- TxBytes uint64 `protobuf:"varint,6,opt,name=tx_bytes,json=txBytes,proto3" json:"tx_bytes,omitempty"`
- TxPackets uint64 `protobuf:"varint,7,opt,name=tx_packets,json=txPackets,proto3" json:"tx_packets,omitempty"`
- TxErrors uint64 `protobuf:"varint,8,opt,name=tx_errors,json=txErrors,proto3" json:"tx_errors,omitempty"`
- TxDropped uint64 `protobuf:"varint,9,opt,name=tx_dropped,json=txDropped,proto3" json:"tx_dropped,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *NetworkStat) Reset() { *m = NetworkStat{} }
-func (*NetworkStat) ProtoMessage() {}
-func (*NetworkStat) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{13}
-}
-func (m *NetworkStat) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *NetworkStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_NetworkStat.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *NetworkStat) XXX_Merge(src proto.Message) {
- xxx_messageInfo_NetworkStat.Merge(m, src)
-}
-func (m *NetworkStat) XXX_Size() int {
- return m.Size()
-}
-func (m *NetworkStat) XXX_DiscardUnknown() {
- xxx_messageInfo_NetworkStat.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_NetworkStat proto.InternalMessageInfo
-
-// CgroupStats exports per-cgroup statistics.
-type CgroupStats struct {
- // number of tasks sleeping
- NrSleeping uint64 `protobuf:"varint,1,opt,name=nr_sleeping,json=nrSleeping,proto3" json:"nr_sleeping,omitempty"`
- // number of tasks running
- NrRunning uint64 `protobuf:"varint,2,opt,name=nr_running,json=nrRunning,proto3" json:"nr_running,omitempty"`
- // number of tasks in stopped state
- NrStopped uint64 `protobuf:"varint,3,opt,name=nr_stopped,json=nrStopped,proto3" json:"nr_stopped,omitempty"`
- // number of tasks in uninterruptible state
- NrUninterruptible uint64 `protobuf:"varint,4,opt,name=nr_uninterruptible,json=nrUninterruptible,proto3" json:"nr_uninterruptible,omitempty"`
- // number of tasks waiting on IO
- NrIoWait uint64 `protobuf:"varint,5,opt,name=nr_io_wait,json=nrIoWait,proto3" json:"nr_io_wait,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *CgroupStats) Reset() { *m = CgroupStats{} }
-func (*CgroupStats) ProtoMessage() {}
-func (*CgroupStats) Descriptor() ([]byte, []int) {
- return fileDescriptor_a17b2d87c332bfaa, []int{14}
-}
-func (m *CgroupStats) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *CgroupStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_CgroupStats.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *CgroupStats) XXX_Merge(src proto.Message) {
- xxx_messageInfo_CgroupStats.Merge(m, src)
-}
-func (m *CgroupStats) XXX_Size() int {
- return m.Size()
-}
-func (m *CgroupStats) XXX_DiscardUnknown() {
- xxx_messageInfo_CgroupStats.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_CgroupStats proto.InternalMessageInfo
-
-func init() {
- proto.RegisterType((*Metrics)(nil), "io.containerd.cgroups.v1.Metrics")
- proto.RegisterType((*HugetlbStat)(nil), "io.containerd.cgroups.v1.HugetlbStat")
- proto.RegisterType((*PidsStat)(nil), "io.containerd.cgroups.v1.PidsStat")
- proto.RegisterType((*CPUStat)(nil), "io.containerd.cgroups.v1.CPUStat")
- proto.RegisterType((*CPUUsage)(nil), "io.containerd.cgroups.v1.CPUUsage")
- proto.RegisterType((*Throttle)(nil), "io.containerd.cgroups.v1.Throttle")
- proto.RegisterType((*MemoryStat)(nil), "io.containerd.cgroups.v1.MemoryStat")
- proto.RegisterType((*MemoryEntry)(nil), "io.containerd.cgroups.v1.MemoryEntry")
- proto.RegisterType((*MemoryOomControl)(nil), "io.containerd.cgroups.v1.MemoryOomControl")
- proto.RegisterType((*BlkIOStat)(nil), "io.containerd.cgroups.v1.BlkIOStat")
- proto.RegisterType((*BlkIOEntry)(nil), "io.containerd.cgroups.v1.BlkIOEntry")
- proto.RegisterType((*RdmaStat)(nil), "io.containerd.cgroups.v1.RdmaStat")
- proto.RegisterType((*RdmaEntry)(nil), "io.containerd.cgroups.v1.RdmaEntry")
- proto.RegisterType((*NetworkStat)(nil), "io.containerd.cgroups.v1.NetworkStat")
- proto.RegisterType((*CgroupStats)(nil), "io.containerd.cgroups.v1.CgroupStats")
-}
-
-func init() {
- proto.RegisterFile("github.com/containerd/cgroups/stats/v1/metrics.proto", fileDescriptor_a17b2d87c332bfaa)
-}
-
-var fileDescriptor_a17b2d87c332bfaa = []byte{
- // 1749 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x58, 0xcd, 0x72, 0xe3, 0xc6,
- 0x11, 0x36, 0x45, 0x48, 0x24, 0x9a, 0x92, 0x56, 0x9a, 0xfd, 0x83, 0xe4, 0xb5, 0x28, 0x53, 0xbb,
- 0x89, 0xe2, 0xad, 0x48, 0x65, 0x27, 0xb5, 0x95, 0x75, 0xec, 0x4a, 0x59, 0x5a, 0xbb, 0x76, 0xcb,
- 0x51, 0x44, 0x83, 0x52, 0xd9, 0x39, 0xa1, 0x40, 0x70, 0x16, 0x9c, 0x15, 0x80, 0x81, 0x07, 0x03,
- 0x89, 0xca, 0x29, 0x87, 0x54, 0xe5, 0x94, 0x07, 0xca, 0x1b, 0xf8, 0x98, 0x4b, 0x52, 0xc9, 0x45,
- 0x15, 0xf3, 0x49, 0x52, 0x33, 0x3d, 0xf8, 0xa1, 0xbc, 0x5a, 0x85, 0x37, 0x76, 0xcf, 0xd7, 0x5f,
- 0xf7, 0x34, 0xbe, 0x19, 0x34, 0x08, 0xbf, 0x0e, 0x99, 0x1c, 0xe7, 0xc3, 0xbd, 0x80, 0xc7, 0xfb,
- 0x01, 0x4f, 0xa4, 0xcf, 0x12, 0x2a, 0x46, 0xfb, 0x41, 0x28, 0x78, 0x9e, 0x66, 0xfb, 0x99, 0xf4,
- 0x65, 0xb6, 0x7f, 0xfe, 0xf1, 0x7e, 0x4c, 0xa5, 0x60, 0x41, 0xb6, 0x97, 0x0a, 0x2e, 0x39, 0x71,
- 0x18, 0xdf, 0xab, 0xd0, 0x7b, 0x06, 0xbd, 0x77, 0xfe, 0xf1, 0xe6, 0xbd, 0x90, 0x87, 0x5c, 0x83,
- 0xf6, 0xd5, 0x2f, 0xc4, 0xf7, 0xfe, 0x65, 0x41, 0xeb, 0x08, 0x19, 0xc8, 0xef, 0xa0, 0x35, 0xce,
- 0x43, 0x2a, 0xa3, 0xa1, 0xd3, 0xd8, 0x6e, 0xee, 0x76, 0x3e, 0x79, 0xb2, 0x77, 0x13, 0xdb, 0xde,
- 0x4b, 0x04, 0x0e, 0xa4, 0x2f, 0xdd, 0x22, 0x8a, 0x3c, 0x03, 0x2b, 0x65, 0xa3, 0xcc, 0x59, 0xd8,
- 0x6e, 0xec, 0x76, 0x3e, 0xe9, 0xdd, 0x1c, 0xdd, 0x67, 0xa3, 0x4c, 0x87, 0x6a, 0x3c, 0xf9, 0x0c,
- 0x9a, 0x41, 0x9a, 0x3b, 0x4d, 0x1d, 0xf6, 0xe1, 0xcd, 0x61, 0x87, 0xfd, 0x53, 0x15, 0x75, 0xd0,
- 0x9a, 0x5e, 0x75, 0x9b, 0x87, 0xfd, 0x53, 0x57, 0x85, 0x91, 0xcf, 0x60, 0x29, 0xa6, 0x31, 0x17,
- 0x97, 0x8e, 0xa5, 0x09, 0x1e, 0xdf, 0x4c, 0x70, 0xa4, 0x71, 0x3a, 0xb3, 0x89, 0x21, 0xcf, 0x61,
- 0x71, 0x18, 0x9d, 0x31, 0xee, 0x2c, 0xea, 0xe0, 0x9d, 0x9b, 0x83, 0x0f, 0xa2, 0xb3, 0x57, 0xc7,
- 0x3a, 0x16, 0x23, 0xd4, 0x76, 0xc5, 0x28, 0xf6, 0x9d, 0xa5, 0xdb, 0xb6, 0xeb, 0x8e, 0x62, 0x1f,
- 0xb7, 0xab, 0xf0, 0xaa, 0xcf, 0x09, 0x95, 0x17, 0x5c, 0x9c, 0x39, 0xad, 0xdb, 0xfa, 0xfc, 0x07,
- 0x04, 0x62, 0x9f, 0x4d, 0x14, 0x79, 0x09, 0xcb, 0x08, 0xf1, 0xb4, 0x0a, 0x9c, 0xb6, 0x2e, 0xe0,
- 0x1d, 0x2c, 0x87, 0xfa, 0xa7, 0x22, 0xc9, 0xdc, 0x4e, 0x50, 0x19, 0xe4, 0x3b, 0x20, 0xd8, 0x07,
- 0x8f, 0xf3, 0xd8, 0x53, 0xc1, 0x82, 0x47, 0x8e, 0xad, 0xf9, 0x3e, 0xba, 0xad, 0x8f, 0xc7, 0x3c,
- 0x3e, 0xc4, 0x08, 0x77, 0x2d, 0xbe, 0xe6, 0xe9, 0x9d, 0x41, 0xa7, 0xa6, 0x11, 0x72, 0x0f, 0x16,
- 0xf3, 0xcc, 0x0f, 0xa9, 0xd3, 0xd8, 0x6e, 0xec, 0x5a, 0x2e, 0x1a, 0x64, 0x0d, 0x9a, 0xb1, 0x3f,
- 0xd1, 0x7a, 0xb1, 0x5c, 0xf5, 0x93, 0x38, 0xd0, 0x7a, 0xed, 0xb3, 0x28, 0x48, 0xa4, 0x96, 0x83,
- 0xe5, 0x16, 0x26, 0xd9, 0x84, 0x76, 0xea, 0x87, 0x34, 0x63, 0x7f, 0xa2, 0xfa, 0x41, 0xdb, 0x6e,
- 0x69, 0xf7, 0x3e, 0x85, 0x76, 0x21, 0x29, 0xc5, 0x10, 0xe4, 0x42, 0xd0, 0x44, 0x9a, 0x5c, 0x85,
- 0xa9, 0x6a, 0x88, 0x58, 0xcc, 0xa4, 0xc9, 0x87, 0x46, 0xef, 0xaf, 0x0d, 0x68, 0x19, 0x61, 0x91,
- 0xdf, 0xd4, 0xab, 0x7c, 0xe7, 0x23, 0x3d, 0xec, 0x9f, 0x9e, 0x2a, 0x64, 0xb1, 0x93, 0x03, 0x00,
- 0x39, 0x16, 0x5c, 0xca, 0x88, 0x25, 0xe1, 0xed, 0x07, 0xe0, 0x04, 0xb1, 0xd4, 0xad, 0x45, 0xf5,
- 0xbe, 0x87, 0x76, 0x41, 0xab, 0x6a, 0x95, 0x5c, 0xfa, 0x51, 0xd1, 0x2f, 0x6d, 0x90, 0x07, 0xb0,
- 0x74, 0x46, 0x45, 0x42, 0x23, 0xb3, 0x05, 0x63, 0x11, 0x02, 0x56, 0x9e, 0x51, 0x61, 0x5a, 0xa6,
- 0x7f, 0x93, 0x1d, 0x68, 0xa5, 0x54, 0x78, 0xea, 0x60, 0x59, 0xdb, 0xcd, 0x5d, 0xeb, 0x00, 0xa6,
- 0x57, 0xdd, 0xa5, 0x3e, 0x15, 0xea, 0xe0, 0x2c, 0xa5, 0x54, 0x1c, 0xa6, 0x79, 0x6f, 0x02, 0xed,
- 0xa2, 0x14, 0xd5, 0xb8, 0x94, 0x0a, 0xc6, 0x47, 0x59, 0xd1, 0x38, 0x63, 0x92, 0xa7, 0xb0, 0x6e,
- 0xca, 0xa4, 0x23, 0xaf, 0xc0, 0x60, 0x05, 0x6b, 0xe5, 0x42, 0xdf, 0x80, 0x9f, 0xc0, 0x6a, 0x05,
- 0x96, 0x2c, 0xa6, 0xa6, 0xaa, 0x95, 0xd2, 0x7b, 0xc2, 0x62, 0xda, 0xfb, 0x4f, 0x07, 0xa0, 0x3a,
- 0x8e, 0x6a, 0xbf, 0x81, 0x1f, 0x8c, 0x4b, 0x7d, 0x68, 0x83, 0x6c, 0x40, 0x53, 0x64, 0x26, 0x15,
- 0x9e, 0x7a, 0x77, 0x30, 0x70, 0x95, 0x8f, 0xfc, 0x0c, 0xda, 0x22, 0xcb, 0x3c, 0x75, 0xf5, 0x60,
- 0x82, 0x83, 0xce, 0xf4, 0xaa, 0xdb, 0x72, 0x07, 0x03, 0x25, 0x3b, 0xb7, 0x25, 0xb2, 0x4c, 0xfd,
- 0x20, 0x5d, 0xe8, 0xc4, 0x7e, 0x9a, 0xd2, 0x91, 0xf7, 0x9a, 0x45, 0xa8, 0x1c, 0xcb, 0x05, 0x74,
- 0x7d, 0xc5, 0x22, 0xdd, 0xe9, 0x11, 0x13, 0xf2, 0x52, 0x5f, 0x00, 0x96, 0x8b, 0x06, 0x79, 0x04,
- 0xf6, 0x85, 0x60, 0x92, 0x0e, 0xfd, 0xe0, 0x4c, 0x1f, 0x70, 0xcb, 0xad, 0x1c, 0xc4, 0x81, 0x76,
- 0x1a, 0x7a, 0x69, 0xe8, 0xb1, 0xc4, 0x69, 0xe1, 0x93, 0x48, 0xc3, 0x7e, 0xf8, 0x2a, 0x21, 0x9b,
- 0x60, 0xe3, 0x0a, 0xcf, 0xa5, 0x3e, 0x97, 0xaa, 0x8d, 0x61, 0x3f, 0x3c, 0xce, 0x25, 0xd9, 0xd0,
- 0x51, 0xaf, 0xfd, 0x3c, 0x92, 0xfa, 0x88, 0xe9, 0xa5, 0xaf, 0x94, 0x49, 0xb6, 0x61, 0x39, 0x0d,
- 0xbd, 0xd8, 0x7f, 0x63, 0x96, 0x01, 0xcb, 0x4c, 0xc3, 0x23, 0xff, 0x0d, 0x22, 0x76, 0x60, 0x85,
- 0x25, 0x7e, 0x20, 0xd9, 0x39, 0xf5, 0xfc, 0x84, 0x27, 0x4e, 0x47, 0x43, 0x96, 0x0b, 0xe7, 0x17,
- 0x09, 0x4f, 0xd4, 0x66, 0xeb, 0x90, 0x65, 0x64, 0xa9, 0x01, 0xea, 0x2c, 0xba, 0x1f, 0x2b, 0xb3,
- 0x2c, 0xba, 0x23, 0x15, 0x8b, 0x86, 0xac, 0xd6, 0x59, 0x34, 0x60, 0x1b, 0x3a, 0x79, 0x42, 0xcf,
- 0x59, 0x20, 0xfd, 0x61, 0x44, 0x9d, 0x3b, 0x1a, 0x50, 0x77, 0x91, 0x4f, 0x61, 0x63, 0xcc, 0xa8,
- 0xf0, 0x45, 0x30, 0x66, 0x81, 0x1f, 0x79, 0xe6, 0x92, 0xc1, 0xe3, 0xb7, 0xa6, 0xf1, 0x0f, 0xeb,
- 0x00, 0x54, 0xc2, 0xef, 0xd5, 0x32, 0x79, 0x06, 0x33, 0x4b, 0x5e, 0x76, 0xe1, 0xa7, 0x26, 0x72,
- 0x5d, 0x47, 0xde, 0xaf, 0x2f, 0x0f, 0x2e, 0xfc, 0x14, 0xe3, 0xba, 0xd0, 0xd1, 0xa7, 0xc4, 0x43,
- 0x21, 0x11, 0x2c, 0x5b, 0xbb, 0x0e, 0xb5, 0x9a, 0x7e, 0x01, 0x36, 0x02, 0x94, 0xa6, 0xee, 0x6a,
- 0xcd, 0x2c, 0x4f, 0xaf, 0xba, 0xed, 0x13, 0xe5, 0x54, 0xc2, 0x6a, 0xeb, 0x65, 0x37, 0xcb, 0xc8,
- 0x33, 0x58, 0x2d, 0xa1, 0xa8, 0xb1, 0x7b, 0x1a, 0xbf, 0x36, 0xbd, 0xea, 0x2e, 0x17, 0x78, 0x2d,
- 0xb4, 0xe5, 0x22, 0x46, 0xab, 0xed, 0x23, 0x58, 0xc7, 0xb8, 0xba, 0xe6, 0xee, 0xeb, 0x4a, 0xee,
- 0xe8, 0x85, 0xa3, 0x4a, 0x78, 0x65, 0xbd, 0x28, 0xbf, 0x07, 0xb5, 0x7a, 0x5f, 0x68, 0x0d, 0xfe,
- 0x1c, 0x30, 0xc6, 0xab, 0x94, 0xf8, 0x50, 0x83, 0xb0, 0xb6, 0x6f, 0x4b, 0x39, 0xee, 0x14, 0xd5,
- 0x96, 0xa2, 0x74, 0xf0, 0x91, 0x68, 0x6f, 0x1f, 0x95, 0xf9, 0xa4, 0x60, 0xab, 0xf4, 0xb9, 0x81,
- 0x0f, 0xbf, 0x44, 0x29, 0x91, 0x3e, 0xae, 0x71, 0xa1, 0x16, 0x37, 0x67, 0x50, 0xa8, 0xc6, 0xa7,
- 0x40, 0x4a, 0x54, 0xa5, 0xda, 0xf7, 0x6b, 0x1b, 0xed, 0x57, 0xd2, 0xdd, 0x83, 0xbb, 0x08, 0x9e,
- 0x15, 0xf0, 0x23, 0x8d, 0xc6, 0x7e, 0xbd, 0xaa, 0xab, 0xb8, 0x6c, 0x62, 0x1d, 0xfd, 0x41, 0x8d,
- 0xfb, 0x8b, 0x0a, 0xfb, 0x53, 0x6e, 0xdd, 0xf2, 0xad, 0xb7, 0x70, 0xeb, 0xa6, 0x5f, 0xe7, 0xd6,
- 0xe8, 0xee, 0x4f, 0xb8, 0x35, 0xf6, 0x69, 0x81, 0xad, 0x8b, 0x7d, 0xdb, 0x5c, 0x7b, 0x6a, 0xe1,
- 0xb4, 0xa6, 0xf8, 0xdf, 0x16, 0xaf, 0x8e, 0x0f, 0x6f, 0x7b, 0x19, 0xa3, 0xd6, 0xbf, 0x4c, 0xa4,
- 0xb8, 0x2c, 0xde, 0x1e, 0xcf, 0xc1, 0x52, 0x2a, 0x77, 0x7a, 0xf3, 0xc4, 0xea, 0x10, 0xf2, 0x79,
- 0xf9, 0x4a, 0xd8, 0x99, 0x27, 0xb8, 0x78, 0x73, 0x0c, 0x00, 0xf0, 0x97, 0x27, 0x83, 0xd4, 0x79,
- 0x3c, 0x07, 0xc5, 0xc1, 0xca, 0xf4, 0xaa, 0x6b, 0x7f, 0xad, 0x83, 0x4f, 0x0e, 0xfb, 0xae, 0x8d,
- 0x3c, 0x27, 0x41, 0xda, 0xa3, 0xd0, 0xa9, 0x01, 0xab, 0xf7, 0x6e, 0xa3, 0xf6, 0xde, 0xad, 0x26,
- 0x82, 0x85, 0xb7, 0x4c, 0x04, 0xcd, 0xb7, 0x4e, 0x04, 0xd6, 0xcc, 0x44, 0xd0, 0x93, 0xb0, 0x76,
- 0x7d, 0x10, 0x21, 0xbb, 0xb0, 0xa6, 0x26, 0x99, 0x33, 0x16, 0xa9, 0x73, 0x95, 0xe9, 0x47, 0x86,
- 0x69, 0x57, 0x39, 0x8f, 0xbf, 0x66, 0x51, 0xf4, 0x02, 0xbd, 0xe4, 0x7d, 0xb0, 0xf3, 0x64, 0x44,
- 0x85, 0x9a, 0x7c, 0x4c, 0x0d, 0x6d, 0xed, 0x38, 0xe6, 0xb1, 0xba, 0xaa, 0x0b, 0x9a, 0x62, 0x0e,
- 0x31, 0xe1, 0xbd, 0x7f, 0x2e, 0x82, 0x5d, 0x8e, 0x82, 0xc4, 0x87, 0x4d, 0xc6, 0xbd, 0x8c, 0x8a,
- 0x73, 0x16, 0x50, 0x6f, 0x78, 0x29, 0x69, 0xe6, 0x09, 0x1a, 0xe4, 0x22, 0x63, 0xe7, 0xd4, 0x8c,
- 0xd1, 0x8f, 0x6f, 0x99, 0x29, 0xf1, 0x89, 0x3c, 0x64, 0x7c, 0x80, 0x34, 0x07, 0x8a, 0xc5, 0x2d,
- 0x48, 0xc8, 0x77, 0x70, 0xbf, 0x4a, 0x31, 0xaa, 0xb1, 0x2f, 0xcc, 0xc1, 0x7e, 0xb7, 0x64, 0x1f,
- 0x55, 0xcc, 0x27, 0x70, 0x97, 0x71, 0xef, 0xfb, 0x9c, 0xe6, 0x33, 0xbc, 0xcd, 0x39, 0x78, 0xd7,
- 0x19, 0xff, 0x46, 0xc7, 0x57, 0xac, 0x1e, 0x6c, 0xd4, 0x5a, 0xa2, 0x26, 0x80, 0x1a, 0xb7, 0x35,
- 0x07, 0xf7, 0x83, 0xb2, 0x66, 0x35, 0x31, 0x54, 0x09, 0xfe, 0x08, 0x0f, 0x18, 0xf7, 0x2e, 0x7c,
- 0x26, 0xaf, 0xb3, 0x2f, 0xce, 0xd7, 0x91, 0x6f, 0x7d, 0x26, 0x67, 0xa9, 0xb1, 0x23, 0x31, 0x15,
- 0xe1, 0x4c, 0x47, 0x96, 0xe6, 0xeb, 0xc8, 0x91, 0x8e, 0xaf, 0x58, 0xfb, 0xb0, 0xce, 0xf8, 0xf5,
- 0x5a, 0x5b, 0x73, 0x70, 0xde, 0x61, 0x7c, 0xb6, 0xce, 0x6f, 0x60, 0x3d, 0xa3, 0x81, 0xe4, 0xa2,
- 0xae, 0xb6, 0xf6, 0x1c, 0x8c, 0x6b, 0x26, 0xbc, 0xa4, 0xec, 0x9d, 0x03, 0x54, 0xeb, 0x64, 0x15,
- 0x16, 0x78, 0xaa, 0x4f, 0x8e, 0xed, 0x2e, 0xf0, 0x54, 0x4d, 0x9e, 0x23, 0x75, 0xd9, 0xe1, 0x71,
- 0xb5, 0x5d, 0x63, 0xa9, 0x53, 0x1c, 0xfb, 0x6f, 0x78, 0x31, 0x7a, 0xa2, 0xa1, 0xbd, 0x2c, 0xe1,
- 0xc2, 0x9c, 0x58, 0x34, 0x94, 0xf7, 0xdc, 0x8f, 0x72, 0x5a, 0x4c, 0x5a, 0xda, 0xe8, 0xfd, 0xa5,
- 0x01, 0xed, 0xe2, 0x03, 0x89, 0x7c, 0x5e, 0x1f, 0xde, 0x9b, 0xef, 0xfe, 0x1e, 0x53, 0x41, 0xb8,
- 0x99, 0x72, 0xc2, 0x7f, 0x5e, 0x4d, 0xf8, 0xff, 0x77, 0xb0, 0xf9, 0x0c, 0xa0, 0x60, 0x97, 0xbe,
- 0xda, 0x6e, 0x1b, 0x33, 0xbb, 0xed, 0x42, 0x67, 0x1c, 0xf8, 0xde, 0xd8, 0x4f, 0x46, 0x11, 0xc5,
- 0xb9, 0x74, 0xc5, 0x85, 0x71, 0xe0, 0xbf, 0x44, 0x4f, 0x01, 0xe0, 0xc3, 0x37, 0x34, 0x90, 0x99,
- 0x6e, 0x0a, 0x02, 0x8e, 0xd1, 0xd3, 0xfb, 0xdb, 0x02, 0x74, 0x6a, 0xdf, 0x74, 0x6a, 0x72, 0x4f,
- 0xfc, 0xb8, 0xc8, 0xa3, 0x7f, 0xab, 0xcb, 0x47, 0x4c, 0xf0, 0x2e, 0x31, 0x17, 0x53, 0x4b, 0x4c,
- 0xf4, 0xa5, 0x40, 0x3e, 0x00, 0x10, 0x13, 0x2f, 0xf5, 0x83, 0x33, 0x6a, 0xe8, 0x2d, 0xd7, 0x16,
- 0x93, 0x3e, 0x3a, 0xd4, 0x9d, 0x26, 0x26, 0x1e, 0x15, 0x82, 0x8b, 0xcc, 0xf4, 0xbe, 0x2d, 0x26,
- 0x5f, 0x6a, 0xdb, 0xc4, 0x8e, 0x04, 0x57, 0x13, 0x88, 0x79, 0x06, 0xb6, 0x98, 0xbc, 0x40, 0x87,
- 0xca, 0x2a, 0x8b, 0xac, 0x38, 0xf0, 0xb6, 0x64, 0x95, 0x55, 0x56, 0x59, 0x71, 0xe0, 0xb5, 0x65,
- 0x3d, 0xab, 0x2c, 0xb3, 0xe2, 0xcc, 0xdb, 0x96, 0xb5, 0xac, 0xb2, 0xca, 0x6a, 0x17, 0xb1, 0x26,
- 0x6b, 0xef, 0xef, 0x0d, 0xe8, 0xd4, 0xbe, 0x4e, 0x55, 0x03, 0x13, 0xe1, 0x65, 0x11, 0xa5, 0xa9,
- 0xfa, 0x90, 0xc2, 0xab, 0x1b, 0x12, 0x31, 0x30, 0x1e, 0xc5, 0x97, 0x08, 0x4f, 0xe4, 0x49, 0x52,
- 0x7c, 0x68, 0x59, 0xae, 0x9d, 0x08, 0x17, 0x1d, 0x66, 0x39, 0x93, 0x98, 0xae, 0x59, 0x2c, 0x0f,
- 0xd0, 0x41, 0x7e, 0x09, 0x24, 0x11, 0x5e, 0x9e, 0xb0, 0x44, 0x52, 0x21, 0xf2, 0x54, 0xb2, 0x61,
- 0xf9, 0x51, 0xb0, 0x9e, 0x88, 0xd3, 0xd9, 0x05, 0xf2, 0x48, 0xb3, 0x99, 0xcb, 0xc6, 0xb4, 0xac,
- 0x9d, 0x88, 0x57, 0xfa, 0xe6, 0x38, 0x70, 0x7e, 0xf8, 0x71, 0xeb, 0xbd, 0x7f, 0xff, 0xb8, 0xf5,
- 0xde, 0x9f, 0xa7, 0x5b, 0x8d, 0x1f, 0xa6, 0x5b, 0x8d, 0x7f, 0x4c, 0xb7, 0x1a, 0xff, 0x9d, 0x6e,
- 0x35, 0x86, 0x4b, 0xfa, 0xcf, 0x95, 0x5f, 0xfd, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xc4, 0x4e, 0x24,
- 0x22, 0xc4, 0x11, 0x00, 0x00,
-}
-
-func (m *Metrics) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Metrics) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *Metrics) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.MemoryOomControl != nil {
- {
- size, err := m.MemoryOomControl.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x4a
- }
- if m.CgroupStats != nil {
- {
- size, err := m.CgroupStats.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x42
- }
- if len(m.Network) > 0 {
- for iNdEx := len(m.Network) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Network[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x3a
- }
- }
- if m.Rdma != nil {
- {
- size, err := m.Rdma.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x32
- }
- if m.Blkio != nil {
- {
- size, err := m.Blkio.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2a
- }
- if m.Memory != nil {
- {
- size, err := m.Memory.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x22
- }
- if m.CPU != nil {
- {
- size, err := m.CPU.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
- }
- if m.Pids != nil {
- {
- size, err := m.Pids.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
- }
- if len(m.Hugetlb) > 0 {
- for iNdEx := len(m.Hugetlb) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Hugetlb[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
- }
- }
- return len(dAtA) - i, nil
-}
-
-func (m *HugetlbStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *HugetlbStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *HugetlbStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if len(m.Pagesize) > 0 {
- i -= len(m.Pagesize)
- copy(dAtA[i:], m.Pagesize)
- i = encodeVarintMetrics(dAtA, i, uint64(len(m.Pagesize)))
- i--
- dAtA[i] = 0x22
- }
- if m.Failcnt != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Failcnt))
- i--
- dAtA[i] = 0x18
- }
- if m.Max != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Max))
- i--
- dAtA[i] = 0x10
- }
- if m.Usage != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Usage))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *PidsStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *PidsStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *PidsStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.Limit != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Limit))
- i--
- dAtA[i] = 0x10
- }
- if m.Current != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Current))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *CPUStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *CPUStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *CPUStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.Throttling != nil {
- {
- size, err := m.Throttling.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
- }
- if m.Usage != nil {
- {
- size, err := m.Usage.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
-}
-
-func (m *CPUUsage) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *CPUUsage) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *CPUUsage) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if len(m.PerCPU) > 0 {
- dAtA11 := make([]byte, len(m.PerCPU)*10)
- var j10 int
- for _, num := range m.PerCPU {
- for num >= 1<<7 {
- dAtA11[j10] = uint8(uint64(num)&0x7f | 0x80)
- num >>= 7
- j10++
- }
- dAtA11[j10] = uint8(num)
- j10++
- }
- i -= j10
- copy(dAtA[i:], dAtA11[:j10])
- i = encodeVarintMetrics(dAtA, i, uint64(j10))
- i--
- dAtA[i] = 0x22
- }
- if m.User != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.User))
- i--
- dAtA[i] = 0x18
- }
- if m.Kernel != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Kernel))
- i--
- dAtA[i] = 0x10
- }
- if m.Total != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Total))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *Throttle) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Throttle) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *Throttle) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.ThrottledTime != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.ThrottledTime))
- i--
- dAtA[i] = 0x18
- }
- if m.ThrottledPeriods != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.ThrottledPeriods))
- i--
- dAtA[i] = 0x10
- }
- if m.Periods != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Periods))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *MemoryStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *MemoryStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *MemoryStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.KernelTCP != nil {
- {
- size, err := m.KernelTCP.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0xa2
- }
- if m.Kernel != nil {
- {
- size, err := m.Kernel.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0x9a
- }
- if m.Swap != nil {
- {
- size, err := m.Swap.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0x92
- }
- if m.Usage != nil {
- {
- size, err := m.Usage.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0x8a
- }
- if m.TotalUnevictable != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalUnevictable))
- i--
- dAtA[i] = 0x2
- i--
- dAtA[i] = 0x80
- }
- if m.TotalActiveFile != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalActiveFile))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xf8
- }
- if m.TotalInactiveFile != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalInactiveFile))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xf0
- }
- if m.TotalActiveAnon != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalActiveAnon))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xe8
- }
- if m.TotalInactiveAnon != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalInactiveAnon))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xe0
- }
- if m.TotalPgMajFault != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgMajFault))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xd8
- }
- if m.TotalPgFault != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgFault))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xd0
- }
- if m.TotalPgPgOut != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgPgOut))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xc8
- }
- if m.TotalPgPgIn != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgPgIn))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xc0
- }
- if m.TotalWriteback != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalWriteback))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xb8
- }
- if m.TotalDirty != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalDirty))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xb0
- }
- if m.TotalMappedFile != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalMappedFile))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xa8
- }
- if m.TotalRSSHuge != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalRSSHuge))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0xa0
- }
- if m.TotalRSS != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalRSS))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x98
- }
- if m.TotalCache != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TotalCache))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x90
- }
- if m.HierarchicalSwapLimit != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.HierarchicalSwapLimit))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x88
- }
- if m.HierarchicalMemoryLimit != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.HierarchicalMemoryLimit))
- i--
- dAtA[i] = 0x1
- i--
- dAtA[i] = 0x80
- }
- if m.Unevictable != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Unevictable))
- i--
- dAtA[i] = 0x78
- }
- if m.ActiveFile != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.ActiveFile))
- i--
- dAtA[i] = 0x70
- }
- if m.InactiveFile != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.InactiveFile))
- i--
- dAtA[i] = 0x68
- }
- if m.ActiveAnon != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.ActiveAnon))
- i--
- dAtA[i] = 0x60
- }
- if m.InactiveAnon != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.InactiveAnon))
- i--
- dAtA[i] = 0x58
- }
- if m.PgMajFault != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.PgMajFault))
- i--
- dAtA[i] = 0x50
- }
- if m.PgFault != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.PgFault))
- i--
- dAtA[i] = 0x48
- }
- if m.PgPgOut != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.PgPgOut))
- i--
- dAtA[i] = 0x40
- }
- if m.PgPgIn != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.PgPgIn))
- i--
- dAtA[i] = 0x38
- }
- if m.Writeback != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Writeback))
- i--
- dAtA[i] = 0x30
- }
- if m.Dirty != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Dirty))
- i--
- dAtA[i] = 0x28
- }
- if m.MappedFile != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.MappedFile))
- i--
- dAtA[i] = 0x20
- }
- if m.RSSHuge != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.RSSHuge))
- i--
- dAtA[i] = 0x18
- }
- if m.RSS != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.RSS))
- i--
- dAtA[i] = 0x10
- }
- if m.Cache != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Cache))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *MemoryEntry) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *MemoryEntry) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *MemoryEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.Failcnt != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Failcnt))
- i--
- dAtA[i] = 0x20
- }
- if m.Max != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Max))
- i--
- dAtA[i] = 0x18
- }
- if m.Usage != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Usage))
- i--
- dAtA[i] = 0x10
- }
- if m.Limit != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Limit))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *MemoryOomControl) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *MemoryOomControl) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *MemoryOomControl) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.OomKill != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.OomKill))
- i--
- dAtA[i] = 0x18
- }
- if m.UnderOom != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.UnderOom))
- i--
- dAtA[i] = 0x10
- }
- if m.OomKillDisable != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.OomKillDisable))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func (m *BlkIOStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *BlkIOStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *BlkIOStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if len(m.SectorsRecursive) > 0 {
- for iNdEx := len(m.SectorsRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.SectorsRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x42
- }
- }
- if len(m.IoTimeRecursive) > 0 {
- for iNdEx := len(m.IoTimeRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoTimeRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x3a
- }
- }
- if len(m.IoMergedRecursive) > 0 {
- for iNdEx := len(m.IoMergedRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoMergedRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x32
- }
- }
- if len(m.IoWaitTimeRecursive) > 0 {
- for iNdEx := len(m.IoWaitTimeRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoWaitTimeRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2a
- }
- }
- if len(m.IoServiceTimeRecursive) > 0 {
- for iNdEx := len(m.IoServiceTimeRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoServiceTimeRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x22
- }
- }
- if len(m.IoQueuedRecursive) > 0 {
- for iNdEx := len(m.IoQueuedRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoQueuedRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
- }
- }
- if len(m.IoServicedRecursive) > 0 {
- for iNdEx := len(m.IoServicedRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoServicedRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
- }
- }
- if len(m.IoServiceBytesRecursive) > 0 {
- for iNdEx := len(m.IoServiceBytesRecursive) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.IoServiceBytesRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
- }
- }
- return len(dAtA) - i, nil
-}
-
-func (m *BlkIOEntry) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *BlkIOEntry) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *BlkIOEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.Value != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Value))
- i--
- dAtA[i] = 0x28
- }
- if m.Minor != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Minor))
- i--
- dAtA[i] = 0x20
- }
- if m.Major != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.Major))
- i--
- dAtA[i] = 0x18
- }
- if len(m.Device) > 0 {
- i -= len(m.Device)
- copy(dAtA[i:], m.Device)
- i = encodeVarintMetrics(dAtA, i, uint64(len(m.Device)))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Op) > 0 {
- i -= len(m.Op)
- copy(dAtA[i:], m.Op)
- i = encodeVarintMetrics(dAtA, i, uint64(len(m.Op)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
-}
-
-func (m *RdmaStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *RdmaStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *RdmaStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if len(m.Limit) > 0 {
- for iNdEx := len(m.Limit) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Limit[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
- }
- }
- if len(m.Current) > 0 {
- for iNdEx := len(m.Current) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Current[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintMetrics(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
- }
- }
- return len(dAtA) - i, nil
-}
-
-func (m *RdmaEntry) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *RdmaEntry) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *RdmaEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.HcaObjects != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.HcaObjects))
- i--
- dAtA[i] = 0x18
- }
- if m.HcaHandles != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.HcaHandles))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Device) > 0 {
- i -= len(m.Device)
- copy(dAtA[i:], m.Device)
- i = encodeVarintMetrics(dAtA, i, uint64(len(m.Device)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
-}
-
-func (m *NetworkStat) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *NetworkStat) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *NetworkStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.TxDropped != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TxDropped))
- i--
- dAtA[i] = 0x48
- }
- if m.TxErrors != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TxErrors))
- i--
- dAtA[i] = 0x40
- }
- if m.TxPackets != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TxPackets))
- i--
- dAtA[i] = 0x38
- }
- if m.TxBytes != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.TxBytes))
- i--
- dAtA[i] = 0x30
- }
- if m.RxDropped != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.RxDropped))
- i--
- dAtA[i] = 0x28
- }
- if m.RxErrors != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.RxErrors))
- i--
- dAtA[i] = 0x20
- }
- if m.RxPackets != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.RxPackets))
- i--
- dAtA[i] = 0x18
- }
- if m.RxBytes != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.RxBytes))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Name) > 0 {
- i -= len(m.Name)
- copy(dAtA[i:], m.Name)
- i = encodeVarintMetrics(dAtA, i, uint64(len(m.Name)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
-}
-
-func (m *CgroupStats) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *CgroupStats) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *CgroupStats) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.XXX_unrecognized != nil {
- i -= len(m.XXX_unrecognized)
- copy(dAtA[i:], m.XXX_unrecognized)
- }
- if m.NrIoWait != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.NrIoWait))
- i--
- dAtA[i] = 0x28
- }
- if m.NrUninterruptible != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.NrUninterruptible))
- i--
- dAtA[i] = 0x20
- }
- if m.NrStopped != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.NrStopped))
- i--
- dAtA[i] = 0x18
- }
- if m.NrRunning != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.NrRunning))
- i--
- dAtA[i] = 0x10
- }
- if m.NrSleeping != 0 {
- i = encodeVarintMetrics(dAtA, i, uint64(m.NrSleeping))
- i--
- dAtA[i] = 0x8
- }
- return len(dAtA) - i, nil
-}
-
-func encodeVarintMetrics(dAtA []byte, offset int, v uint64) int {
- offset -= sovMetrics(v)
- base := offset
- for v >= 1<<7 {
- dAtA[offset] = uint8(v&0x7f | 0x80)
- v >>= 7
- offset++
- }
- dAtA[offset] = uint8(v)
- return base
-}
-func (m *Metrics) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if len(m.Hugetlb) > 0 {
- for _, e := range m.Hugetlb {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if m.Pids != nil {
- l = m.Pids.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.CPU != nil {
- l = m.CPU.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.Memory != nil {
- l = m.Memory.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.Blkio != nil {
- l = m.Blkio.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.Rdma != nil {
- l = m.Rdma.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if len(m.Network) > 0 {
- for _, e := range m.Network {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if m.CgroupStats != nil {
- l = m.CgroupStats.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.MemoryOomControl != nil {
- l = m.MemoryOomControl.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *HugetlbStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Usage != 0 {
- n += 1 + sovMetrics(uint64(m.Usage))
- }
- if m.Max != 0 {
- n += 1 + sovMetrics(uint64(m.Max))
- }
- if m.Failcnt != 0 {
- n += 1 + sovMetrics(uint64(m.Failcnt))
- }
- l = len(m.Pagesize)
- if l > 0 {
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *PidsStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Current != 0 {
- n += 1 + sovMetrics(uint64(m.Current))
- }
- if m.Limit != 0 {
- n += 1 + sovMetrics(uint64(m.Limit))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *CPUStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Usage != nil {
- l = m.Usage.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.Throttling != nil {
- l = m.Throttling.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *CPUUsage) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Total != 0 {
- n += 1 + sovMetrics(uint64(m.Total))
- }
- if m.Kernel != 0 {
- n += 1 + sovMetrics(uint64(m.Kernel))
- }
- if m.User != 0 {
- n += 1 + sovMetrics(uint64(m.User))
- }
- if len(m.PerCPU) > 0 {
- l = 0
- for _, e := range m.PerCPU {
- l += sovMetrics(uint64(e))
- }
- n += 1 + sovMetrics(uint64(l)) + l
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *Throttle) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Periods != 0 {
- n += 1 + sovMetrics(uint64(m.Periods))
- }
- if m.ThrottledPeriods != 0 {
- n += 1 + sovMetrics(uint64(m.ThrottledPeriods))
- }
- if m.ThrottledTime != 0 {
- n += 1 + sovMetrics(uint64(m.ThrottledTime))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *MemoryStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Cache != 0 {
- n += 1 + sovMetrics(uint64(m.Cache))
- }
- if m.RSS != 0 {
- n += 1 + sovMetrics(uint64(m.RSS))
- }
- if m.RSSHuge != 0 {
- n += 1 + sovMetrics(uint64(m.RSSHuge))
- }
- if m.MappedFile != 0 {
- n += 1 + sovMetrics(uint64(m.MappedFile))
- }
- if m.Dirty != 0 {
- n += 1 + sovMetrics(uint64(m.Dirty))
- }
- if m.Writeback != 0 {
- n += 1 + sovMetrics(uint64(m.Writeback))
- }
- if m.PgPgIn != 0 {
- n += 1 + sovMetrics(uint64(m.PgPgIn))
- }
- if m.PgPgOut != 0 {
- n += 1 + sovMetrics(uint64(m.PgPgOut))
- }
- if m.PgFault != 0 {
- n += 1 + sovMetrics(uint64(m.PgFault))
- }
- if m.PgMajFault != 0 {
- n += 1 + sovMetrics(uint64(m.PgMajFault))
- }
- if m.InactiveAnon != 0 {
- n += 1 + sovMetrics(uint64(m.InactiveAnon))
- }
- if m.ActiveAnon != 0 {
- n += 1 + sovMetrics(uint64(m.ActiveAnon))
- }
- if m.InactiveFile != 0 {
- n += 1 + sovMetrics(uint64(m.InactiveFile))
- }
- if m.ActiveFile != 0 {
- n += 1 + sovMetrics(uint64(m.ActiveFile))
- }
- if m.Unevictable != 0 {
- n += 1 + sovMetrics(uint64(m.Unevictable))
- }
- if m.HierarchicalMemoryLimit != 0 {
- n += 2 + sovMetrics(uint64(m.HierarchicalMemoryLimit))
- }
- if m.HierarchicalSwapLimit != 0 {
- n += 2 + sovMetrics(uint64(m.HierarchicalSwapLimit))
- }
- if m.TotalCache != 0 {
- n += 2 + sovMetrics(uint64(m.TotalCache))
- }
- if m.TotalRSS != 0 {
- n += 2 + sovMetrics(uint64(m.TotalRSS))
- }
- if m.TotalRSSHuge != 0 {
- n += 2 + sovMetrics(uint64(m.TotalRSSHuge))
- }
- if m.TotalMappedFile != 0 {
- n += 2 + sovMetrics(uint64(m.TotalMappedFile))
- }
- if m.TotalDirty != 0 {
- n += 2 + sovMetrics(uint64(m.TotalDirty))
- }
- if m.TotalWriteback != 0 {
- n += 2 + sovMetrics(uint64(m.TotalWriteback))
- }
- if m.TotalPgPgIn != 0 {
- n += 2 + sovMetrics(uint64(m.TotalPgPgIn))
- }
- if m.TotalPgPgOut != 0 {
- n += 2 + sovMetrics(uint64(m.TotalPgPgOut))
- }
- if m.TotalPgFault != 0 {
- n += 2 + sovMetrics(uint64(m.TotalPgFault))
- }
- if m.TotalPgMajFault != 0 {
- n += 2 + sovMetrics(uint64(m.TotalPgMajFault))
- }
- if m.TotalInactiveAnon != 0 {
- n += 2 + sovMetrics(uint64(m.TotalInactiveAnon))
- }
- if m.TotalActiveAnon != 0 {
- n += 2 + sovMetrics(uint64(m.TotalActiveAnon))
- }
- if m.TotalInactiveFile != 0 {
- n += 2 + sovMetrics(uint64(m.TotalInactiveFile))
- }
- if m.TotalActiveFile != 0 {
- n += 2 + sovMetrics(uint64(m.TotalActiveFile))
- }
- if m.TotalUnevictable != 0 {
- n += 2 + sovMetrics(uint64(m.TotalUnevictable))
- }
- if m.Usage != nil {
- l = m.Usage.Size()
- n += 2 + l + sovMetrics(uint64(l))
- }
- if m.Swap != nil {
- l = m.Swap.Size()
- n += 2 + l + sovMetrics(uint64(l))
- }
- if m.Kernel != nil {
- l = m.Kernel.Size()
- n += 2 + l + sovMetrics(uint64(l))
- }
- if m.KernelTCP != nil {
- l = m.KernelTCP.Size()
- n += 2 + l + sovMetrics(uint64(l))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *MemoryEntry) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Limit != 0 {
- n += 1 + sovMetrics(uint64(m.Limit))
- }
- if m.Usage != 0 {
- n += 1 + sovMetrics(uint64(m.Usage))
- }
- if m.Max != 0 {
- n += 1 + sovMetrics(uint64(m.Max))
- }
- if m.Failcnt != 0 {
- n += 1 + sovMetrics(uint64(m.Failcnt))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *MemoryOomControl) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.OomKillDisable != 0 {
- n += 1 + sovMetrics(uint64(m.OomKillDisable))
- }
- if m.UnderOom != 0 {
- n += 1 + sovMetrics(uint64(m.UnderOom))
- }
- if m.OomKill != 0 {
- n += 1 + sovMetrics(uint64(m.OomKill))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *BlkIOStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if len(m.IoServiceBytesRecursive) > 0 {
- for _, e := range m.IoServiceBytesRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.IoServicedRecursive) > 0 {
- for _, e := range m.IoServicedRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.IoQueuedRecursive) > 0 {
- for _, e := range m.IoQueuedRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.IoServiceTimeRecursive) > 0 {
- for _, e := range m.IoServiceTimeRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.IoWaitTimeRecursive) > 0 {
- for _, e := range m.IoWaitTimeRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.IoMergedRecursive) > 0 {
- for _, e := range m.IoMergedRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.IoTimeRecursive) > 0 {
- for _, e := range m.IoTimeRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.SectorsRecursive) > 0 {
- for _, e := range m.SectorsRecursive {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *BlkIOEntry) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Op)
- if l > 0 {
- n += 1 + l + sovMetrics(uint64(l))
- }
- l = len(m.Device)
- if l > 0 {
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.Major != 0 {
- n += 1 + sovMetrics(uint64(m.Major))
- }
- if m.Minor != 0 {
- n += 1 + sovMetrics(uint64(m.Minor))
- }
- if m.Value != 0 {
- n += 1 + sovMetrics(uint64(m.Value))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *RdmaStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if len(m.Current) > 0 {
- for _, e := range m.Current {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if len(m.Limit) > 0 {
- for _, e := range m.Limit {
- l = e.Size()
- n += 1 + l + sovMetrics(uint64(l))
- }
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *RdmaEntry) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Device)
- if l > 0 {
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.HcaHandles != 0 {
- n += 1 + sovMetrics(uint64(m.HcaHandles))
- }
- if m.HcaObjects != 0 {
- n += 1 + sovMetrics(uint64(m.HcaObjects))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *NetworkStat) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Name)
- if l > 0 {
- n += 1 + l + sovMetrics(uint64(l))
- }
- if m.RxBytes != 0 {
- n += 1 + sovMetrics(uint64(m.RxBytes))
- }
- if m.RxPackets != 0 {
- n += 1 + sovMetrics(uint64(m.RxPackets))
- }
- if m.RxErrors != 0 {
- n += 1 + sovMetrics(uint64(m.RxErrors))
- }
- if m.RxDropped != 0 {
- n += 1 + sovMetrics(uint64(m.RxDropped))
- }
- if m.TxBytes != 0 {
- n += 1 + sovMetrics(uint64(m.TxBytes))
- }
- if m.TxPackets != 0 {
- n += 1 + sovMetrics(uint64(m.TxPackets))
- }
- if m.TxErrors != 0 {
- n += 1 + sovMetrics(uint64(m.TxErrors))
- }
- if m.TxDropped != 0 {
- n += 1 + sovMetrics(uint64(m.TxDropped))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func (m *CgroupStats) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.NrSleeping != 0 {
- n += 1 + sovMetrics(uint64(m.NrSleeping))
- }
- if m.NrRunning != 0 {
- n += 1 + sovMetrics(uint64(m.NrRunning))
- }
- if m.NrStopped != 0 {
- n += 1 + sovMetrics(uint64(m.NrStopped))
- }
- if m.NrUninterruptible != 0 {
- n += 1 + sovMetrics(uint64(m.NrUninterruptible))
- }
- if m.NrIoWait != 0 {
- n += 1 + sovMetrics(uint64(m.NrIoWait))
- }
- if m.XXX_unrecognized != nil {
- n += len(m.XXX_unrecognized)
- }
- return n
-}
-
-func sovMetrics(x uint64) (n int) {
- return (math_bits.Len64(x|1) + 6) / 7
-}
-func sozMetrics(x uint64) (n int) {
- return sovMetrics(uint64((x << 1) ^ uint64((int64(x) >> 63))))
-}
-func (this *Metrics) String() string {
- if this == nil {
- return "nil"
- }
- repeatedStringForHugetlb := "[]*HugetlbStat{"
- for _, f := range this.Hugetlb {
- repeatedStringForHugetlb += strings.Replace(f.String(), "HugetlbStat", "HugetlbStat", 1) + ","
- }
- repeatedStringForHugetlb += "}"
- repeatedStringForNetwork := "[]*NetworkStat{"
- for _, f := range this.Network {
- repeatedStringForNetwork += strings.Replace(f.String(), "NetworkStat", "NetworkStat", 1) + ","
- }
- repeatedStringForNetwork += "}"
- s := strings.Join([]string{`&Metrics{`,
- `Hugetlb:` + repeatedStringForHugetlb + `,`,
- `Pids:` + strings.Replace(this.Pids.String(), "PidsStat", "PidsStat", 1) + `,`,
- `CPU:` + strings.Replace(this.CPU.String(), "CPUStat", "CPUStat", 1) + `,`,
- `Memory:` + strings.Replace(this.Memory.String(), "MemoryStat", "MemoryStat", 1) + `,`,
- `Blkio:` + strings.Replace(this.Blkio.String(), "BlkIOStat", "BlkIOStat", 1) + `,`,
- `Rdma:` + strings.Replace(this.Rdma.String(), "RdmaStat", "RdmaStat", 1) + `,`,
- `Network:` + repeatedStringForNetwork + `,`,
- `CgroupStats:` + strings.Replace(this.CgroupStats.String(), "CgroupStats", "CgroupStats", 1) + `,`,
- `MemoryOomControl:` + strings.Replace(this.MemoryOomControl.String(), "MemoryOomControl", "MemoryOomControl", 1) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *HugetlbStat) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&HugetlbStat{`,
- `Usage:` + fmt.Sprintf("%v", this.Usage) + `,`,
- `Max:` + fmt.Sprintf("%v", this.Max) + `,`,
- `Failcnt:` + fmt.Sprintf("%v", this.Failcnt) + `,`,
- `Pagesize:` + fmt.Sprintf("%v", this.Pagesize) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *PidsStat) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&PidsStat{`,
- `Current:` + fmt.Sprintf("%v", this.Current) + `,`,
- `Limit:` + fmt.Sprintf("%v", this.Limit) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *CPUStat) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&CPUStat{`,
- `Usage:` + strings.Replace(this.Usage.String(), "CPUUsage", "CPUUsage", 1) + `,`,
- `Throttling:` + strings.Replace(this.Throttling.String(), "Throttle", "Throttle", 1) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *CPUUsage) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&CPUUsage{`,
- `Total:` + fmt.Sprintf("%v", this.Total) + `,`,
- `Kernel:` + fmt.Sprintf("%v", this.Kernel) + `,`,
- `User:` + fmt.Sprintf("%v", this.User) + `,`,
- `PerCPU:` + fmt.Sprintf("%v", this.PerCPU) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *Throttle) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&Throttle{`,
- `Periods:` + fmt.Sprintf("%v", this.Periods) + `,`,
- `ThrottledPeriods:` + fmt.Sprintf("%v", this.ThrottledPeriods) + `,`,
- `ThrottledTime:` + fmt.Sprintf("%v", this.ThrottledTime) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *MemoryStat) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&MemoryStat{`,
- `Cache:` + fmt.Sprintf("%v", this.Cache) + `,`,
- `RSS:` + fmt.Sprintf("%v", this.RSS) + `,`,
- `RSSHuge:` + fmt.Sprintf("%v", this.RSSHuge) + `,`,
- `MappedFile:` + fmt.Sprintf("%v", this.MappedFile) + `,`,
- `Dirty:` + fmt.Sprintf("%v", this.Dirty) + `,`,
- `Writeback:` + fmt.Sprintf("%v", this.Writeback) + `,`,
- `PgPgIn:` + fmt.Sprintf("%v", this.PgPgIn) + `,`,
- `PgPgOut:` + fmt.Sprintf("%v", this.PgPgOut) + `,`,
- `PgFault:` + fmt.Sprintf("%v", this.PgFault) + `,`,
- `PgMajFault:` + fmt.Sprintf("%v", this.PgMajFault) + `,`,
- `InactiveAnon:` + fmt.Sprintf("%v", this.InactiveAnon) + `,`,
- `ActiveAnon:` + fmt.Sprintf("%v", this.ActiveAnon) + `,`,
- `InactiveFile:` + fmt.Sprintf("%v", this.InactiveFile) + `,`,
- `ActiveFile:` + fmt.Sprintf("%v", this.ActiveFile) + `,`,
- `Unevictable:` + fmt.Sprintf("%v", this.Unevictable) + `,`,
- `HierarchicalMemoryLimit:` + fmt.Sprintf("%v", this.HierarchicalMemoryLimit) + `,`,
- `HierarchicalSwapLimit:` + fmt.Sprintf("%v", this.HierarchicalSwapLimit) + `,`,
- `TotalCache:` + fmt.Sprintf("%v", this.TotalCache) + `,`,
- `TotalRSS:` + fmt.Sprintf("%v", this.TotalRSS) + `,`,
- `TotalRSSHuge:` + fmt.Sprintf("%v", this.TotalRSSHuge) + `,`,
- `TotalMappedFile:` + fmt.Sprintf("%v", this.TotalMappedFile) + `,`,
- `TotalDirty:` + fmt.Sprintf("%v", this.TotalDirty) + `,`,
- `TotalWriteback:` + fmt.Sprintf("%v", this.TotalWriteback) + `,`,
- `TotalPgPgIn:` + fmt.Sprintf("%v", this.TotalPgPgIn) + `,`,
- `TotalPgPgOut:` + fmt.Sprintf("%v", this.TotalPgPgOut) + `,`,
- `TotalPgFault:` + fmt.Sprintf("%v", this.TotalPgFault) + `,`,
- `TotalPgMajFault:` + fmt.Sprintf("%v", this.TotalPgMajFault) + `,`,
- `TotalInactiveAnon:` + fmt.Sprintf("%v", this.TotalInactiveAnon) + `,`,
- `TotalActiveAnon:` + fmt.Sprintf("%v", this.TotalActiveAnon) + `,`,
- `TotalInactiveFile:` + fmt.Sprintf("%v", this.TotalInactiveFile) + `,`,
- `TotalActiveFile:` + fmt.Sprintf("%v", this.TotalActiveFile) + `,`,
- `TotalUnevictable:` + fmt.Sprintf("%v", this.TotalUnevictable) + `,`,
- `Usage:` + strings.Replace(this.Usage.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
- `Swap:` + strings.Replace(this.Swap.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
- `Kernel:` + strings.Replace(this.Kernel.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
- `KernelTCP:` + strings.Replace(this.KernelTCP.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *MemoryEntry) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&MemoryEntry{`,
- `Limit:` + fmt.Sprintf("%v", this.Limit) + `,`,
- `Usage:` + fmt.Sprintf("%v", this.Usage) + `,`,
- `Max:` + fmt.Sprintf("%v", this.Max) + `,`,
- `Failcnt:` + fmt.Sprintf("%v", this.Failcnt) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *MemoryOomControl) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&MemoryOomControl{`,
- `OomKillDisable:` + fmt.Sprintf("%v", this.OomKillDisable) + `,`,
- `UnderOom:` + fmt.Sprintf("%v", this.UnderOom) + `,`,
- `OomKill:` + fmt.Sprintf("%v", this.OomKill) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *BlkIOStat) String() string {
- if this == nil {
- return "nil"
- }
- repeatedStringForIoServiceBytesRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoServiceBytesRecursive {
- repeatedStringForIoServiceBytesRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoServiceBytesRecursive += "}"
- repeatedStringForIoServicedRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoServicedRecursive {
- repeatedStringForIoServicedRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoServicedRecursive += "}"
- repeatedStringForIoQueuedRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoQueuedRecursive {
- repeatedStringForIoQueuedRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoQueuedRecursive += "}"
- repeatedStringForIoServiceTimeRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoServiceTimeRecursive {
- repeatedStringForIoServiceTimeRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoServiceTimeRecursive += "}"
- repeatedStringForIoWaitTimeRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoWaitTimeRecursive {
- repeatedStringForIoWaitTimeRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoWaitTimeRecursive += "}"
- repeatedStringForIoMergedRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoMergedRecursive {
- repeatedStringForIoMergedRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoMergedRecursive += "}"
- repeatedStringForIoTimeRecursive := "[]*BlkIOEntry{"
- for _, f := range this.IoTimeRecursive {
- repeatedStringForIoTimeRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForIoTimeRecursive += "}"
- repeatedStringForSectorsRecursive := "[]*BlkIOEntry{"
- for _, f := range this.SectorsRecursive {
- repeatedStringForSectorsRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
- }
- repeatedStringForSectorsRecursive += "}"
- s := strings.Join([]string{`&BlkIOStat{`,
- `IoServiceBytesRecursive:` + repeatedStringForIoServiceBytesRecursive + `,`,
- `IoServicedRecursive:` + repeatedStringForIoServicedRecursive + `,`,
- `IoQueuedRecursive:` + repeatedStringForIoQueuedRecursive + `,`,
- `IoServiceTimeRecursive:` + repeatedStringForIoServiceTimeRecursive + `,`,
- `IoWaitTimeRecursive:` + repeatedStringForIoWaitTimeRecursive + `,`,
- `IoMergedRecursive:` + repeatedStringForIoMergedRecursive + `,`,
- `IoTimeRecursive:` + repeatedStringForIoTimeRecursive + `,`,
- `SectorsRecursive:` + repeatedStringForSectorsRecursive + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *BlkIOEntry) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&BlkIOEntry{`,
- `Op:` + fmt.Sprintf("%v", this.Op) + `,`,
- `Device:` + fmt.Sprintf("%v", this.Device) + `,`,
- `Major:` + fmt.Sprintf("%v", this.Major) + `,`,
- `Minor:` + fmt.Sprintf("%v", this.Minor) + `,`,
- `Value:` + fmt.Sprintf("%v", this.Value) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *RdmaStat) String() string {
- if this == nil {
- return "nil"
- }
- repeatedStringForCurrent := "[]*RdmaEntry{"
- for _, f := range this.Current {
- repeatedStringForCurrent += strings.Replace(f.String(), "RdmaEntry", "RdmaEntry", 1) + ","
- }
- repeatedStringForCurrent += "}"
- repeatedStringForLimit := "[]*RdmaEntry{"
- for _, f := range this.Limit {
- repeatedStringForLimit += strings.Replace(f.String(), "RdmaEntry", "RdmaEntry", 1) + ","
- }
- repeatedStringForLimit += "}"
- s := strings.Join([]string{`&RdmaStat{`,
- `Current:` + repeatedStringForCurrent + `,`,
- `Limit:` + repeatedStringForLimit + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *RdmaEntry) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&RdmaEntry{`,
- `Device:` + fmt.Sprintf("%v", this.Device) + `,`,
- `HcaHandles:` + fmt.Sprintf("%v", this.HcaHandles) + `,`,
- `HcaObjects:` + fmt.Sprintf("%v", this.HcaObjects) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *NetworkStat) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&NetworkStat{`,
- `Name:` + fmt.Sprintf("%v", this.Name) + `,`,
- `RxBytes:` + fmt.Sprintf("%v", this.RxBytes) + `,`,
- `RxPackets:` + fmt.Sprintf("%v", this.RxPackets) + `,`,
- `RxErrors:` + fmt.Sprintf("%v", this.RxErrors) + `,`,
- `RxDropped:` + fmt.Sprintf("%v", this.RxDropped) + `,`,
- `TxBytes:` + fmt.Sprintf("%v", this.TxBytes) + `,`,
- `TxPackets:` + fmt.Sprintf("%v", this.TxPackets) + `,`,
- `TxErrors:` + fmt.Sprintf("%v", this.TxErrors) + `,`,
- `TxDropped:` + fmt.Sprintf("%v", this.TxDropped) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func (this *CgroupStats) String() string {
- if this == nil {
- return "nil"
- }
- s := strings.Join([]string{`&CgroupStats{`,
- `NrSleeping:` + fmt.Sprintf("%v", this.NrSleeping) + `,`,
- `NrRunning:` + fmt.Sprintf("%v", this.NrRunning) + `,`,
- `NrStopped:` + fmt.Sprintf("%v", this.NrStopped) + `,`,
- `NrUninterruptible:` + fmt.Sprintf("%v", this.NrUninterruptible) + `,`,
- `NrIoWait:` + fmt.Sprintf("%v", this.NrIoWait) + `,`,
- `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
- `}`,
- }, "")
- return s
-}
-func valueToStringMetrics(v interface{}) string {
- rv := reflect.ValueOf(v)
- if rv.IsNil() {
- return "nil"
- }
- pv := reflect.Indirect(rv).Interface()
- return fmt.Sprintf("*%v", pv)
-}
-func (m *Metrics) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: Metrics: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: Metrics: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Hugetlb", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Hugetlb = append(m.Hugetlb, &HugetlbStat{})
- if err := m.Hugetlb[len(m.Hugetlb)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Pids", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Pids == nil {
- m.Pids = &PidsStat{}
- }
- if err := m.Pids.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 3:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field CPU", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.CPU == nil {
- m.CPU = &CPUStat{}
- }
- if err := m.CPU.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 4:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Memory", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Memory == nil {
- m.Memory = &MemoryStat{}
- }
- if err := m.Memory.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 5:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Blkio", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Blkio == nil {
- m.Blkio = &BlkIOStat{}
- }
- if err := m.Blkio.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 6:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Rdma", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Rdma == nil {
- m.Rdma = &RdmaStat{}
- }
- if err := m.Rdma.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 7:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Network", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Network = append(m.Network, &NetworkStat{})
- if err := m.Network[len(m.Network)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 8:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field CgroupStats", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.CgroupStats == nil {
- m.CgroupStats = &CgroupStats{}
- }
- if err := m.CgroupStats.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 9:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field MemoryOomControl", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.MemoryOomControl == nil {
- m.MemoryOomControl = &MemoryOomControl{}
- }
- if err := m.MemoryOomControl.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *HugetlbStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: HugetlbStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: HugetlbStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
- }
- m.Usage = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Usage |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Max", wireType)
- }
- m.Max = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Max |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Failcnt", wireType)
- }
- m.Failcnt = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Failcnt |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Pagesize", wireType)
- }
- var stringLen uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLen |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Pagesize = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *PidsStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: PidsStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: PidsStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Current", wireType)
- }
- m.Current = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Current |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType)
- }
- m.Limit = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Limit |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *CPUStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: CPUStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: CPUStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Usage == nil {
- m.Usage = &CPUUsage{}
- }
- if err := m.Usage.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Throttling", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Throttling == nil {
- m.Throttling = &Throttle{}
- }
- if err := m.Throttling.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *CPUUsage) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: CPUUsage: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: CPUUsage: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Total", wireType)
- }
- m.Total = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Total |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Kernel", wireType)
- }
- m.Kernel = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Kernel |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field User", wireType)
- }
- m.User = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.User |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType == 0 {
- var v uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- v |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- m.PerCPU = append(m.PerCPU, v)
- } else if wireType == 2 {
- var packedLen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- packedLen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if packedLen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + packedLen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- var elementCount int
- var count int
- for _, integer := range dAtA[iNdEx:postIndex] {
- if integer < 128 {
- count++
- }
- }
- elementCount = count
- if elementCount != 0 && len(m.PerCPU) == 0 {
- m.PerCPU = make([]uint64, 0, elementCount)
- }
- for iNdEx < postIndex {
- var v uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- v |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- m.PerCPU = append(m.PerCPU, v)
- }
- } else {
- return fmt.Errorf("proto: wrong wireType = %d for field PerCPU", wireType)
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *Throttle) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: Throttle: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: Throttle: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Periods", wireType)
- }
- m.Periods = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Periods |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field ThrottledPeriods", wireType)
- }
- m.ThrottledPeriods = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.ThrottledPeriods |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field ThrottledTime", wireType)
- }
- m.ThrottledTime = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.ThrottledTime |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *MemoryStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: MemoryStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: MemoryStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Cache", wireType)
- }
- m.Cache = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Cache |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RSS", wireType)
- }
- m.RSS = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.RSS |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RSSHuge", wireType)
- }
- m.RSSHuge = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.RSSHuge |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field MappedFile", wireType)
- }
- m.MappedFile = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.MappedFile |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Dirty", wireType)
- }
- m.Dirty = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Dirty |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 6:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Writeback", wireType)
- }
- m.Writeback = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Writeback |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 7:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field PgPgIn", wireType)
- }
- m.PgPgIn = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.PgPgIn |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 8:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field PgPgOut", wireType)
- }
- m.PgPgOut = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.PgPgOut |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 9:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field PgFault", wireType)
- }
- m.PgFault = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.PgFault |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 10:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field PgMajFault", wireType)
- }
- m.PgMajFault = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.PgMajFault |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 11:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field InactiveAnon", wireType)
- }
- m.InactiveAnon = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.InactiveAnon |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 12:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field ActiveAnon", wireType)
- }
- m.ActiveAnon = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.ActiveAnon |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 13:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field InactiveFile", wireType)
- }
- m.InactiveFile = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.InactiveFile |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 14:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field ActiveFile", wireType)
- }
- m.ActiveFile = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.ActiveFile |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 15:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Unevictable", wireType)
- }
- m.Unevictable = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Unevictable |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 16:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field HierarchicalMemoryLimit", wireType)
- }
- m.HierarchicalMemoryLimit = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.HierarchicalMemoryLimit |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 17:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field HierarchicalSwapLimit", wireType)
- }
- m.HierarchicalSwapLimit = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.HierarchicalSwapLimit |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 18:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalCache", wireType)
- }
- m.TotalCache = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalCache |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 19:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalRSS", wireType)
- }
- m.TotalRSS = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalRSS |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 20:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalRSSHuge", wireType)
- }
- m.TotalRSSHuge = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalRSSHuge |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 21:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalMappedFile", wireType)
- }
- m.TotalMappedFile = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalMappedFile |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 22:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalDirty", wireType)
- }
- m.TotalDirty = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalDirty |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 23:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalWriteback", wireType)
- }
- m.TotalWriteback = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalWriteback |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 24:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalPgPgIn", wireType)
- }
- m.TotalPgPgIn = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalPgPgIn |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 25:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalPgPgOut", wireType)
- }
- m.TotalPgPgOut = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalPgPgOut |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 26:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalPgFault", wireType)
- }
- m.TotalPgFault = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalPgFault |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 27:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalPgMajFault", wireType)
- }
- m.TotalPgMajFault = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalPgMajFault |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 28:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalInactiveAnon", wireType)
- }
- m.TotalInactiveAnon = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalInactiveAnon |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 29:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalActiveAnon", wireType)
- }
- m.TotalActiveAnon = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalActiveAnon |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 30:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalInactiveFile", wireType)
- }
- m.TotalInactiveFile = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalInactiveFile |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 31:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalActiveFile", wireType)
- }
- m.TotalActiveFile = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalActiveFile |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 32:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TotalUnevictable", wireType)
- }
- m.TotalUnevictable = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TotalUnevictable |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 33:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Usage == nil {
- m.Usage = &MemoryEntry{}
- }
- if err := m.Usage.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 34:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Swap", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Swap == nil {
- m.Swap = &MemoryEntry{}
- }
- if err := m.Swap.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 35:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Kernel", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Kernel == nil {
- m.Kernel = &MemoryEntry{}
- }
- if err := m.Kernel.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 36:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field KernelTCP", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.KernelTCP == nil {
- m.KernelTCP = &MemoryEntry{}
- }
- if err := m.KernelTCP.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *MemoryEntry) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: MemoryEntry: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: MemoryEntry: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType)
- }
- m.Limit = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Limit |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
- }
- m.Usage = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Usage |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Max", wireType)
- }
- m.Max = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Max |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Failcnt", wireType)
- }
- m.Failcnt = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Failcnt |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *MemoryOomControl) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: MemoryOomControl: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: MemoryOomControl: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field OomKillDisable", wireType)
- }
- m.OomKillDisable = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.OomKillDisable |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field UnderOom", wireType)
- }
- m.UnderOom = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.UnderOom |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field OomKill", wireType)
- }
- m.OomKill = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.OomKill |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *BlkIOStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: BlkIOStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: BlkIOStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoServiceBytesRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoServiceBytesRecursive = append(m.IoServiceBytesRecursive, &BlkIOEntry{})
- if err := m.IoServiceBytesRecursive[len(m.IoServiceBytesRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoServicedRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoServicedRecursive = append(m.IoServicedRecursive, &BlkIOEntry{})
- if err := m.IoServicedRecursive[len(m.IoServicedRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 3:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoQueuedRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoQueuedRecursive = append(m.IoQueuedRecursive, &BlkIOEntry{})
- if err := m.IoQueuedRecursive[len(m.IoQueuedRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 4:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoServiceTimeRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoServiceTimeRecursive = append(m.IoServiceTimeRecursive, &BlkIOEntry{})
- if err := m.IoServiceTimeRecursive[len(m.IoServiceTimeRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 5:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoWaitTimeRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoWaitTimeRecursive = append(m.IoWaitTimeRecursive, &BlkIOEntry{})
- if err := m.IoWaitTimeRecursive[len(m.IoWaitTimeRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 6:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoMergedRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoMergedRecursive = append(m.IoMergedRecursive, &BlkIOEntry{})
- if err := m.IoMergedRecursive[len(m.IoMergedRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 7:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field IoTimeRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.IoTimeRecursive = append(m.IoTimeRecursive, &BlkIOEntry{})
- if err := m.IoTimeRecursive[len(m.IoTimeRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 8:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field SectorsRecursive", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.SectorsRecursive = append(m.SectorsRecursive, &BlkIOEntry{})
- if err := m.SectorsRecursive[len(m.SectorsRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *BlkIOEntry) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: BlkIOEntry: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: BlkIOEntry: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Op", wireType)
- }
- var stringLen uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLen |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Op = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType)
- }
- var stringLen uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLen |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Device = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Major", wireType)
- }
- m.Major = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Major |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Minor", wireType)
- }
- m.Minor = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Minor |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
- }
- m.Value = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Value |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RdmaStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RdmaStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RdmaStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Current", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Current = append(m.Current, &RdmaEntry{})
- if err := m.Current[len(m.Current)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Limit = append(m.Limit, &RdmaEntry{})
- if err := m.Limit[len(m.Limit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RdmaEntry) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RdmaEntry: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RdmaEntry: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType)
- }
- var stringLen uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLen |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Device = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field HcaHandles", wireType)
- }
- m.HcaHandles = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.HcaHandles |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field HcaObjects", wireType)
- }
- m.HcaObjects = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.HcaObjects |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *NetworkStat) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: NetworkStat: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: NetworkStat: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
- }
- var stringLen uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLen |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthMetrics
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthMetrics
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Name = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RxBytes", wireType)
- }
- m.RxBytes = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.RxBytes |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RxPackets", wireType)
- }
- m.RxPackets = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.RxPackets |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RxErrors", wireType)
- }
- m.RxErrors = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.RxErrors |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RxDropped", wireType)
- }
- m.RxDropped = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.RxDropped |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 6:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TxBytes", wireType)
- }
- m.TxBytes = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TxBytes |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 7:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TxPackets", wireType)
- }
- m.TxPackets = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TxPackets |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 8:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TxErrors", wireType)
- }
- m.TxErrors = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TxErrors |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 9:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field TxDropped", wireType)
- }
- m.TxDropped = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.TxDropped |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *CgroupStats) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: CgroupStats: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: CgroupStats: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field NrSleeping", wireType)
- }
- m.NrSleeping = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.NrSleeping |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field NrRunning", wireType)
- }
- m.NrRunning = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.NrRunning |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field NrStopped", wireType)
- }
- m.NrStopped = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.NrStopped |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 4:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field NrUninterruptible", wireType)
- }
- m.NrUninterruptible = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.NrUninterruptible |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field NrIoWait", wireType)
- }
- m.NrIoWait = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.NrIoWait |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- default:
- iNdEx = preIndex
- skippy, err := skipMetrics(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthMetrics
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func skipMetrics(dAtA []byte) (n int, err error) {
- l := len(dAtA)
- iNdEx := 0
- depth := 0
- for iNdEx < l {
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- wireType := int(wire & 0x7)
- switch wireType {
- case 0:
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- iNdEx++
- if dAtA[iNdEx-1] < 0x80 {
- break
- }
- }
- case 1:
- iNdEx += 8
- case 2:
- var length int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowMetrics
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- length |= (int(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if length < 0 {
- return 0, ErrInvalidLengthMetrics
- }
- iNdEx += length
- case 3:
- depth++
- case 4:
- if depth == 0 {
- return 0, ErrUnexpectedEndOfGroupMetrics
- }
- depth--
- case 5:
- iNdEx += 4
- default:
- return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
- }
- if iNdEx < 0 {
- return 0, ErrInvalidLengthMetrics
- }
- if depth == 0 {
- return iNdEx, nil
- }
- }
- return 0, io.ErrUnexpectedEOF
-}
-
-var (
- ErrInvalidLengthMetrics = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowMetrics = fmt.Errorf("proto: integer overflow")
- ErrUnexpectedEndOfGroupMetrics = fmt.Errorf("proto: unexpected end of group")
-)
diff --git a/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.proto b/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.proto
deleted file mode 100644
index b3f6cc37d..000000000
--- a/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.proto
+++ /dev/null
@@ -1,158 +0,0 @@
-syntax = "proto3";
-
-package io.containerd.cgroups.v1;
-
-import "gogoproto/gogo.proto";
-
-message Metrics {
- repeated HugetlbStat hugetlb = 1;
- PidsStat pids = 2;
- CPUStat cpu = 3 [(gogoproto.customname) = "CPU"];
- MemoryStat memory = 4;
- BlkIOStat blkio = 5;
- RdmaStat rdma = 6;
- repeated NetworkStat network = 7;
- CgroupStats cgroup_stats = 8;
- MemoryOomControl memory_oom_control = 9;
-}
-
-message HugetlbStat {
- uint64 usage = 1;
- uint64 max = 2;
- uint64 failcnt = 3;
- string pagesize = 4;
-}
-
-message PidsStat {
- uint64 current = 1;
- uint64 limit = 2;
-}
-
-message CPUStat {
- CPUUsage usage = 1;
- Throttle throttling = 2;
-}
-
-message CPUUsage {
- // values in nanoseconds
- uint64 total = 1;
- uint64 kernel = 2;
- uint64 user = 3;
- repeated uint64 per_cpu = 4 [(gogoproto.customname) = "PerCPU"];
-
-}
-
-message Throttle {
- uint64 periods = 1;
- uint64 throttled_periods = 2;
- uint64 throttled_time = 3;
-}
-
-message MemoryStat {
- uint64 cache = 1;
- uint64 rss = 2 [(gogoproto.customname) = "RSS"];
- uint64 rss_huge = 3 [(gogoproto.customname) = "RSSHuge"];
- uint64 mapped_file = 4;
- uint64 dirty = 5;
- uint64 writeback = 6;
- uint64 pg_pg_in = 7;
- uint64 pg_pg_out = 8;
- uint64 pg_fault = 9;
- uint64 pg_maj_fault = 10;
- uint64 inactive_anon = 11;
- uint64 active_anon = 12;
- uint64 inactive_file = 13;
- uint64 active_file = 14;
- uint64 unevictable = 15;
- uint64 hierarchical_memory_limit = 16;
- uint64 hierarchical_swap_limit = 17;
- uint64 total_cache = 18;
- uint64 total_rss = 19 [(gogoproto.customname) = "TotalRSS"];
- uint64 total_rss_huge = 20 [(gogoproto.customname) = "TotalRSSHuge"];
- uint64 total_mapped_file = 21;
- uint64 total_dirty = 22;
- uint64 total_writeback = 23;
- uint64 total_pg_pg_in = 24;
- uint64 total_pg_pg_out = 25;
- uint64 total_pg_fault = 26;
- uint64 total_pg_maj_fault = 27;
- uint64 total_inactive_anon = 28;
- uint64 total_active_anon = 29;
- uint64 total_inactive_file = 30;
- uint64 total_active_file = 31;
- uint64 total_unevictable = 32;
- MemoryEntry usage = 33;
- MemoryEntry swap = 34;
- MemoryEntry kernel = 35;
- MemoryEntry kernel_tcp = 36 [(gogoproto.customname) = "KernelTCP"];
-
-}
-
-message MemoryEntry {
- uint64 limit = 1;
- uint64 usage = 2;
- uint64 max = 3;
- uint64 failcnt = 4;
-}
-
-message MemoryOomControl {
- uint64 oom_kill_disable = 1;
- uint64 under_oom = 2;
- uint64 oom_kill = 3;
-}
-
-message BlkIOStat {
- repeated BlkIOEntry io_service_bytes_recursive = 1;
- repeated BlkIOEntry io_serviced_recursive = 2;
- repeated BlkIOEntry io_queued_recursive = 3;
- repeated BlkIOEntry io_service_time_recursive = 4;
- repeated BlkIOEntry io_wait_time_recursive = 5;
- repeated BlkIOEntry io_merged_recursive = 6;
- repeated BlkIOEntry io_time_recursive = 7;
- repeated BlkIOEntry sectors_recursive = 8;
-}
-
-message BlkIOEntry {
- string op = 1;
- string device = 2;
- uint64 major = 3;
- uint64 minor = 4;
- uint64 value = 5;
-}
-
-message RdmaStat {
- repeated RdmaEntry current = 1;
- repeated RdmaEntry limit = 2;
-}
-
-message RdmaEntry {
- string device = 1;
- uint32 hca_handles = 2;
- uint32 hca_objects = 3;
-}
-
-message NetworkStat {
- string name = 1;
- uint64 rx_bytes = 2;
- uint64 rx_packets = 3;
- uint64 rx_errors = 4;
- uint64 rx_dropped = 5;
- uint64 tx_bytes = 6;
- uint64 tx_packets = 7;
- uint64 tx_errors = 8;
- uint64 tx_dropped = 9;
-}
-
-// CgroupStats exports per-cgroup statistics.
-message CgroupStats {
- // number of tasks sleeping
- uint64 nr_sleeping = 1;
- // number of tasks running
- uint64 nr_running = 2;
- // number of tasks in stopped state
- uint64 nr_stopped = 3;
- // number of tasks in uninterruptible state
- uint64 nr_uninterruptible = 4;
- // number of tasks waiting on IO
- uint64 nr_io_wait = 5;
-}
diff --git a/tools/vendor/go.opentelemetry.io/otel/exporters/otlp/internal/retry/LICENSE b/tools/vendor/github.com/containerd/cgroups/v3/LICENSE
similarity index 100%
rename from tools/vendor/go.opentelemetry.io/otel/exporters/otlp/internal/retry/LICENSE
rename to tools/vendor/github.com/containerd/cgroups/v3/LICENSE
diff --git a/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/doc.go b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/doc.go
new file mode 100644
index 000000000..e51e12f80
--- /dev/null
+++ b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/doc.go
@@ -0,0 +1,17 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package stats
diff --git a/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.pb.go b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.pb.go
new file mode 100644
index 000000000..75206889b
--- /dev/null
+++ b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.pb.go
@@ -0,0 +1,1959 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.21.5
+// source: github.com/containerd/cgroups/cgroup1/stats/metrics.proto
+
+package stats
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type Metrics struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Hugetlb []*HugetlbStat `protobuf:"bytes,1,rep,name=hugetlb,proto3" json:"hugetlb,omitempty"`
+ Pids *PidsStat `protobuf:"bytes,2,opt,name=pids,proto3" json:"pids,omitempty"`
+ CPU *CPUStat `protobuf:"bytes,3,opt,name=cpu,proto3" json:"cpu,omitempty"`
+ Memory *MemoryStat `protobuf:"bytes,4,opt,name=memory,proto3" json:"memory,omitempty"`
+ Blkio *BlkIOStat `protobuf:"bytes,5,opt,name=blkio,proto3" json:"blkio,omitempty"`
+ Rdma *RdmaStat `protobuf:"bytes,6,opt,name=rdma,proto3" json:"rdma,omitempty"`
+ Network []*NetworkStat `protobuf:"bytes,7,rep,name=network,proto3" json:"network,omitempty"`
+ CgroupStats *CgroupStats `protobuf:"bytes,8,opt,name=cgroup_stats,json=cgroupStats,proto3" json:"cgroup_stats,omitempty"`
+ MemoryOomControl *MemoryOomControl `protobuf:"bytes,9,opt,name=memory_oom_control,json=memoryOomControl,proto3" json:"memory_oom_control,omitempty"`
+}
+
+func (x *Metrics) Reset() {
+ *x = Metrics{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Metrics) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Metrics) ProtoMessage() {}
+
+func (x *Metrics) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Metrics.ProtoReflect.Descriptor instead.
+func (*Metrics) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *Metrics) GetHugetlb() []*HugetlbStat {
+ if x != nil {
+ return x.Hugetlb
+ }
+ return nil
+}
+
+func (x *Metrics) GetPids() *PidsStat {
+ if x != nil {
+ return x.Pids
+ }
+ return nil
+}
+
+func (x *Metrics) GetCPU() *CPUStat {
+ if x != nil {
+ return x.CPU
+ }
+ return nil
+}
+
+func (x *Metrics) GetMemory() *MemoryStat {
+ if x != nil {
+ return x.Memory
+ }
+ return nil
+}
+
+func (x *Metrics) GetBlkio() *BlkIOStat {
+ if x != nil {
+ return x.Blkio
+ }
+ return nil
+}
+
+func (x *Metrics) GetRdma() *RdmaStat {
+ if x != nil {
+ return x.Rdma
+ }
+ return nil
+}
+
+func (x *Metrics) GetNetwork() []*NetworkStat {
+ if x != nil {
+ return x.Network
+ }
+ return nil
+}
+
+func (x *Metrics) GetCgroupStats() *CgroupStats {
+ if x != nil {
+ return x.CgroupStats
+ }
+ return nil
+}
+
+func (x *Metrics) GetMemoryOomControl() *MemoryOomControl {
+ if x != nil {
+ return x.MemoryOomControl
+ }
+ return nil
+}
+
+type HugetlbStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Usage uint64 `protobuf:"varint,1,opt,name=usage,proto3" json:"usage,omitempty"`
+ Max uint64 `protobuf:"varint,2,opt,name=max,proto3" json:"max,omitempty"`
+ Failcnt uint64 `protobuf:"varint,3,opt,name=failcnt,proto3" json:"failcnt,omitempty"`
+ Pagesize string `protobuf:"bytes,4,opt,name=pagesize,proto3" json:"pagesize,omitempty"`
+}
+
+func (x *HugetlbStat) Reset() {
+ *x = HugetlbStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *HugetlbStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*HugetlbStat) ProtoMessage() {}
+
+func (x *HugetlbStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use HugetlbStat.ProtoReflect.Descriptor instead.
+func (*HugetlbStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *HugetlbStat) GetUsage() uint64 {
+ if x != nil {
+ return x.Usage
+ }
+ return 0
+}
+
+func (x *HugetlbStat) GetMax() uint64 {
+ if x != nil {
+ return x.Max
+ }
+ return 0
+}
+
+func (x *HugetlbStat) GetFailcnt() uint64 {
+ if x != nil {
+ return x.Failcnt
+ }
+ return 0
+}
+
+func (x *HugetlbStat) GetPagesize() string {
+ if x != nil {
+ return x.Pagesize
+ }
+ return ""
+}
+
+type PidsStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Current uint64 `protobuf:"varint,1,opt,name=current,proto3" json:"current,omitempty"`
+ Limit uint64 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
+}
+
+func (x *PidsStat) Reset() {
+ *x = PidsStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *PidsStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*PidsStat) ProtoMessage() {}
+
+func (x *PidsStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use PidsStat.ProtoReflect.Descriptor instead.
+func (*PidsStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *PidsStat) GetCurrent() uint64 {
+ if x != nil {
+ return x.Current
+ }
+ return 0
+}
+
+func (x *PidsStat) GetLimit() uint64 {
+ if x != nil {
+ return x.Limit
+ }
+ return 0
+}
+
+type CPUStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Usage *CPUUsage `protobuf:"bytes,1,opt,name=usage,proto3" json:"usage,omitempty"`
+ Throttling *Throttle `protobuf:"bytes,2,opt,name=throttling,proto3" json:"throttling,omitempty"`
+}
+
+func (x *CPUStat) Reset() {
+ *x = CPUStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *CPUStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*CPUStat) ProtoMessage() {}
+
+func (x *CPUStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[3]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use CPUStat.ProtoReflect.Descriptor instead.
+func (*CPUStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{3}
+}
+
+func (x *CPUStat) GetUsage() *CPUUsage {
+ if x != nil {
+ return x.Usage
+ }
+ return nil
+}
+
+func (x *CPUStat) GetThrottling() *Throttle {
+ if x != nil {
+ return x.Throttling
+ }
+ return nil
+}
+
+type CPUUsage struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // values in nanoseconds
+ Total uint64 `protobuf:"varint,1,opt,name=total,proto3" json:"total,omitempty"`
+ Kernel uint64 `protobuf:"varint,2,opt,name=kernel,proto3" json:"kernel,omitempty"`
+ User uint64 `protobuf:"varint,3,opt,name=user,proto3" json:"user,omitempty"`
+ PerCPU []uint64 `protobuf:"varint,4,rep,packed,name=per_cpu,json=perCpu,proto3" json:"per_cpu,omitempty"`
+}
+
+func (x *CPUUsage) Reset() {
+ *x = CPUUsage{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[4]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *CPUUsage) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*CPUUsage) ProtoMessage() {}
+
+func (x *CPUUsage) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[4]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use CPUUsage.ProtoReflect.Descriptor instead.
+func (*CPUUsage) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{4}
+}
+
+func (x *CPUUsage) GetTotal() uint64 {
+ if x != nil {
+ return x.Total
+ }
+ return 0
+}
+
+func (x *CPUUsage) GetKernel() uint64 {
+ if x != nil {
+ return x.Kernel
+ }
+ return 0
+}
+
+func (x *CPUUsage) GetUser() uint64 {
+ if x != nil {
+ return x.User
+ }
+ return 0
+}
+
+func (x *CPUUsage) GetPerCPU() []uint64 {
+ if x != nil {
+ return x.PerCPU
+ }
+ return nil
+}
+
+type Throttle struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Periods uint64 `protobuf:"varint,1,opt,name=periods,proto3" json:"periods,omitempty"`
+ ThrottledPeriods uint64 `protobuf:"varint,2,opt,name=throttled_periods,json=throttledPeriods,proto3" json:"throttled_periods,omitempty"`
+ ThrottledTime uint64 `protobuf:"varint,3,opt,name=throttled_time,json=throttledTime,proto3" json:"throttled_time,omitempty"`
+}
+
+func (x *Throttle) Reset() {
+ *x = Throttle{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[5]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Throttle) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Throttle) ProtoMessage() {}
+
+func (x *Throttle) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[5]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Throttle.ProtoReflect.Descriptor instead.
+func (*Throttle) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{5}
+}
+
+func (x *Throttle) GetPeriods() uint64 {
+ if x != nil {
+ return x.Periods
+ }
+ return 0
+}
+
+func (x *Throttle) GetThrottledPeriods() uint64 {
+ if x != nil {
+ return x.ThrottledPeriods
+ }
+ return 0
+}
+
+func (x *Throttle) GetThrottledTime() uint64 {
+ if x != nil {
+ return x.ThrottledTime
+ }
+ return 0
+}
+
+type MemoryStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Cache uint64 `protobuf:"varint,1,opt,name=cache,proto3" json:"cache,omitempty"`
+ RSS uint64 `protobuf:"varint,2,opt,name=rss,proto3" json:"rss,omitempty"`
+ RSSHuge uint64 `protobuf:"varint,3,opt,name=rss_huge,json=rssHuge,proto3" json:"rss_huge,omitempty"`
+ MappedFile uint64 `protobuf:"varint,4,opt,name=mapped_file,json=mappedFile,proto3" json:"mapped_file,omitempty"`
+ Dirty uint64 `protobuf:"varint,5,opt,name=dirty,proto3" json:"dirty,omitempty"`
+ Writeback uint64 `protobuf:"varint,6,opt,name=writeback,proto3" json:"writeback,omitempty"`
+ PgPgIn uint64 `protobuf:"varint,7,opt,name=pg_pg_in,json=pgPgIn,proto3" json:"pg_pg_in,omitempty"`
+ PgPgOut uint64 `protobuf:"varint,8,opt,name=pg_pg_out,json=pgPgOut,proto3" json:"pg_pg_out,omitempty"`
+ PgFault uint64 `protobuf:"varint,9,opt,name=pg_fault,json=pgFault,proto3" json:"pg_fault,omitempty"`
+ PgMajFault uint64 `protobuf:"varint,10,opt,name=pg_maj_fault,json=pgMajFault,proto3" json:"pg_maj_fault,omitempty"`
+ InactiveAnon uint64 `protobuf:"varint,11,opt,name=inactive_anon,json=inactiveAnon,proto3" json:"inactive_anon,omitempty"`
+ ActiveAnon uint64 `protobuf:"varint,12,opt,name=active_anon,json=activeAnon,proto3" json:"active_anon,omitempty"`
+ InactiveFile uint64 `protobuf:"varint,13,opt,name=inactive_file,json=inactiveFile,proto3" json:"inactive_file,omitempty"`
+ ActiveFile uint64 `protobuf:"varint,14,opt,name=active_file,json=activeFile,proto3" json:"active_file,omitempty"`
+ Unevictable uint64 `protobuf:"varint,15,opt,name=unevictable,proto3" json:"unevictable,omitempty"`
+ HierarchicalMemoryLimit uint64 `protobuf:"varint,16,opt,name=hierarchical_memory_limit,json=hierarchicalMemoryLimit,proto3" json:"hierarchical_memory_limit,omitempty"`
+ HierarchicalSwapLimit uint64 `protobuf:"varint,17,opt,name=hierarchical_swap_limit,json=hierarchicalSwapLimit,proto3" json:"hierarchical_swap_limit,omitempty"`
+ TotalCache uint64 `protobuf:"varint,18,opt,name=total_cache,json=totalCache,proto3" json:"total_cache,omitempty"`
+ TotalRSS uint64 `protobuf:"varint,19,opt,name=total_rss,json=totalRss,proto3" json:"total_rss,omitempty"`
+ TotalRSSHuge uint64 `protobuf:"varint,20,opt,name=total_rss_huge,json=totalRssHuge,proto3" json:"total_rss_huge,omitempty"`
+ TotalMappedFile uint64 `protobuf:"varint,21,opt,name=total_mapped_file,json=totalMappedFile,proto3" json:"total_mapped_file,omitempty"`
+ TotalDirty uint64 `protobuf:"varint,22,opt,name=total_dirty,json=totalDirty,proto3" json:"total_dirty,omitempty"`
+ TotalWriteback uint64 `protobuf:"varint,23,opt,name=total_writeback,json=totalWriteback,proto3" json:"total_writeback,omitempty"`
+ TotalPgPgIn uint64 `protobuf:"varint,24,opt,name=total_pg_pg_in,json=totalPgPgIn,proto3" json:"total_pg_pg_in,omitempty"`
+ TotalPgPgOut uint64 `protobuf:"varint,25,opt,name=total_pg_pg_out,json=totalPgPgOut,proto3" json:"total_pg_pg_out,omitempty"`
+ TotalPgFault uint64 `protobuf:"varint,26,opt,name=total_pg_fault,json=totalPgFault,proto3" json:"total_pg_fault,omitempty"`
+ TotalPgMajFault uint64 `protobuf:"varint,27,opt,name=total_pg_maj_fault,json=totalPgMajFault,proto3" json:"total_pg_maj_fault,omitempty"`
+ TotalInactiveAnon uint64 `protobuf:"varint,28,opt,name=total_inactive_anon,json=totalInactiveAnon,proto3" json:"total_inactive_anon,omitempty"`
+ TotalActiveAnon uint64 `protobuf:"varint,29,opt,name=total_active_anon,json=totalActiveAnon,proto3" json:"total_active_anon,omitempty"`
+ TotalInactiveFile uint64 `protobuf:"varint,30,opt,name=total_inactive_file,json=totalInactiveFile,proto3" json:"total_inactive_file,omitempty"`
+ TotalActiveFile uint64 `protobuf:"varint,31,opt,name=total_active_file,json=totalActiveFile,proto3" json:"total_active_file,omitempty"`
+ TotalUnevictable uint64 `protobuf:"varint,32,opt,name=total_unevictable,json=totalUnevictable,proto3" json:"total_unevictable,omitempty"`
+ Usage *MemoryEntry `protobuf:"bytes,33,opt,name=usage,proto3" json:"usage,omitempty"`
+ Swap *MemoryEntry `protobuf:"bytes,34,opt,name=swap,proto3" json:"swap,omitempty"`
+ Kernel *MemoryEntry `protobuf:"bytes,35,opt,name=kernel,proto3" json:"kernel,omitempty"`
+ KernelTCP *MemoryEntry `protobuf:"bytes,36,opt,name=kernel_tcp,json=kernelTcp,proto3" json:"kernel_tcp,omitempty"`
+}
+
+func (x *MemoryStat) Reset() {
+ *x = MemoryStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[6]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *MemoryStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*MemoryStat) ProtoMessage() {}
+
+func (x *MemoryStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[6]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use MemoryStat.ProtoReflect.Descriptor instead.
+func (*MemoryStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{6}
+}
+
+func (x *MemoryStat) GetCache() uint64 {
+ if x != nil {
+ return x.Cache
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetRSS() uint64 {
+ if x != nil {
+ return x.RSS
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetRSSHuge() uint64 {
+ if x != nil {
+ return x.RSSHuge
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetMappedFile() uint64 {
+ if x != nil {
+ return x.MappedFile
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetDirty() uint64 {
+ if x != nil {
+ return x.Dirty
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetWriteback() uint64 {
+ if x != nil {
+ return x.Writeback
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetPgPgIn() uint64 {
+ if x != nil {
+ return x.PgPgIn
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetPgPgOut() uint64 {
+ if x != nil {
+ return x.PgPgOut
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetPgFault() uint64 {
+ if x != nil {
+ return x.PgFault
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetPgMajFault() uint64 {
+ if x != nil {
+ return x.PgMajFault
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetInactiveAnon() uint64 {
+ if x != nil {
+ return x.InactiveAnon
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetActiveAnon() uint64 {
+ if x != nil {
+ return x.ActiveAnon
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetInactiveFile() uint64 {
+ if x != nil {
+ return x.InactiveFile
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetActiveFile() uint64 {
+ if x != nil {
+ return x.ActiveFile
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetUnevictable() uint64 {
+ if x != nil {
+ return x.Unevictable
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetHierarchicalMemoryLimit() uint64 {
+ if x != nil {
+ return x.HierarchicalMemoryLimit
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetHierarchicalSwapLimit() uint64 {
+ if x != nil {
+ return x.HierarchicalSwapLimit
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalCache() uint64 {
+ if x != nil {
+ return x.TotalCache
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalRSS() uint64 {
+ if x != nil {
+ return x.TotalRSS
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalRSSHuge() uint64 {
+ if x != nil {
+ return x.TotalRSSHuge
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalMappedFile() uint64 {
+ if x != nil {
+ return x.TotalMappedFile
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalDirty() uint64 {
+ if x != nil {
+ return x.TotalDirty
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalWriteback() uint64 {
+ if x != nil {
+ return x.TotalWriteback
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalPgPgIn() uint64 {
+ if x != nil {
+ return x.TotalPgPgIn
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalPgPgOut() uint64 {
+ if x != nil {
+ return x.TotalPgPgOut
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalPgFault() uint64 {
+ if x != nil {
+ return x.TotalPgFault
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalPgMajFault() uint64 {
+ if x != nil {
+ return x.TotalPgMajFault
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalInactiveAnon() uint64 {
+ if x != nil {
+ return x.TotalInactiveAnon
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalActiveAnon() uint64 {
+ if x != nil {
+ return x.TotalActiveAnon
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalInactiveFile() uint64 {
+ if x != nil {
+ return x.TotalInactiveFile
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalActiveFile() uint64 {
+ if x != nil {
+ return x.TotalActiveFile
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetTotalUnevictable() uint64 {
+ if x != nil {
+ return x.TotalUnevictable
+ }
+ return 0
+}
+
+func (x *MemoryStat) GetUsage() *MemoryEntry {
+ if x != nil {
+ return x.Usage
+ }
+ return nil
+}
+
+func (x *MemoryStat) GetSwap() *MemoryEntry {
+ if x != nil {
+ return x.Swap
+ }
+ return nil
+}
+
+func (x *MemoryStat) GetKernel() *MemoryEntry {
+ if x != nil {
+ return x.Kernel
+ }
+ return nil
+}
+
+func (x *MemoryStat) GetKernelTCP() *MemoryEntry {
+ if x != nil {
+ return x.KernelTCP
+ }
+ return nil
+}
+
+type MemoryEntry struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Limit uint64 `protobuf:"varint,1,opt,name=limit,proto3" json:"limit,omitempty"`
+ Usage uint64 `protobuf:"varint,2,opt,name=usage,proto3" json:"usage,omitempty"`
+ Max uint64 `protobuf:"varint,3,opt,name=max,proto3" json:"max,omitempty"`
+ Failcnt uint64 `protobuf:"varint,4,opt,name=failcnt,proto3" json:"failcnt,omitempty"`
+}
+
+func (x *MemoryEntry) Reset() {
+ *x = MemoryEntry{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[7]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *MemoryEntry) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*MemoryEntry) ProtoMessage() {}
+
+func (x *MemoryEntry) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[7]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use MemoryEntry.ProtoReflect.Descriptor instead.
+func (*MemoryEntry) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{7}
+}
+
+func (x *MemoryEntry) GetLimit() uint64 {
+ if x != nil {
+ return x.Limit
+ }
+ return 0
+}
+
+func (x *MemoryEntry) GetUsage() uint64 {
+ if x != nil {
+ return x.Usage
+ }
+ return 0
+}
+
+func (x *MemoryEntry) GetMax() uint64 {
+ if x != nil {
+ return x.Max
+ }
+ return 0
+}
+
+func (x *MemoryEntry) GetFailcnt() uint64 {
+ if x != nil {
+ return x.Failcnt
+ }
+ return 0
+}
+
+type MemoryOomControl struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ OomKillDisable uint64 `protobuf:"varint,1,opt,name=oom_kill_disable,json=oomKillDisable,proto3" json:"oom_kill_disable,omitempty"`
+ UnderOom uint64 `protobuf:"varint,2,opt,name=under_oom,json=underOom,proto3" json:"under_oom,omitempty"`
+ OomKill uint64 `protobuf:"varint,3,opt,name=oom_kill,json=oomKill,proto3" json:"oom_kill,omitempty"`
+}
+
+func (x *MemoryOomControl) Reset() {
+ *x = MemoryOomControl{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[8]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *MemoryOomControl) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*MemoryOomControl) ProtoMessage() {}
+
+func (x *MemoryOomControl) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[8]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use MemoryOomControl.ProtoReflect.Descriptor instead.
+func (*MemoryOomControl) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{8}
+}
+
+func (x *MemoryOomControl) GetOomKillDisable() uint64 {
+ if x != nil {
+ return x.OomKillDisable
+ }
+ return 0
+}
+
+func (x *MemoryOomControl) GetUnderOom() uint64 {
+ if x != nil {
+ return x.UnderOom
+ }
+ return 0
+}
+
+func (x *MemoryOomControl) GetOomKill() uint64 {
+ if x != nil {
+ return x.OomKill
+ }
+ return 0
+}
+
+type BlkIOStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ IoServiceBytesRecursive []*BlkIOEntry `protobuf:"bytes,1,rep,name=io_service_bytes_recursive,json=ioServiceBytesRecursive,proto3" json:"io_service_bytes_recursive,omitempty"`
+ IoServicedRecursive []*BlkIOEntry `protobuf:"bytes,2,rep,name=io_serviced_recursive,json=ioServicedRecursive,proto3" json:"io_serviced_recursive,omitempty"`
+ IoQueuedRecursive []*BlkIOEntry `protobuf:"bytes,3,rep,name=io_queued_recursive,json=ioQueuedRecursive,proto3" json:"io_queued_recursive,omitempty"`
+ IoServiceTimeRecursive []*BlkIOEntry `protobuf:"bytes,4,rep,name=io_service_time_recursive,json=ioServiceTimeRecursive,proto3" json:"io_service_time_recursive,omitempty"`
+ IoWaitTimeRecursive []*BlkIOEntry `protobuf:"bytes,5,rep,name=io_wait_time_recursive,json=ioWaitTimeRecursive,proto3" json:"io_wait_time_recursive,omitempty"`
+ IoMergedRecursive []*BlkIOEntry `protobuf:"bytes,6,rep,name=io_merged_recursive,json=ioMergedRecursive,proto3" json:"io_merged_recursive,omitempty"`
+ IoTimeRecursive []*BlkIOEntry `protobuf:"bytes,7,rep,name=io_time_recursive,json=ioTimeRecursive,proto3" json:"io_time_recursive,omitempty"`
+ SectorsRecursive []*BlkIOEntry `protobuf:"bytes,8,rep,name=sectors_recursive,json=sectorsRecursive,proto3" json:"sectors_recursive,omitempty"`
+}
+
+func (x *BlkIOStat) Reset() {
+ *x = BlkIOStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[9]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *BlkIOStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*BlkIOStat) ProtoMessage() {}
+
+func (x *BlkIOStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[9]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use BlkIOStat.ProtoReflect.Descriptor instead.
+func (*BlkIOStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{9}
+}
+
+func (x *BlkIOStat) GetIoServiceBytesRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoServiceBytesRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetIoServicedRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoServicedRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetIoQueuedRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoQueuedRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetIoServiceTimeRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoServiceTimeRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetIoWaitTimeRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoWaitTimeRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetIoMergedRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoMergedRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetIoTimeRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.IoTimeRecursive
+ }
+ return nil
+}
+
+func (x *BlkIOStat) GetSectorsRecursive() []*BlkIOEntry {
+ if x != nil {
+ return x.SectorsRecursive
+ }
+ return nil
+}
+
+type BlkIOEntry struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Op string `protobuf:"bytes,1,opt,name=op,proto3" json:"op,omitempty"`
+ Device string `protobuf:"bytes,2,opt,name=device,proto3" json:"device,omitempty"`
+ Major uint64 `protobuf:"varint,3,opt,name=major,proto3" json:"major,omitempty"`
+ Minor uint64 `protobuf:"varint,4,opt,name=minor,proto3" json:"minor,omitempty"`
+ Value uint64 `protobuf:"varint,5,opt,name=value,proto3" json:"value,omitempty"`
+}
+
+func (x *BlkIOEntry) Reset() {
+ *x = BlkIOEntry{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[10]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *BlkIOEntry) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*BlkIOEntry) ProtoMessage() {}
+
+func (x *BlkIOEntry) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[10]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use BlkIOEntry.ProtoReflect.Descriptor instead.
+func (*BlkIOEntry) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{10}
+}
+
+func (x *BlkIOEntry) GetOp() string {
+ if x != nil {
+ return x.Op
+ }
+ return ""
+}
+
+func (x *BlkIOEntry) GetDevice() string {
+ if x != nil {
+ return x.Device
+ }
+ return ""
+}
+
+func (x *BlkIOEntry) GetMajor() uint64 {
+ if x != nil {
+ return x.Major
+ }
+ return 0
+}
+
+func (x *BlkIOEntry) GetMinor() uint64 {
+ if x != nil {
+ return x.Minor
+ }
+ return 0
+}
+
+func (x *BlkIOEntry) GetValue() uint64 {
+ if x != nil {
+ return x.Value
+ }
+ return 0
+}
+
+type RdmaStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Current []*RdmaEntry `protobuf:"bytes,1,rep,name=current,proto3" json:"current,omitempty"`
+ Limit []*RdmaEntry `protobuf:"bytes,2,rep,name=limit,proto3" json:"limit,omitempty"`
+}
+
+func (x *RdmaStat) Reset() {
+ *x = RdmaStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[11]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *RdmaStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*RdmaStat) ProtoMessage() {}
+
+func (x *RdmaStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[11]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use RdmaStat.ProtoReflect.Descriptor instead.
+func (*RdmaStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{11}
+}
+
+func (x *RdmaStat) GetCurrent() []*RdmaEntry {
+ if x != nil {
+ return x.Current
+ }
+ return nil
+}
+
+func (x *RdmaStat) GetLimit() []*RdmaEntry {
+ if x != nil {
+ return x.Limit
+ }
+ return nil
+}
+
+type RdmaEntry struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Device string `protobuf:"bytes,1,opt,name=device,proto3" json:"device,omitempty"`
+ HcaHandles uint32 `protobuf:"varint,2,opt,name=hca_handles,json=hcaHandles,proto3" json:"hca_handles,omitempty"`
+ HcaObjects uint32 `protobuf:"varint,3,opt,name=hca_objects,json=hcaObjects,proto3" json:"hca_objects,omitempty"`
+}
+
+func (x *RdmaEntry) Reset() {
+ *x = RdmaEntry{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[12]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *RdmaEntry) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*RdmaEntry) ProtoMessage() {}
+
+func (x *RdmaEntry) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[12]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use RdmaEntry.ProtoReflect.Descriptor instead.
+func (*RdmaEntry) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{12}
+}
+
+func (x *RdmaEntry) GetDevice() string {
+ if x != nil {
+ return x.Device
+ }
+ return ""
+}
+
+func (x *RdmaEntry) GetHcaHandles() uint32 {
+ if x != nil {
+ return x.HcaHandles
+ }
+ return 0
+}
+
+func (x *RdmaEntry) GetHcaObjects() uint32 {
+ if x != nil {
+ return x.HcaObjects
+ }
+ return 0
+}
+
+type NetworkStat struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ RxBytes uint64 `protobuf:"varint,2,opt,name=rx_bytes,json=rxBytes,proto3" json:"rx_bytes,omitempty"`
+ RxPackets uint64 `protobuf:"varint,3,opt,name=rx_packets,json=rxPackets,proto3" json:"rx_packets,omitempty"`
+ RxErrors uint64 `protobuf:"varint,4,opt,name=rx_errors,json=rxErrors,proto3" json:"rx_errors,omitempty"`
+ RxDropped uint64 `protobuf:"varint,5,opt,name=rx_dropped,json=rxDropped,proto3" json:"rx_dropped,omitempty"`
+ TxBytes uint64 `protobuf:"varint,6,opt,name=tx_bytes,json=txBytes,proto3" json:"tx_bytes,omitempty"`
+ TxPackets uint64 `protobuf:"varint,7,opt,name=tx_packets,json=txPackets,proto3" json:"tx_packets,omitempty"`
+ TxErrors uint64 `protobuf:"varint,8,opt,name=tx_errors,json=txErrors,proto3" json:"tx_errors,omitempty"`
+ TxDropped uint64 `protobuf:"varint,9,opt,name=tx_dropped,json=txDropped,proto3" json:"tx_dropped,omitempty"`
+}
+
+func (x *NetworkStat) Reset() {
+ *x = NetworkStat{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[13]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *NetworkStat) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*NetworkStat) ProtoMessage() {}
+
+func (x *NetworkStat) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[13]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use NetworkStat.ProtoReflect.Descriptor instead.
+func (*NetworkStat) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{13}
+}
+
+func (x *NetworkStat) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *NetworkStat) GetRxBytes() uint64 {
+ if x != nil {
+ return x.RxBytes
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetRxPackets() uint64 {
+ if x != nil {
+ return x.RxPackets
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetRxErrors() uint64 {
+ if x != nil {
+ return x.RxErrors
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetRxDropped() uint64 {
+ if x != nil {
+ return x.RxDropped
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetTxBytes() uint64 {
+ if x != nil {
+ return x.TxBytes
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetTxPackets() uint64 {
+ if x != nil {
+ return x.TxPackets
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetTxErrors() uint64 {
+ if x != nil {
+ return x.TxErrors
+ }
+ return 0
+}
+
+func (x *NetworkStat) GetTxDropped() uint64 {
+ if x != nil {
+ return x.TxDropped
+ }
+ return 0
+}
+
+// CgroupStats exports per-cgroup statistics.
+type CgroupStats struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // number of tasks sleeping
+ NrSleeping uint64 `protobuf:"varint,1,opt,name=nr_sleeping,json=nrSleeping,proto3" json:"nr_sleeping,omitempty"`
+ // number of tasks running
+ NrRunning uint64 `protobuf:"varint,2,opt,name=nr_running,json=nrRunning,proto3" json:"nr_running,omitempty"`
+ // number of tasks in stopped state
+ NrStopped uint64 `protobuf:"varint,3,opt,name=nr_stopped,json=nrStopped,proto3" json:"nr_stopped,omitempty"`
+ // number of tasks in uninterruptible state
+ NrUninterruptible uint64 `protobuf:"varint,4,opt,name=nr_uninterruptible,json=nrUninterruptible,proto3" json:"nr_uninterruptible,omitempty"`
+ // number of tasks waiting on IO
+ NrIoWait uint64 `protobuf:"varint,5,opt,name=nr_io_wait,json=nrIoWait,proto3" json:"nr_io_wait,omitempty"`
+}
+
+func (x *CgroupStats) Reset() {
+ *x = CgroupStats{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[14]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *CgroupStats) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*CgroupStats) ProtoMessage() {}
+
+func (x *CgroupStats) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[14]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use CgroupStats.ProtoReflect.Descriptor instead.
+func (*CgroupStats) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP(), []int{14}
+}
+
+func (x *CgroupStats) GetNrSleeping() uint64 {
+ if x != nil {
+ return x.NrSleeping
+ }
+ return 0
+}
+
+func (x *CgroupStats) GetNrRunning() uint64 {
+ if x != nil {
+ return x.NrRunning
+ }
+ return 0
+}
+
+func (x *CgroupStats) GetNrStopped() uint64 {
+ if x != nil {
+ return x.NrStopped
+ }
+ return 0
+}
+
+func (x *CgroupStats) GetNrUninterruptible() uint64 {
+ if x != nil {
+ return x.NrUninterruptible
+ }
+ return 0
+}
+
+func (x *CgroupStats) GetNrIoWait() uint64 {
+ if x != nil {
+ return x.NrIoWait
+ }
+ return 0
+}
+
+var File_github_com_containerd_cgroups_cgroup1_stats_metrics_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDesc = []byte{
+ 0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2f,
+ 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x31, 0x2f, 0x73, 0x74, 0x61, 0x74, 0x73, 0x2f, 0x6d, 0x65,
+ 0x74, 0x72, 0x69, 0x63, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x18, 0x69, 0x6f, 0x2e,
+ 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75,
+ 0x70, 0x73, 0x2e, 0x76, 0x31, 0x22, 0xcd, 0x04, 0x0a, 0x07, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63,
+ 0x73, 0x12, 0x3f, 0x0a, 0x07, 0x68, 0x75, 0x67, 0x65, 0x74, 0x6c, 0x62, 0x18, 0x01, 0x20, 0x03,
+ 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x48, 0x75,
+ 0x67, 0x65, 0x74, 0x6c, 0x62, 0x53, 0x74, 0x61, 0x74, 0x52, 0x07, 0x68, 0x75, 0x67, 0x65, 0x74,
+ 0x6c, 0x62, 0x12, 0x36, 0x0a, 0x04, 0x70, 0x69, 0x64, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
+ 0x32, 0x22, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x64, 0x73,
+ 0x53, 0x74, 0x61, 0x74, 0x52, 0x04, 0x70, 0x69, 0x64, 0x73, 0x12, 0x33, 0x0a, 0x03, 0x63, 0x70,
+ 0x75, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e,
+ 0x76, 0x31, 0x2e, 0x43, 0x50, 0x55, 0x53, 0x74, 0x61, 0x74, 0x52, 0x03, 0x63, 0x70, 0x75, 0x12,
+ 0x3c, 0x0a, 0x06, 0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32,
+ 0x24, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e,
+ 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x6d, 0x6f, 0x72,
+ 0x79, 0x53, 0x74, 0x61, 0x74, 0x52, 0x06, 0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x12, 0x39, 0x0a,
+ 0x05, 0x62, 0x6c, 0x6b, 0x69, 0x6f, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x69,
+ 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72,
+ 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x53, 0x74, 0x61,
+ 0x74, 0x52, 0x05, 0x62, 0x6c, 0x6b, 0x69, 0x6f, 0x12, 0x36, 0x0a, 0x04, 0x72, 0x64, 0x6d, 0x61,
+ 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76,
+ 0x31, 0x2e, 0x52, 0x64, 0x6d, 0x61, 0x53, 0x74, 0x61, 0x74, 0x52, 0x04, 0x72, 0x64, 0x6d, 0x61,
+ 0x12, 0x3f, 0x0a, 0x07, 0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x18, 0x07, 0x20, 0x03, 0x28,
+ 0x0b, 0x32, 0x25, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4e, 0x65, 0x74,
+ 0x77, 0x6f, 0x72, 0x6b, 0x53, 0x74, 0x61, 0x74, 0x52, 0x07, 0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72,
+ 0x6b, 0x12, 0x48, 0x0a, 0x0c, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x5f, 0x73, 0x74, 0x61, 0x74,
+ 0x73, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e,
+ 0x76, 0x31, 0x2e, 0x43, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x53, 0x74, 0x61, 0x74, 0x73, 0x52, 0x0b,
+ 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x53, 0x74, 0x61, 0x74, 0x73, 0x12, 0x58, 0x0a, 0x12, 0x6d,
+ 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x5f, 0x6f, 0x6f, 0x6d, 0x5f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f,
+ 0x6c, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2a, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e,
+ 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x4f, 0x6f, 0x6d, 0x43, 0x6f, 0x6e, 0x74,
+ 0x72, 0x6f, 0x6c, 0x52, 0x10, 0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x4f, 0x6f, 0x6d, 0x43, 0x6f,
+ 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x22, 0x6b, 0x0a, 0x0b, 0x48, 0x75, 0x67, 0x65, 0x74, 0x6c, 0x62,
+ 0x53, 0x74, 0x61, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x75, 0x73, 0x61, 0x67, 0x65, 0x18, 0x01, 0x20,
+ 0x01, 0x28, 0x04, 0x52, 0x05, 0x75, 0x73, 0x61, 0x67, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x6d, 0x61,
+ 0x78, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x6d, 0x61, 0x78, 0x12, 0x18, 0x0a, 0x07,
+ 0x66, 0x61, 0x69, 0x6c, 0x63, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x66,
+ 0x61, 0x69, 0x6c, 0x63, 0x6e, 0x74, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x61, 0x67, 0x65, 0x73, 0x69,
+ 0x7a, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x61, 0x67, 0x65, 0x73, 0x69,
+ 0x7a, 0x65, 0x22, 0x3a, 0x0a, 0x08, 0x50, 0x69, 0x64, 0x73, 0x53, 0x74, 0x61, 0x74, 0x12, 0x18,
+ 0x0a, 0x07, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52,
+ 0x07, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x6c, 0x69, 0x6d, 0x69,
+ 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x22, 0x87,
+ 0x01, 0x0a, 0x07, 0x43, 0x50, 0x55, 0x53, 0x74, 0x61, 0x74, 0x12, 0x38, 0x0a, 0x05, 0x75, 0x73,
+ 0x61, 0x67, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x69, 0x6f, 0x2e, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70,
+ 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x50, 0x55, 0x55, 0x73, 0x61, 0x67, 0x65, 0x52, 0x05, 0x75,
+ 0x73, 0x61, 0x67, 0x65, 0x12, 0x42, 0x0a, 0x0a, 0x74, 0x68, 0x72, 0x6f, 0x74, 0x74, 0x6c, 0x69,
+ 0x6e, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73,
+ 0x2e, 0x76, 0x31, 0x2e, 0x54, 0x68, 0x72, 0x6f, 0x74, 0x74, 0x6c, 0x65, 0x52, 0x0a, 0x74, 0x68,
+ 0x72, 0x6f, 0x74, 0x74, 0x6c, 0x69, 0x6e, 0x67, 0x22, 0x65, 0x0a, 0x08, 0x43, 0x50, 0x55, 0x55,
+ 0x73, 0x61, 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x18, 0x01, 0x20,
+ 0x01, 0x28, 0x04, 0x52, 0x05, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x12, 0x16, 0x0a, 0x06, 0x6b, 0x65,
+ 0x72, 0x6e, 0x65, 0x6c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x06, 0x6b, 0x65, 0x72, 0x6e,
+ 0x65, 0x6c, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x73, 0x65, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x04, 0x75, 0x73, 0x65, 0x72, 0x12, 0x17, 0x0a, 0x07, 0x70, 0x65, 0x72, 0x5f, 0x63, 0x70,
+ 0x75, 0x18, 0x04, 0x20, 0x03, 0x28, 0x04, 0x52, 0x06, 0x70, 0x65, 0x72, 0x43, 0x70, 0x75, 0x22,
+ 0x78, 0x0a, 0x08, 0x54, 0x68, 0x72, 0x6f, 0x74, 0x74, 0x6c, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70,
+ 0x65, 0x72, 0x69, 0x6f, 0x64, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x70, 0x65,
+ 0x72, 0x69, 0x6f, 0x64, 0x73, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x68, 0x72, 0x6f, 0x74, 0x74, 0x6c,
+ 0x65, 0x64, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x10, 0x74, 0x68, 0x72, 0x6f, 0x74, 0x74, 0x6c, 0x65, 0x64, 0x50, 0x65, 0x72, 0x69, 0x6f,
+ 0x64, 0x73, 0x12, 0x25, 0x0a, 0x0e, 0x74, 0x68, 0x72, 0x6f, 0x74, 0x74, 0x6c, 0x65, 0x64, 0x5f,
+ 0x74, 0x69, 0x6d, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0d, 0x74, 0x68, 0x72, 0x6f,
+ 0x74, 0x74, 0x6c, 0x65, 0x64, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x94, 0x0b, 0x0a, 0x0a, 0x4d, 0x65,
+ 0x6d, 0x6f, 0x72, 0x79, 0x53, 0x74, 0x61, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x61, 0x63, 0x68,
+ 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x61, 0x63, 0x68, 0x65, 0x12, 0x10,
+ 0x0a, 0x03, 0x72, 0x73, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x72, 0x73, 0x73,
+ 0x12, 0x19, 0x0a, 0x08, 0x72, 0x73, 0x73, 0x5f, 0x68, 0x75, 0x67, 0x65, 0x18, 0x03, 0x20, 0x01,
+ 0x28, 0x04, 0x52, 0x07, 0x72, 0x73, 0x73, 0x48, 0x75, 0x67, 0x65, 0x12, 0x1f, 0x0a, 0x0b, 0x6d,
+ 0x61, 0x70, 0x70, 0x65, 0x64, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x0a, 0x6d, 0x61, 0x70, 0x70, 0x65, 0x64, 0x46, 0x69, 0x6c, 0x65, 0x12, 0x14, 0x0a, 0x05,
+ 0x64, 0x69, 0x72, 0x74, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x64, 0x69, 0x72,
+ 0x74, 0x79, 0x12, 0x1c, 0x0a, 0x09, 0x77, 0x72, 0x69, 0x74, 0x65, 0x62, 0x61, 0x63, 0x6b, 0x18,
+ 0x06, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x77, 0x72, 0x69, 0x74, 0x65, 0x62, 0x61, 0x63, 0x6b,
+ 0x12, 0x18, 0x0a, 0x08, 0x70, 0x67, 0x5f, 0x70, 0x67, 0x5f, 0x69, 0x6e, 0x18, 0x07, 0x20, 0x01,
+ 0x28, 0x04, 0x52, 0x06, 0x70, 0x67, 0x50, 0x67, 0x49, 0x6e, 0x12, 0x1a, 0x0a, 0x09, 0x70, 0x67,
+ 0x5f, 0x70, 0x67, 0x5f, 0x6f, 0x75, 0x74, 0x18, 0x08, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x70,
+ 0x67, 0x50, 0x67, 0x4f, 0x75, 0x74, 0x12, 0x19, 0x0a, 0x08, 0x70, 0x67, 0x5f, 0x66, 0x61, 0x75,
+ 0x6c, 0x74, 0x18, 0x09, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x70, 0x67, 0x46, 0x61, 0x75, 0x6c,
+ 0x74, 0x12, 0x20, 0x0a, 0x0c, 0x70, 0x67, 0x5f, 0x6d, 0x61, 0x6a, 0x5f, 0x66, 0x61, 0x75, 0x6c,
+ 0x74, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0a, 0x70, 0x67, 0x4d, 0x61, 0x6a, 0x46, 0x61,
+ 0x75, 0x6c, 0x74, 0x12, 0x23, 0x0a, 0x0d, 0x69, 0x6e, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f,
+ 0x61, 0x6e, 0x6f, 0x6e, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0c, 0x69, 0x6e, 0x61, 0x63,
+ 0x74, 0x69, 0x76, 0x65, 0x41, 0x6e, 0x6f, 0x6e, 0x12, 0x1f, 0x0a, 0x0b, 0x61, 0x63, 0x74, 0x69,
+ 0x76, 0x65, 0x5f, 0x61, 0x6e, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0a, 0x61,
+ 0x63, 0x74, 0x69, 0x76, 0x65, 0x41, 0x6e, 0x6f, 0x6e, 0x12, 0x23, 0x0a, 0x0d, 0x69, 0x6e, 0x61,
+ 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x0c, 0x69, 0x6e, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x12, 0x1f,
+ 0x0a, 0x0b, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18, 0x0e, 0x20,
+ 0x01, 0x28, 0x04, 0x52, 0x0a, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x12,
+ 0x20, 0x0a, 0x0b, 0x75, 0x6e, 0x65, 0x76, 0x69, 0x63, 0x74, 0x61, 0x62, 0x6c, 0x65, 0x18, 0x0f,
+ 0x20, 0x01, 0x28, 0x04, 0x52, 0x0b, 0x75, 0x6e, 0x65, 0x76, 0x69, 0x63, 0x74, 0x61, 0x62, 0x6c,
+ 0x65, 0x12, 0x3a, 0x0a, 0x19, 0x68, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x69, 0x63, 0x61,
+ 0x6c, 0x5f, 0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x5f, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18, 0x10,
+ 0x20, 0x01, 0x28, 0x04, 0x52, 0x17, 0x68, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x69, 0x63,
+ 0x61, 0x6c, 0x4d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x4c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x36, 0x0a,
+ 0x17, 0x68, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x69, 0x63, 0x61, 0x6c, 0x5f, 0x73, 0x77,
+ 0x61, 0x70, 0x5f, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18, 0x11, 0x20, 0x01, 0x28, 0x04, 0x52, 0x15,
+ 0x68, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x69, 0x63, 0x61, 0x6c, 0x53, 0x77, 0x61, 0x70,
+ 0x4c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x1f, 0x0a, 0x0b, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x63,
+ 0x61, 0x63, 0x68, 0x65, 0x18, 0x12, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0a, 0x74, 0x6f, 0x74, 0x61,
+ 0x6c, 0x43, 0x61, 0x63, 0x68, 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f,
+ 0x72, 0x73, 0x73, 0x18, 0x13, 0x20, 0x01, 0x28, 0x04, 0x52, 0x08, 0x74, 0x6f, 0x74, 0x61, 0x6c,
+ 0x52, 0x73, 0x73, 0x12, 0x24, 0x0a, 0x0e, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x72, 0x73, 0x73,
+ 0x5f, 0x68, 0x75, 0x67, 0x65, 0x18, 0x14, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0c, 0x74, 0x6f, 0x74,
+ 0x61, 0x6c, 0x52, 0x73, 0x73, 0x48, 0x75, 0x67, 0x65, 0x12, 0x2a, 0x0a, 0x11, 0x74, 0x6f, 0x74,
+ 0x61, 0x6c, 0x5f, 0x6d, 0x61, 0x70, 0x70, 0x65, 0x64, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18, 0x15,
+ 0x20, 0x01, 0x28, 0x04, 0x52, 0x0f, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x4d, 0x61, 0x70, 0x70, 0x65,
+ 0x64, 0x46, 0x69, 0x6c, 0x65, 0x12, 0x1f, 0x0a, 0x0b, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x64,
+ 0x69, 0x72, 0x74, 0x79, 0x18, 0x16, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0a, 0x74, 0x6f, 0x74, 0x61,
+ 0x6c, 0x44, 0x69, 0x72, 0x74, 0x79, 0x12, 0x27, 0x0a, 0x0f, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f,
+ 0x77, 0x72, 0x69, 0x74, 0x65, 0x62, 0x61, 0x63, 0x6b, 0x18, 0x17, 0x20, 0x01, 0x28, 0x04, 0x52,
+ 0x0e, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x57, 0x72, 0x69, 0x74, 0x65, 0x62, 0x61, 0x63, 0x6b, 0x12,
+ 0x23, 0x0a, 0x0e, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x70, 0x67, 0x5f, 0x70, 0x67, 0x5f, 0x69,
+ 0x6e, 0x18, 0x18, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0b, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x50, 0x67,
+ 0x50, 0x67, 0x49, 0x6e, 0x12, 0x25, 0x0a, 0x0f, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x70, 0x67,
+ 0x5f, 0x70, 0x67, 0x5f, 0x6f, 0x75, 0x74, 0x18, 0x19, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0c, 0x74,
+ 0x6f, 0x74, 0x61, 0x6c, 0x50, 0x67, 0x50, 0x67, 0x4f, 0x75, 0x74, 0x12, 0x24, 0x0a, 0x0e, 0x74,
+ 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x70, 0x67, 0x5f, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x18, 0x1a, 0x20,
+ 0x01, 0x28, 0x04, 0x52, 0x0c, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x50, 0x67, 0x46, 0x61, 0x75, 0x6c,
+ 0x74, 0x12, 0x2b, 0x0a, 0x12, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x70, 0x67, 0x5f, 0x6d, 0x61,
+ 0x6a, 0x5f, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x18, 0x1b, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0f, 0x74,
+ 0x6f, 0x74, 0x61, 0x6c, 0x50, 0x67, 0x4d, 0x61, 0x6a, 0x46, 0x61, 0x75, 0x6c, 0x74, 0x12, 0x2e,
+ 0x0a, 0x13, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x69, 0x6e, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65,
+ 0x5f, 0x61, 0x6e, 0x6f, 0x6e, 0x18, 0x1c, 0x20, 0x01, 0x28, 0x04, 0x52, 0x11, 0x74, 0x6f, 0x74,
+ 0x61, 0x6c, 0x49, 0x6e, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x41, 0x6e, 0x6f, 0x6e, 0x12, 0x2a,
+ 0x0a, 0x11, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x61,
+ 0x6e, 0x6f, 0x6e, 0x18, 0x1d, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0f, 0x74, 0x6f, 0x74, 0x61, 0x6c,
+ 0x41, 0x63, 0x74, 0x69, 0x76, 0x65, 0x41, 0x6e, 0x6f, 0x6e, 0x12, 0x2e, 0x0a, 0x13, 0x74, 0x6f,
+ 0x74, 0x61, 0x6c, 0x5f, 0x69, 0x6e, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x66, 0x69, 0x6c,
+ 0x65, 0x18, 0x1e, 0x20, 0x01, 0x28, 0x04, 0x52, 0x11, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x49, 0x6e,
+ 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x12, 0x2a, 0x0a, 0x11, 0x74, 0x6f,
+ 0x74, 0x61, 0x6c, 0x5f, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18,
+ 0x1f, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0f, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x41, 0x63, 0x74, 0x69,
+ 0x76, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f,
+ 0x75, 0x6e, 0x65, 0x76, 0x69, 0x63, 0x74, 0x61, 0x62, 0x6c, 0x65, 0x18, 0x20, 0x20, 0x01, 0x28,
+ 0x04, 0x52, 0x10, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x55, 0x6e, 0x65, 0x76, 0x69, 0x63, 0x74, 0x61,
+ 0x62, 0x6c, 0x65, 0x12, 0x3b, 0x0a, 0x05, 0x75, 0x73, 0x61, 0x67, 0x65, 0x18, 0x21, 0x20, 0x01,
+ 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65,
+ 0x6d, 0x6f, 0x72, 0x79, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x05, 0x75, 0x73, 0x61, 0x67, 0x65,
+ 0x12, 0x39, 0x0a, 0x04, 0x73, 0x77, 0x61, 0x70, 0x18, 0x22, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x25,
+ 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63,
+ 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x6d, 0x6f, 0x72, 0x79,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x04, 0x73, 0x77, 0x61, 0x70, 0x12, 0x3d, 0x0a, 0x06, 0x6b,
+ 0x65, 0x72, 0x6e, 0x65, 0x6c, 0x18, 0x23, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x69, 0x6f,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f,
+ 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x45, 0x6e, 0x74,
+ 0x72, 0x79, 0x52, 0x06, 0x6b, 0x65, 0x72, 0x6e, 0x65, 0x6c, 0x12, 0x44, 0x0a, 0x0a, 0x6b, 0x65,
+ 0x72, 0x6e, 0x65, 0x6c, 0x5f, 0x74, 0x63, 0x70, 0x18, 0x24, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x25,
+ 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63,
+ 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x4d, 0x65, 0x6d, 0x6f, 0x72, 0x79,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x09, 0x6b, 0x65, 0x72, 0x6e, 0x65, 0x6c, 0x54, 0x63, 0x70,
+ 0x22, 0x65, 0x0a, 0x0b, 0x4d, 0x65, 0x6d, 0x6f, 0x72, 0x79, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12,
+ 0x14, 0x0a, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05,
+ 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x75, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02,
+ 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x75, 0x73, 0x61, 0x67, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x6d,
+ 0x61, 0x78, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x6d, 0x61, 0x78, 0x12, 0x18, 0x0a,
+ 0x07, 0x66, 0x61, 0x69, 0x6c, 0x63, 0x6e, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07,
+ 0x66, 0x61, 0x69, 0x6c, 0x63, 0x6e, 0x74, 0x22, 0x74, 0x0a, 0x10, 0x4d, 0x65, 0x6d, 0x6f, 0x72,
+ 0x79, 0x4f, 0x6f, 0x6d, 0x43, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x12, 0x28, 0x0a, 0x10, 0x6f,
+ 0x6f, 0x6d, 0x5f, 0x6b, 0x69, 0x6c, 0x6c, 0x5f, 0x64, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0e, 0x6f, 0x6f, 0x6d, 0x4b, 0x69, 0x6c, 0x6c, 0x44, 0x69,
+ 0x73, 0x61, 0x62, 0x6c, 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x75, 0x6e, 0x64, 0x65, 0x72, 0x5f, 0x6f,
+ 0x6f, 0x6d, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x08, 0x75, 0x6e, 0x64, 0x65, 0x72, 0x4f,
+ 0x6f, 0x6d, 0x12, 0x19, 0x0a, 0x08, 0x6f, 0x6f, 0x6d, 0x5f, 0x6b, 0x69, 0x6c, 0x6c, 0x18, 0x03,
+ 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x6f, 0x6f, 0x6d, 0x4b, 0x69, 0x6c, 0x6c, 0x22, 0xd5, 0x05,
+ 0x0a, 0x09, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x53, 0x74, 0x61, 0x74, 0x12, 0x61, 0x0a, 0x1a, 0x69,
+ 0x6f, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x5f, 0x62, 0x79, 0x74, 0x65, 0x73, 0x5f,
+ 0x72, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32,
+ 0x24, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e,
+ 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x17, 0x69, 0x6f, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65,
+ 0x42, 0x79, 0x74, 0x65, 0x73, 0x52, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x12, 0x58,
+ 0x0a, 0x15, 0x69, 0x6f, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x64, 0x5f, 0x72, 0x65,
+ 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e,
+ 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67,
+ 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x45, 0x6e,
+ 0x74, 0x72, 0x79, 0x52, 0x13, 0x69, 0x6f, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x64, 0x52,
+ 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x12, 0x54, 0x0a, 0x13, 0x69, 0x6f, 0x5f, 0x71,
+ 0x75, 0x65, 0x75, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x18,
+ 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31,
+ 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x11, 0x69, 0x6f, 0x51,
+ 0x75, 0x65, 0x75, 0x65, 0x64, 0x52, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x12, 0x5f,
+ 0x0a, 0x19, 0x69, 0x6f, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x5f, 0x74, 0x69, 0x6d,
+ 0x65, 0x5f, 0x72, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x18, 0x04, 0x20, 0x03, 0x28,
+ 0x0b, 0x32, 0x24, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b,
+ 0x49, 0x4f, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x16, 0x69, 0x6f, 0x53, 0x65, 0x72, 0x76, 0x69,
+ 0x63, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x52, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x12,
+ 0x59, 0x0a, 0x16, 0x69, 0x6f, 0x5f, 0x77, 0x61, 0x69, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f,
+ 0x72, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32,
+ 0x24, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e,
+ 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x13, 0x69, 0x6f, 0x57, 0x61, 0x69, 0x74, 0x54, 0x69, 0x6d,
+ 0x65, 0x52, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x12, 0x54, 0x0a, 0x13, 0x69, 0x6f,
+ 0x5f, 0x6d, 0x65, 0x72, 0x67, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76,
+ 0x65, 0x18, 0x06, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e,
+ 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x11, 0x69,
+ 0x6f, 0x4d, 0x65, 0x72, 0x67, 0x65, 0x64, 0x52, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65,
+ 0x12, 0x50, 0x0a, 0x11, 0x69, 0x6f, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x72, 0x65, 0x63, 0x75,
+ 0x72, 0x73, 0x69, 0x76, 0x65, 0x18, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x69, 0x6f,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f,
+ 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x45, 0x6e, 0x74, 0x72,
+ 0x79, 0x52, 0x0f, 0x69, 0x6f, 0x54, 0x69, 0x6d, 0x65, 0x52, 0x65, 0x63, 0x75, 0x72, 0x73, 0x69,
+ 0x76, 0x65, 0x12, 0x51, 0x0a, 0x11, 0x73, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x73, 0x5f, 0x72, 0x65,
+ 0x63, 0x75, 0x72, 0x73, 0x69, 0x76, 0x65, 0x18, 0x08, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e,
+ 0x69, 0x6f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67,
+ 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x45, 0x6e,
+ 0x74, 0x72, 0x79, 0x52, 0x10, 0x73, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x73, 0x52, 0x65, 0x63, 0x75,
+ 0x72, 0x73, 0x69, 0x76, 0x65, 0x22, 0x76, 0x0a, 0x0a, 0x42, 0x6c, 0x6b, 0x49, 0x4f, 0x45, 0x6e,
+ 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x6f, 0x70, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x02, 0x6f, 0x70, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x65, 0x76, 0x69, 0x63, 0x65, 0x18, 0x02, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x06, 0x64, 0x65, 0x76, 0x69, 0x63, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x6d,
+ 0x61, 0x6a, 0x6f, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x6d, 0x61, 0x6a, 0x6f,
+ 0x72, 0x12, 0x14, 0x0a, 0x05, 0x6d, 0x69, 0x6e, 0x6f, 0x72, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x05, 0x6d, 0x69, 0x6e, 0x6f, 0x72, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
+ 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x22, 0x84, 0x01,
+ 0x0a, 0x08, 0x52, 0x64, 0x6d, 0x61, 0x53, 0x74, 0x61, 0x74, 0x12, 0x3d, 0x0a, 0x07, 0x63, 0x75,
+ 0x72, 0x72, 0x65, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x69, 0x6f,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f,
+ 0x75, 0x70, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x52, 0x64, 0x6d, 0x61, 0x45, 0x6e, 0x74, 0x72, 0x79,
+ 0x52, 0x07, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x12, 0x39, 0x0a, 0x05, 0x6c, 0x69, 0x6d,
+ 0x69, 0x74, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x69, 0x6f, 0x2e, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73,
+ 0x2e, 0x76, 0x31, 0x2e, 0x52, 0x64, 0x6d, 0x61, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x05, 0x6c,
+ 0x69, 0x6d, 0x69, 0x74, 0x22, 0x65, 0x0a, 0x09, 0x52, 0x64, 0x6d, 0x61, 0x45, 0x6e, 0x74, 0x72,
+ 0x79, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x65, 0x76, 0x69, 0x63, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x06, 0x64, 0x65, 0x76, 0x69, 0x63, 0x65, 0x12, 0x1f, 0x0a, 0x0b, 0x68, 0x63, 0x61,
+ 0x5f, 0x68, 0x61, 0x6e, 0x64, 0x6c, 0x65, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a,
+ 0x68, 0x63, 0x61, 0x48, 0x61, 0x6e, 0x64, 0x6c, 0x65, 0x73, 0x12, 0x1f, 0x0a, 0x0b, 0x68, 0x63,
+ 0x61, 0x5f, 0x6f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52,
+ 0x0a, 0x68, 0x63, 0x61, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x73, 0x22, 0x8d, 0x02, 0x0a, 0x0b,
+ 0x4e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x53, 0x74, 0x61, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x6e,
+ 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12,
+ 0x19, 0x0a, 0x08, 0x72, 0x78, 0x5f, 0x62, 0x79, 0x74, 0x65, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28,
+ 0x04, 0x52, 0x07, 0x72, 0x78, 0x42, 0x79, 0x74, 0x65, 0x73, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x78,
+ 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09,
+ 0x72, 0x78, 0x50, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x73, 0x12, 0x1b, 0x0a, 0x09, 0x72, 0x78, 0x5f,
+ 0x65, 0x72, 0x72, 0x6f, 0x72, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x08, 0x72, 0x78,
+ 0x45, 0x72, 0x72, 0x6f, 0x72, 0x73, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x78, 0x5f, 0x64, 0x72, 0x6f,
+ 0x70, 0x70, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x72, 0x78, 0x44, 0x72,
+ 0x6f, 0x70, 0x70, 0x65, 0x64, 0x12, 0x19, 0x0a, 0x08, 0x74, 0x78, 0x5f, 0x62, 0x79, 0x74, 0x65,
+ 0x73, 0x18, 0x06, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x74, 0x78, 0x42, 0x79, 0x74, 0x65, 0x73,
+ 0x12, 0x1d, 0x0a, 0x0a, 0x74, 0x78, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x73, 0x18, 0x07,
+ 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x74, 0x78, 0x50, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x73, 0x12,
+ 0x1b, 0x0a, 0x09, 0x74, 0x78, 0x5f, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x73, 0x18, 0x08, 0x20, 0x01,
+ 0x28, 0x04, 0x52, 0x08, 0x74, 0x78, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x73, 0x12, 0x1d, 0x0a, 0x0a,
+ 0x74, 0x78, 0x5f, 0x64, 0x72, 0x6f, 0x70, 0x70, 0x65, 0x64, 0x18, 0x09, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x09, 0x74, 0x78, 0x44, 0x72, 0x6f, 0x70, 0x70, 0x65, 0x64, 0x22, 0xb9, 0x01, 0x0a, 0x0b,
+ 0x43, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x53, 0x74, 0x61, 0x74, 0x73, 0x12, 0x1f, 0x0a, 0x0b, 0x6e,
+ 0x72, 0x5f, 0x73, 0x6c, 0x65, 0x65, 0x70, 0x69, 0x6e, 0x67, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x0a, 0x6e, 0x72, 0x53, 0x6c, 0x65, 0x65, 0x70, 0x69, 0x6e, 0x67, 0x12, 0x1d, 0x0a, 0x0a,
+ 0x6e, 0x72, 0x5f, 0x72, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04,
+ 0x52, 0x09, 0x6e, 0x72, 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x12, 0x1d, 0x0a, 0x0a, 0x6e,
+ 0x72, 0x5f, 0x73, 0x74, 0x6f, 0x70, 0x70, 0x65, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52,
+ 0x09, 0x6e, 0x72, 0x53, 0x74, 0x6f, 0x70, 0x70, 0x65, 0x64, 0x12, 0x2d, 0x0a, 0x12, 0x6e, 0x72,
+ 0x5f, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x72, 0x75, 0x70, 0x74, 0x69, 0x62, 0x6c, 0x65,
+ 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x11, 0x6e, 0x72, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65,
+ 0x72, 0x72, 0x75, 0x70, 0x74, 0x69, 0x62, 0x6c, 0x65, 0x12, 0x1c, 0x0a, 0x0a, 0x6e, 0x72, 0x5f,
+ 0x69, 0x6f, 0x5f, 0x77, 0x61, 0x69, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x08, 0x6e,
+ 0x72, 0x49, 0x6f, 0x57, 0x61, 0x69, 0x74, 0x42, 0x2d, 0x5a, 0x2b, 0x67, 0x69, 0x74, 0x68, 0x75,
+ 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2f, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x73, 0x2f, 0x63, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x31,
+ 0x2f, 0x73, 0x74, 0x61, 0x74, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescOnce sync.Once
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescData = file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDesc
+)
+
+func file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescData)
+ })
+ return file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDescData
+}
+
+var file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes = make([]protoimpl.MessageInfo, 15)
+var file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_goTypes = []interface{}{
+ (*Metrics)(nil), // 0: io.containerd.cgroups.v1.Metrics
+ (*HugetlbStat)(nil), // 1: io.containerd.cgroups.v1.HugetlbStat
+ (*PidsStat)(nil), // 2: io.containerd.cgroups.v1.PidsStat
+ (*CPUStat)(nil), // 3: io.containerd.cgroups.v1.CPUStat
+ (*CPUUsage)(nil), // 4: io.containerd.cgroups.v1.CPUUsage
+ (*Throttle)(nil), // 5: io.containerd.cgroups.v1.Throttle
+ (*MemoryStat)(nil), // 6: io.containerd.cgroups.v1.MemoryStat
+ (*MemoryEntry)(nil), // 7: io.containerd.cgroups.v1.MemoryEntry
+ (*MemoryOomControl)(nil), // 8: io.containerd.cgroups.v1.MemoryOomControl
+ (*BlkIOStat)(nil), // 9: io.containerd.cgroups.v1.BlkIOStat
+ (*BlkIOEntry)(nil), // 10: io.containerd.cgroups.v1.BlkIOEntry
+ (*RdmaStat)(nil), // 11: io.containerd.cgroups.v1.RdmaStat
+ (*RdmaEntry)(nil), // 12: io.containerd.cgroups.v1.RdmaEntry
+ (*NetworkStat)(nil), // 13: io.containerd.cgroups.v1.NetworkStat
+ (*CgroupStats)(nil), // 14: io.containerd.cgroups.v1.CgroupStats
+}
+var file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_depIdxs = []int32{
+ 1, // 0: io.containerd.cgroups.v1.Metrics.hugetlb:type_name -> io.containerd.cgroups.v1.HugetlbStat
+ 2, // 1: io.containerd.cgroups.v1.Metrics.pids:type_name -> io.containerd.cgroups.v1.PidsStat
+ 3, // 2: io.containerd.cgroups.v1.Metrics.cpu:type_name -> io.containerd.cgroups.v1.CPUStat
+ 6, // 3: io.containerd.cgroups.v1.Metrics.memory:type_name -> io.containerd.cgroups.v1.MemoryStat
+ 9, // 4: io.containerd.cgroups.v1.Metrics.blkio:type_name -> io.containerd.cgroups.v1.BlkIOStat
+ 11, // 5: io.containerd.cgroups.v1.Metrics.rdma:type_name -> io.containerd.cgroups.v1.RdmaStat
+ 13, // 6: io.containerd.cgroups.v1.Metrics.network:type_name -> io.containerd.cgroups.v1.NetworkStat
+ 14, // 7: io.containerd.cgroups.v1.Metrics.cgroup_stats:type_name -> io.containerd.cgroups.v1.CgroupStats
+ 8, // 8: io.containerd.cgroups.v1.Metrics.memory_oom_control:type_name -> io.containerd.cgroups.v1.MemoryOomControl
+ 4, // 9: io.containerd.cgroups.v1.CPUStat.usage:type_name -> io.containerd.cgroups.v1.CPUUsage
+ 5, // 10: io.containerd.cgroups.v1.CPUStat.throttling:type_name -> io.containerd.cgroups.v1.Throttle
+ 7, // 11: io.containerd.cgroups.v1.MemoryStat.usage:type_name -> io.containerd.cgroups.v1.MemoryEntry
+ 7, // 12: io.containerd.cgroups.v1.MemoryStat.swap:type_name -> io.containerd.cgroups.v1.MemoryEntry
+ 7, // 13: io.containerd.cgroups.v1.MemoryStat.kernel:type_name -> io.containerd.cgroups.v1.MemoryEntry
+ 7, // 14: io.containerd.cgroups.v1.MemoryStat.kernel_tcp:type_name -> io.containerd.cgroups.v1.MemoryEntry
+ 10, // 15: io.containerd.cgroups.v1.BlkIOStat.io_service_bytes_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 16: io.containerd.cgroups.v1.BlkIOStat.io_serviced_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 17: io.containerd.cgroups.v1.BlkIOStat.io_queued_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 18: io.containerd.cgroups.v1.BlkIOStat.io_service_time_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 19: io.containerd.cgroups.v1.BlkIOStat.io_wait_time_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 20: io.containerd.cgroups.v1.BlkIOStat.io_merged_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 21: io.containerd.cgroups.v1.BlkIOStat.io_time_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 10, // 22: io.containerd.cgroups.v1.BlkIOStat.sectors_recursive:type_name -> io.containerd.cgroups.v1.BlkIOEntry
+ 12, // 23: io.containerd.cgroups.v1.RdmaStat.current:type_name -> io.containerd.cgroups.v1.RdmaEntry
+ 12, // 24: io.containerd.cgroups.v1.RdmaStat.limit:type_name -> io.containerd.cgroups.v1.RdmaEntry
+ 25, // [25:25] is the sub-list for method output_type
+ 25, // [25:25] is the sub-list for method input_type
+ 25, // [25:25] is the sub-list for extension type_name
+ 25, // [25:25] is the sub-list for extension extendee
+ 0, // [0:25] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_init() }
+func file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_init() {
+ if File_github_com_containerd_cgroups_cgroup1_stats_metrics_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Metrics); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*HugetlbStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*PidsStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*CPUStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*CPUUsage); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Throttle); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*MemoryStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*MemoryEntry); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*MemoryOomControl); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*BlkIOStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*BlkIOEntry); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*RdmaStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*RdmaEntry); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*NetworkStat); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*CgroupStats); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 15,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_cgroups_cgroup1_stats_metrics_proto = out.File
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_rawDesc = nil
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_goTypes = nil
+ file_github_com_containerd_cgroups_cgroup1_stats_metrics_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.pb.txt
similarity index 97%
rename from tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt
rename to tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.pb.txt
index e476cea64..7e4313ea5 100644
--- a/tools/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt
+++ b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.pb.txt
@@ -1,7 +1,6 @@
file {
- name: "github.com/containerd/cgroups/stats/v1/metrics.proto"
+ name: "github.com/containerd/cgroups/cgroup1/stats/metrics.proto"
package: "io.containerd.cgroups.v1"
- dependency: "gogoproto/gogo.proto"
message_type {
name: "Metrics"
field {
@@ -26,9 +25,6 @@ file {
label: LABEL_OPTIONAL
type: TYPE_MESSAGE
type_name: ".io.containerd.cgroups.v1.CPUStat"
- options {
- 65004: "CPU"
- }
json_name: "cpu"
}
field {
@@ -175,9 +171,6 @@ file {
number: 4
label: LABEL_REPEATED
type: TYPE_UINT64
- options {
- 65004: "PerCPU"
- }
json_name: "perCpu"
}
}
@@ -219,9 +212,6 @@ file {
number: 2
label: LABEL_OPTIONAL
type: TYPE_UINT64
- options {
- 65004: "RSS"
- }
json_name: "rss"
}
field {
@@ -229,9 +219,6 @@ file {
number: 3
label: LABEL_OPTIONAL
type: TYPE_UINT64
- options {
- 65004: "RSSHuge"
- }
json_name: "rssHuge"
}
field {
@@ -344,9 +331,6 @@ file {
number: 19
label: LABEL_OPTIONAL
type: TYPE_UINT64
- options {
- 65004: "TotalRSS"
- }
json_name: "totalRss"
}
field {
@@ -354,9 +338,6 @@ file {
number: 20
label: LABEL_OPTIONAL
type: TYPE_UINT64
- options {
- 65004: "TotalRSSHuge"
- }
json_name: "totalRssHuge"
}
field {
@@ -473,9 +454,6 @@ file {
label: LABEL_OPTIONAL
type: TYPE_MESSAGE
type_name: ".io.containerd.cgroups.v1.MemoryEntry"
- options {
- 65004: "KernelTCP"
- }
json_name: "kernelTcp"
}
}
@@ -786,5 +764,8 @@ file {
json_name: "nrIoWait"
}
}
+ options {
+ go_package: "github.com/containerd/cgroups/cgroup1/stats"
+ }
syntax: "proto3"
}
diff --git a/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.proto b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.proto
new file mode 100644
index 000000000..e6e4444b1
--- /dev/null
+++ b/tools/vendor/github.com/containerd/cgroups/v3/cgroup1/stats/metrics.proto
@@ -0,0 +1,158 @@
+syntax = "proto3";
+
+package io.containerd.cgroups.v1;
+
+option go_package = "github.com/containerd/cgroups/cgroup1/stats";
+
+message Metrics {
+ repeated HugetlbStat hugetlb = 1;
+ PidsStat pids = 2;
+ CPUStat cpu = 3;
+ MemoryStat memory = 4;
+ BlkIOStat blkio = 5;
+ RdmaStat rdma = 6;
+ repeated NetworkStat network = 7;
+ CgroupStats cgroup_stats = 8;
+ MemoryOomControl memory_oom_control = 9;
+}
+
+message HugetlbStat {
+ uint64 usage = 1;
+ uint64 max = 2;
+ uint64 failcnt = 3;
+ string pagesize = 4;
+}
+
+message PidsStat {
+ uint64 current = 1;
+ uint64 limit = 2;
+}
+
+message CPUStat {
+ CPUUsage usage = 1;
+ Throttle throttling = 2;
+}
+
+message CPUUsage {
+ // values in nanoseconds
+ uint64 total = 1;
+ uint64 kernel = 2;
+ uint64 user = 3;
+ repeated uint64 per_cpu = 4;
+
+}
+
+message Throttle {
+ uint64 periods = 1;
+ uint64 throttled_periods = 2;
+ uint64 throttled_time = 3;
+}
+
+message MemoryStat {
+ uint64 cache = 1;
+ uint64 rss = 2;
+ uint64 rss_huge = 3;
+ uint64 mapped_file = 4;
+ uint64 dirty = 5;
+ uint64 writeback = 6;
+ uint64 pg_pg_in = 7;
+ uint64 pg_pg_out = 8;
+ uint64 pg_fault = 9;
+ uint64 pg_maj_fault = 10;
+ uint64 inactive_anon = 11;
+ uint64 active_anon = 12;
+ uint64 inactive_file = 13;
+ uint64 active_file = 14;
+ uint64 unevictable = 15;
+ uint64 hierarchical_memory_limit = 16;
+ uint64 hierarchical_swap_limit = 17;
+ uint64 total_cache = 18;
+ uint64 total_rss = 19;
+ uint64 total_rss_huge = 20;
+ uint64 total_mapped_file = 21;
+ uint64 total_dirty = 22;
+ uint64 total_writeback = 23;
+ uint64 total_pg_pg_in = 24;
+ uint64 total_pg_pg_out = 25;
+ uint64 total_pg_fault = 26;
+ uint64 total_pg_maj_fault = 27;
+ uint64 total_inactive_anon = 28;
+ uint64 total_active_anon = 29;
+ uint64 total_inactive_file = 30;
+ uint64 total_active_file = 31;
+ uint64 total_unevictable = 32;
+ MemoryEntry usage = 33;
+ MemoryEntry swap = 34;
+ MemoryEntry kernel = 35;
+ MemoryEntry kernel_tcp = 36;
+
+}
+
+message MemoryEntry {
+ uint64 limit = 1;
+ uint64 usage = 2;
+ uint64 max = 3;
+ uint64 failcnt = 4;
+}
+
+message MemoryOomControl {
+ uint64 oom_kill_disable = 1;
+ uint64 under_oom = 2;
+ uint64 oom_kill = 3;
+}
+
+message BlkIOStat {
+ repeated BlkIOEntry io_service_bytes_recursive = 1;
+ repeated BlkIOEntry io_serviced_recursive = 2;
+ repeated BlkIOEntry io_queued_recursive = 3;
+ repeated BlkIOEntry io_service_time_recursive = 4;
+ repeated BlkIOEntry io_wait_time_recursive = 5;
+ repeated BlkIOEntry io_merged_recursive = 6;
+ repeated BlkIOEntry io_time_recursive = 7;
+ repeated BlkIOEntry sectors_recursive = 8;
+}
+
+message BlkIOEntry {
+ string op = 1;
+ string device = 2;
+ uint64 major = 3;
+ uint64 minor = 4;
+ uint64 value = 5;
+}
+
+message RdmaStat {
+ repeated RdmaEntry current = 1;
+ repeated RdmaEntry limit = 2;
+}
+
+message RdmaEntry {
+ string device = 1;
+ uint32 hca_handles = 2;
+ uint32 hca_objects = 3;
+}
+
+message NetworkStat {
+ string name = 1;
+ uint64 rx_bytes = 2;
+ uint64 rx_packets = 3;
+ uint64 rx_errors = 4;
+ uint64 rx_dropped = 5;
+ uint64 tx_bytes = 6;
+ uint64 tx_packets = 7;
+ uint64 tx_errors = 8;
+ uint64 tx_dropped = 9;
+}
+
+// CgroupStats exports per-cgroup statistics.
+message CgroupStats {
+ // number of tasks sleeping
+ uint64 nr_sleeping = 1;
+ // number of tasks running
+ uint64 nr_running = 2;
+ // number of tasks in stopped state
+ uint64 nr_stopped = 3;
+ // number of tasks in uninterruptible state
+ uint64 nr_uninterruptible = 4;
+ // number of tasks waiting on IO
+ uint64 nr_io_wait = 5;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/container.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/container.pb.go
new file mode 100644
index 000000000..d7d40258c
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/container.pb.go
@@ -0,0 +1,431 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/container.proto
+
+package events
+
+import (
+ _ "github.com/containerd/containerd/protobuf/plugin"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ anypb "google.golang.org/protobuf/types/known/anypb"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type ContainerCreate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
+ Image string `protobuf:"bytes,2,opt,name=image,proto3" json:"image,omitempty"`
+ Runtime *ContainerCreate_Runtime `protobuf:"bytes,3,opt,name=runtime,proto3" json:"runtime,omitempty"`
+}
+
+func (x *ContainerCreate) Reset() {
+ *x = ContainerCreate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ContainerCreate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ContainerCreate) ProtoMessage() {}
+
+func (x *ContainerCreate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ContainerCreate.ProtoReflect.Descriptor instead.
+func (*ContainerCreate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_container_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *ContainerCreate) GetID() string {
+ if x != nil {
+ return x.ID
+ }
+ return ""
+}
+
+func (x *ContainerCreate) GetImage() string {
+ if x != nil {
+ return x.Image
+ }
+ return ""
+}
+
+func (x *ContainerCreate) GetRuntime() *ContainerCreate_Runtime {
+ if x != nil {
+ return x.Runtime
+ }
+ return nil
+}
+
+type ContainerUpdate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
+ Image string `protobuf:"bytes,2,opt,name=image,proto3" json:"image,omitempty"`
+ Labels map[string]string `protobuf:"bytes,3,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+ SnapshotKey string `protobuf:"bytes,4,opt,name=snapshot_key,json=snapshotKey,proto3" json:"snapshot_key,omitempty"`
+}
+
+func (x *ContainerUpdate) Reset() {
+ *x = ContainerUpdate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ContainerUpdate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ContainerUpdate) ProtoMessage() {}
+
+func (x *ContainerUpdate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ContainerUpdate.ProtoReflect.Descriptor instead.
+func (*ContainerUpdate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_container_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *ContainerUpdate) GetID() string {
+ if x != nil {
+ return x.ID
+ }
+ return ""
+}
+
+func (x *ContainerUpdate) GetImage() string {
+ if x != nil {
+ return x.Image
+ }
+ return ""
+}
+
+func (x *ContainerUpdate) GetLabels() map[string]string {
+ if x != nil {
+ return x.Labels
+ }
+ return nil
+}
+
+func (x *ContainerUpdate) GetSnapshotKey() string {
+ if x != nil {
+ return x.SnapshotKey
+ }
+ return ""
+}
+
+type ContainerDelete struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
+}
+
+func (x *ContainerDelete) Reset() {
+ *x = ContainerDelete{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ContainerDelete) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ContainerDelete) ProtoMessage() {}
+
+func (x *ContainerDelete) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ContainerDelete.ProtoReflect.Descriptor instead.
+func (*ContainerDelete) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_container_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *ContainerDelete) GetID() string {
+ if x != nil {
+ return x.ID
+ }
+ return ""
+}
+
+type ContainerCreate_Runtime struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Options *anypb.Any `protobuf:"bytes,2,opt,name=options,proto3" json:"options,omitempty"`
+}
+
+func (x *ContainerCreate_Runtime) Reset() {
+ *x = ContainerCreate_Runtime{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ContainerCreate_Runtime) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ContainerCreate_Runtime) ProtoMessage() {}
+
+func (x *ContainerCreate_Runtime) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_container_proto_msgTypes[3]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ContainerCreate_Runtime.ProtoReflect.Descriptor instead.
+func (*ContainerCreate_Runtime) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_container_proto_rawDescGZIP(), []int{0, 0}
+}
+
+func (x *ContainerCreate_Runtime) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *ContainerCreate_Runtime) GetOptions() *anypb.Any {
+ if x != nil {
+ return x.Options
+ }
+ return nil
+}
+
+var File_github_com_containerd_containerd_api_events_container_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_container_proto_rawDesc = []byte{
+ 0x0a, 0x3b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x11, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73,
+ 0x1a, 0x19, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
+ 0x66, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x40, 0x67, 0x69, 0x74,
+ 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x70, 0x72,
+ 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x66, 0x69,
+ 0x65, 0x6c, 0x64, 0x70, 0x61, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xcc, 0x01,
+ 0x0a, 0x0f, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x43, 0x72, 0x65, 0x61, 0x74,
+ 0x65, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69,
+ 0x64, 0x12, 0x14, 0x0a, 0x05, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,
+ 0x52, 0x05, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x12, 0x44, 0x0a, 0x07, 0x72, 0x75, 0x6e, 0x74, 0x69,
+ 0x6d, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2a, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2e, 0x43, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x75, 0x6e,
+ 0x74, 0x69, 0x6d, 0x65, 0x52, 0x07, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x1a, 0x4d, 0x0a,
+ 0x07, 0x52, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65,
+ 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x07,
+ 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e,
+ 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
+ 0x41, 0x6e, 0x79, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0xdd, 0x01, 0x0a,
+ 0x0f, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65,
+ 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64,
+ 0x12, 0x14, 0x0a, 0x05, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x05, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x12, 0x46, 0x0a, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73,
+ 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2e, 0x43, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c,
+ 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x12, 0x21,
+ 0x0a, 0x0c, 0x73, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x04,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x4b, 0x65,
+ 0x79, 0x1a, 0x39, 0x0a, 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79,
+ 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b,
+ 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x21, 0x0a, 0x0f,
+ 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x12,
+ 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x42,
+ 0x38, 0x5a, 0x32, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b, 0x65,
+ 0x76, 0x65, 0x6e, 0x74, 0x73, 0xa0, 0xf4, 0x1e, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
+ 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_container_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_container_proto_rawDescData = file_github_com_containerd_containerd_api_events_container_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_container_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_container_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_container_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_container_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_container_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_container_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
+var file_github_com_containerd_containerd_api_events_container_proto_goTypes = []interface{}{
+ (*ContainerCreate)(nil), // 0: containerd.events.ContainerCreate
+ (*ContainerUpdate)(nil), // 1: containerd.events.ContainerUpdate
+ (*ContainerDelete)(nil), // 2: containerd.events.ContainerDelete
+ (*ContainerCreate_Runtime)(nil), // 3: containerd.events.ContainerCreate.Runtime
+ nil, // 4: containerd.events.ContainerUpdate.LabelsEntry
+ (*anypb.Any)(nil), // 5: google.protobuf.Any
+}
+var file_github_com_containerd_containerd_api_events_container_proto_depIdxs = []int32{
+ 3, // 0: containerd.events.ContainerCreate.runtime:type_name -> containerd.events.ContainerCreate.Runtime
+ 4, // 1: containerd.events.ContainerUpdate.labels:type_name -> containerd.events.ContainerUpdate.LabelsEntry
+ 5, // 2: containerd.events.ContainerCreate.Runtime.options:type_name -> google.protobuf.Any
+ 3, // [3:3] is the sub-list for method output_type
+ 3, // [3:3] is the sub-list for method input_type
+ 3, // [3:3] is the sub-list for extension type_name
+ 3, // [3:3] is the sub-list for extension extendee
+ 0, // [0:3] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_container_proto_init() }
+func file_github_com_containerd_containerd_api_events_container_proto_init() {
+ if File_github_com_containerd_containerd_api_events_container_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_container_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ContainerCreate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_container_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ContainerUpdate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_container_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ContainerDelete); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_container_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ContainerCreate_Runtime); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_container_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 5,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_container_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_container_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_container_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_container_proto = out.File
+ file_github_com_containerd_containerd_api_events_container_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_container_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_container_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/container.proto b/tools/vendor/github.com/containerd/containerd/api/events/container.proto
new file mode 100644
index 000000000..29fd18f88
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/container.proto
@@ -0,0 +1,46 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.events;
+
+import "google/protobuf/any.proto";
+import "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+option (containerd.plugin.fieldpath_all) = true;
+
+message ContainerCreate {
+ string id = 1;
+ string image = 2;
+ message Runtime {
+ string name = 1;
+ google.protobuf.Any options = 2;
+ }
+ Runtime runtime = 3;
+}
+
+message ContainerUpdate {
+ string id = 1;
+ string image = 2;
+  map<string, string> labels = 3;
+ string snapshot_key = 4;
+}
+
+message ContainerDelete {
+ string id = 1;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/container_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/container_fieldpath.pb.go
new file mode 100644
index 000000000..c23b07762
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/container_fieldpath.pb.go
@@ -0,0 +1,95 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/container.proto
+package events
+
+import (
+ v2 "github.com/containerd/typeurl/v2"
+ strings "strings"
+)
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ContainerCreate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "id":
+ return string(m.ID), len(m.ID) > 0
+ case "image":
+ return string(m.Image), len(m.Image) > 0
+ case "runtime":
+ // NOTE(stevvooe): This is probably not correct in many cases.
+ // We assume that the target message also implements the Field
+ // method, which isn't likely true in a lot of cases.
+ //
+ // If you have a broken build and have found this comment,
+ // you may be closer to a solution.
+ if m.Runtime == nil {
+ return "", false
+ }
+ return m.Runtime.Field(fieldpath[1:])
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ContainerCreate_Runtime) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ case "options":
+ decoded, err := v2.UnmarshalAny(m.Options)
+ if err != nil {
+ return "", false
+ }
+ adaptor, ok := decoded.(interface{ Field([]string) (string, bool) })
+ if !ok {
+ return "", false
+ }
+ return adaptor.Field(fieldpath[1:])
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ContainerUpdate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "id":
+ return string(m.ID), len(m.ID) > 0
+ case "image":
+ return string(m.Image), len(m.Image) > 0
+ case "labels":
+ // Labels fields have been special-cased by name. If this breaks,
+ // add better special casing to fieldpath plugin.
+ if len(m.Labels) == 0 {
+ return "", false
+ }
+ value, ok := m.Labels[strings.Join(fieldpath[1:], ".")]
+ return value, ok
+ case "snapshot_key":
+ return string(m.SnapshotKey), len(m.SnapshotKey) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ContainerDelete) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "id":
+ return string(m.ID), len(m.ID) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/content.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/content.pb.go
new file mode 100644
index 000000000..9e3f843e2
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/content.pb.go
@@ -0,0 +1,168 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/content.proto
+
+package events
+
+import (
+ _ "github.com/containerd/containerd/protobuf/plugin"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type ContentDelete struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Digest string `protobuf:"bytes,1,opt,name=digest,proto3" json:"digest,omitempty"`
+}
+
+func (x *ContentDelete) Reset() {
+ *x = ContentDelete{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_content_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ContentDelete) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ContentDelete) ProtoMessage() {}
+
+func (x *ContentDelete) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_content_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ContentDelete.ProtoReflect.Descriptor instead.
+func (*ContentDelete) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_content_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *ContentDelete) GetDigest() string {
+ if x != nil {
+ return x.Digest
+ }
+ return ""
+}
+
+var File_github_com_containerd_containerd_api_events_content_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_content_proto_rawDesc = []byte{
+ 0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x63, 0x6f,
+ 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x11, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x1a, 0x40,
+ 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e,
+ 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x70, 0x61, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
+ 0x22, 0x27, 0x0a, 0x0d, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x44, 0x65, 0x6c, 0x65, 0x74,
+ 0x65, 0x12, 0x16, 0x0a, 0x06, 0x64, 0x69, 0x67, 0x65, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x06, 0x64, 0x69, 0x67, 0x65, 0x73, 0x74, 0x42, 0x38, 0x5a, 0x32, 0x67, 0x69, 0x74,
+ 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70,
+ 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0xa0,
+ 0xf4, 0x1e, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_content_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_content_proto_rawDescData = file_github_com_containerd_containerd_api_events_content_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_content_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_content_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_content_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_content_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_content_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_content_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
+var file_github_com_containerd_containerd_api_events_content_proto_goTypes = []interface{}{
+ (*ContentDelete)(nil), // 0: containerd.events.ContentDelete
+}
+var file_github_com_containerd_containerd_api_events_content_proto_depIdxs = []int32{
+ 0, // [0:0] is the sub-list for method output_type
+ 0, // [0:0] is the sub-list for method input_type
+ 0, // [0:0] is the sub-list for extension type_name
+ 0, // [0:0] is the sub-list for extension extendee
+ 0, // [0:0] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_content_proto_init() }
+func file_github_com_containerd_containerd_api_events_content_proto_init() {
+ if File_github_com_containerd_containerd_api_events_content_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_content_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ContentDelete); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_content_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 1,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_content_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_content_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_content_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_content_proto = out.File
+ file_github_com_containerd_containerd_api_events_content_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_content_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_content_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/content.proto b/tools/vendor/github.com/containerd/containerd/api/events/content.proto
new file mode 100644
index 000000000..97d424139
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/content.proto
@@ -0,0 +1,28 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.events;
+
+import "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+option (containerd.plugin.fieldpath_all) = true;
+
+message ContentDelete {
+ string digest = 1;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/content_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/content_fieldpath.pb.go
new file mode 100644
index 000000000..9485b664c
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/content_fieldpath.pb.go
@@ -0,0 +1,16 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/content.proto
+package events
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ContentDelete) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "digest":
+ return string(m.Digest), len(m.Digest) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/doc.go b/tools/vendor/github.com/containerd/containerd/api/events/doc.go
new file mode 100644
index 000000000..354bef79f
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/doc.go
@@ -0,0 +1,19 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+// Package events has protobuf types for various events that are used in
+// containerd.
+package events
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/image.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/image.pb.go
new file mode 100644
index 000000000..111af29e6
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/image.pb.go
@@ -0,0 +1,330 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/image.proto
+
+package events
+
+import (
+ _ "github.com/containerd/containerd/protobuf/plugin"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type ImageCreate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Labels map[string]string `protobuf:"bytes,2,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *ImageCreate) Reset() {
+ *x = ImageCreate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_image_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ImageCreate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ImageCreate) ProtoMessage() {}
+
+func (x *ImageCreate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_image_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ImageCreate.ProtoReflect.Descriptor instead.
+func (*ImageCreate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_image_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *ImageCreate) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *ImageCreate) GetLabels() map[string]string {
+ if x != nil {
+ return x.Labels
+ }
+ return nil
+}
+
+type ImageUpdate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Labels map[string]string `protobuf:"bytes,2,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *ImageUpdate) Reset() {
+ *x = ImageUpdate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_image_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ImageUpdate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ImageUpdate) ProtoMessage() {}
+
+func (x *ImageUpdate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_image_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ImageUpdate.ProtoReflect.Descriptor instead.
+func (*ImageUpdate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_image_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *ImageUpdate) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *ImageUpdate) GetLabels() map[string]string {
+ if x != nil {
+ return x.Labels
+ }
+ return nil
+}
+
+type ImageDelete struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+}
+
+func (x *ImageDelete) Reset() {
+ *x = ImageDelete{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_image_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ImageDelete) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ImageDelete) ProtoMessage() {}
+
+func (x *ImageDelete) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_image_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ImageDelete.ProtoReflect.Descriptor instead.
+func (*ImageDelete) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_image_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *ImageDelete) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+var File_github_com_containerd_containerd_api_events_image_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_image_proto_rawDesc = []byte{
+ 0x0a, 0x37, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x69, 0x6d,
+ 0x61, 0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x1d, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x2e, 0x69,
+ 0x6d, 0x61, 0x67, 0x65, 0x73, 0x2e, 0x76, 0x31, 0x1a, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
+ 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f,
+ 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
+ 0x62, 0x75, 0x66, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64,
+ 0x70, 0x61, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xac, 0x01, 0x0a, 0x0b, 0x49,
+ 0x6d, 0x61, 0x67, 0x65, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61,
+ 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x4e,
+ 0x0a, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x36,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76,
+ 0x69, 0x63, 0x65, 0x73, 0x2e, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x49,
+ 0x6d, 0x61, 0x67, 0x65, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c,
+ 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x1a, 0x39,
+ 0x0a, 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
+ 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
+ 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05,
+ 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xac, 0x01, 0x0a, 0x0b, 0x49, 0x6d,
+ 0x61, 0x67, 0x65, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d,
+ 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x4e, 0x0a,
+ 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x36, 0x2e,
+ 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69,
+ 0x63, 0x65, 0x73, 0x2e, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x73, 0x2e, 0x76, 0x31, 0x2e, 0x49, 0x6d,
+ 0x61, 0x67, 0x65, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x1a, 0x39, 0x0a,
+ 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03,
+ 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14,
+ 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76,
+ 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x21, 0x0a, 0x0b, 0x49, 0x6d, 0x61, 0x67,
+ 0x65, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x42, 0x38, 0x5a, 0x32, 0x67,
+ 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69,
+ 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f,
+ 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b, 0x65, 0x76, 0x65, 0x6e, 0x74,
+ 0x73, 0xa0, 0xf4, 0x1e, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_image_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_image_proto_rawDescData = file_github_com_containerd_containerd_api_events_image_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_image_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_image_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_image_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_image_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_image_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_image_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
+var file_github_com_containerd_containerd_api_events_image_proto_goTypes = []interface{}{
+ (*ImageCreate)(nil), // 0: containerd.services.images.v1.ImageCreate
+ (*ImageUpdate)(nil), // 1: containerd.services.images.v1.ImageUpdate
+ (*ImageDelete)(nil), // 2: containerd.services.images.v1.ImageDelete
+ nil, // 3: containerd.services.images.v1.ImageCreate.LabelsEntry
+ nil, // 4: containerd.services.images.v1.ImageUpdate.LabelsEntry
+}
+var file_github_com_containerd_containerd_api_events_image_proto_depIdxs = []int32{
+ 3, // 0: containerd.services.images.v1.ImageCreate.labels:type_name -> containerd.services.images.v1.ImageCreate.LabelsEntry
+ 4, // 1: containerd.services.images.v1.ImageUpdate.labels:type_name -> containerd.services.images.v1.ImageUpdate.LabelsEntry
+ 2, // [2:2] is the sub-list for method output_type
+ 2, // [2:2] is the sub-list for method input_type
+ 2, // [2:2] is the sub-list for extension type_name
+ 2, // [2:2] is the sub-list for extension extendee
+ 0, // [0:2] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_image_proto_init() }
+func file_github_com_containerd_containerd_api_events_image_proto_init() {
+ if File_github_com_containerd_containerd_api_events_image_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_image_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ImageCreate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_image_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ImageUpdate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_image_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ImageDelete); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_image_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 5,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_image_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_image_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_image_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_image_proto = out.File
+ file_github_com_containerd_containerd_api_events_image_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_image_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_image_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/image.proto b/tools/vendor/github.com/containerd/containerd/api/events/image.proto
new file mode 100644
index 000000000..c09c7384f
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/image.proto
@@ -0,0 +1,38 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.services.images.v1;
+
+import "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+option (containerd.plugin.fieldpath_all) = true;
+
+message ImageCreate {
+ string name = 1;
+  map<string, string> labels = 2;
+}
+
+message ImageUpdate {
+ string name = 1;
+  map<string, string> labels = 2;
+}
+
+message ImageDelete {
+ string name = 1;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/image_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/image_fieldpath.pb.go
new file mode 100644
index 000000000..2a56fcf9d
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/image_fieldpath.pb.go
@@ -0,0 +1,62 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/image.proto
+package events
+
+import (
+ strings "strings"
+)
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ImageCreate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ case "labels":
+ // Labels fields have been special-cased by name. If this breaks,
+ // add better special casing to fieldpath plugin.
+ if len(m.Labels) == 0 {
+ return "", false
+ }
+ value, ok := m.Labels[strings.Join(fieldpath[1:], ".")]
+ return value, ok
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ImageUpdate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ case "labels":
+ // Labels fields have been special-cased by name. If this breaks,
+ // add better special casing to fieldpath plugin.
+ if len(m.Labels) == 0 {
+ return "", false
+ }
+ value, ok := m.Labels[strings.Join(fieldpath[1:], ".")]
+ return value, ok
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *ImageDelete) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/namespace.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/namespace.pb.go
new file mode 100644
index 000000000..801c1219e
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/namespace.pb.go
@@ -0,0 +1,330 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/namespace.proto
+
+package events
+
+import (
+ _ "github.com/containerd/containerd/protobuf/plugin"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type NamespaceCreate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Labels map[string]string `protobuf:"bytes,2,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *NamespaceCreate) Reset() {
+ *x = NamespaceCreate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *NamespaceCreate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*NamespaceCreate) ProtoMessage() {}
+
+func (x *NamespaceCreate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use NamespaceCreate.ProtoReflect.Descriptor instead.
+func (*NamespaceCreate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_namespace_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *NamespaceCreate) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *NamespaceCreate) GetLabels() map[string]string {
+ if x != nil {
+ return x.Labels
+ }
+ return nil
+}
+
+type NamespaceUpdate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Labels map[string]string `protobuf:"bytes,2,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *NamespaceUpdate) Reset() {
+ *x = NamespaceUpdate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *NamespaceUpdate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*NamespaceUpdate) ProtoMessage() {}
+
+func (x *NamespaceUpdate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use NamespaceUpdate.ProtoReflect.Descriptor instead.
+func (*NamespaceUpdate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_namespace_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *NamespaceUpdate) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *NamespaceUpdate) GetLabels() map[string]string {
+ if x != nil {
+ return x.Labels
+ }
+ return nil
+}
+
+type NamespaceDelete struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+}
+
+func (x *NamespaceDelete) Reset() {
+ *x = NamespaceDelete{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *NamespaceDelete) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*NamespaceDelete) ProtoMessage() {}
+
+func (x *NamespaceDelete) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use NamespaceDelete.ProtoReflect.Descriptor instead.
+func (*NamespaceDelete) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_namespace_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *NamespaceDelete) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+var File_github_com_containerd_containerd_api_events_namespace_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_namespace_proto_rawDesc = []byte{
+ 0x0a, 0x3b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x6e, 0x61,
+ 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x11, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73,
+ 0x1a, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x6c, 0x75, 0x67,
+ 0x69, 0x6e, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x70, 0x61, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f,
+ 0x74, 0x6f, 0x22, 0xa8, 0x01, 0x0a, 0x0f, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65,
+ 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x46, 0x0a, 0x06, 0x6c, 0x61,
+ 0x62, 0x65, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2e, 0x4e,
+ 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x2e, 0x4c,
+ 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x6c, 0x61, 0x62, 0x65,
+ 0x6c, 0x73, 0x1a, 0x39, 0x0a, 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72,
+ 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03,
+ 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xa8, 0x01,
+ 0x0a, 0x0f, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x55, 0x70, 0x64, 0x61, 0x74,
+ 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x46, 0x0a, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x18,
+ 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2e, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70,
+ 0x61, 0x63, 0x65, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x1a, 0x39, 0x0a,
+ 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03,
+ 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14,
+ 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76,
+ 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x25, 0x0a, 0x0f, 0x4e, 0x61, 0x6d, 0x65,
+ 0x73, 0x70, 0x61, 0x63, 0x65, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e,
+ 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x42,
+ 0x38, 0x5a, 0x32, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b, 0x65,
+ 0x76, 0x65, 0x6e, 0x74, 0x73, 0xa0, 0xf4, 0x1e, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
+ 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_namespace_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_namespace_proto_rawDescData = file_github_com_containerd_containerd_api_events_namespace_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_namespace_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_namespace_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_namespace_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_namespace_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_namespace_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
+var file_github_com_containerd_containerd_api_events_namespace_proto_goTypes = []interface{}{
+ (*NamespaceCreate)(nil), // 0: containerd.events.NamespaceCreate
+ (*NamespaceUpdate)(nil), // 1: containerd.events.NamespaceUpdate
+ (*NamespaceDelete)(nil), // 2: containerd.events.NamespaceDelete
+ nil, // 3: containerd.events.NamespaceCreate.LabelsEntry
+ nil, // 4: containerd.events.NamespaceUpdate.LabelsEntry
+}
+var file_github_com_containerd_containerd_api_events_namespace_proto_depIdxs = []int32{
+ 3, // 0: containerd.events.NamespaceCreate.labels:type_name -> containerd.events.NamespaceCreate.LabelsEntry
+ 4, // 1: containerd.events.NamespaceUpdate.labels:type_name -> containerd.events.NamespaceUpdate.LabelsEntry
+ 2, // [2:2] is the sub-list for method output_type
+ 2, // [2:2] is the sub-list for method input_type
+ 2, // [2:2] is the sub-list for extension type_name
+ 2, // [2:2] is the sub-list for extension extendee
+ 0, // [0:2] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_namespace_proto_init() }
+func file_github_com_containerd_containerd_api_events_namespace_proto_init() {
+ if File_github_com_containerd_containerd_api_events_namespace_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*NamespaceCreate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*NamespaceUpdate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*NamespaceDelete); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_namespace_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 5,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_namespace_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_namespace_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_namespace_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_namespace_proto = out.File
+ file_github_com_containerd_containerd_api_events_namespace_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_namespace_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_namespace_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/namespace.proto b/tools/vendor/github.com/containerd/containerd/api/events/namespace.proto
new file mode 100644
index 000000000..9bae531d7
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/namespace.proto
@@ -0,0 +1,38 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.events;
+
+import "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+option (containerd.plugin.fieldpath_all) = true;
+
+message NamespaceCreate {
+ string name = 1;
+  map<string, string> labels = 2;
+}
+
+message NamespaceUpdate {
+ string name = 1;
+  map<string, string> labels = 2;
+}
+
+message NamespaceDelete {
+ string name = 1;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/namespace_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/namespace_fieldpath.pb.go
new file mode 100644
index 000000000..93d20a676
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/namespace_fieldpath.pb.go
@@ -0,0 +1,62 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/namespace.proto
+package events
+
+import (
+ strings "strings"
+)
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *NamespaceCreate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ case "labels":
+ // Labels fields have been special-cased by name. If this breaks,
+ // add better special casing to fieldpath plugin.
+ if len(m.Labels) == 0 {
+ return "", false
+ }
+ value, ok := m.Labels[strings.Join(fieldpath[1:], ".")]
+ return value, ok
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *NamespaceUpdate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ case "labels":
+ // Labels fields have been special-cased by name. If this breaks,
+ // add better special casing to fieldpath plugin.
+ if len(m.Labels) == 0 {
+ return "", false
+ }
+ value, ok := m.Labels[strings.Join(fieldpath[1:], ".")]
+ return value, ok
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *NamespaceDelete) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/sandbox.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/sandbox.pb.go
new file mode 100644
index 000000000..08f5e70e4
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/sandbox.pb.go
@@ -0,0 +1,316 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/sandbox.proto
+
+package events
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ timestamppb "google.golang.org/protobuf/types/known/timestamppb"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type SandboxCreate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *SandboxCreate) Reset() {
+ *x = SandboxCreate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SandboxCreate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SandboxCreate) ProtoMessage() {}
+
+func (x *SandboxCreate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SandboxCreate.ProtoReflect.Descriptor instead.
+func (*SandboxCreate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *SandboxCreate) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type SandboxStart struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *SandboxStart) Reset() {
+ *x = SandboxStart{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SandboxStart) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SandboxStart) ProtoMessage() {}
+
+func (x *SandboxStart) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SandboxStart.ProtoReflect.Descriptor instead.
+func (*SandboxStart) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *SandboxStart) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type SandboxExit struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ ExitStatus uint32 `protobuf:"varint,2,opt,name=exit_status,json=exitStatus,proto3" json:"exit_status,omitempty"`
+ ExitedAt *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=exited_at,json=exitedAt,proto3" json:"exited_at,omitempty"`
+}
+
+func (x *SandboxExit) Reset() {
+ *x = SandboxExit{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SandboxExit) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SandboxExit) ProtoMessage() {}
+
+func (x *SandboxExit) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SandboxExit.ProtoReflect.Descriptor instead.
+func (*SandboxExit) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *SandboxExit) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *SandboxExit) GetExitStatus() uint32 {
+ if x != nil {
+ return x.ExitStatus
+ }
+ return 0
+}
+
+func (x *SandboxExit) GetExitedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.ExitedAt
+ }
+ return nil
+}
+
+var File_github_com_containerd_containerd_api_events_sandbox_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_sandbox_proto_rawDesc = []byte{
+ 0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x73, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x11, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x1a, 0x1f,
+ 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f,
+ 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22,
+ 0x2e, 0x0a, 0x0d, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65,
+ 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x22,
+ 0x2d, 0x0a, 0x0c, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12,
+ 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x22, 0x86,
+ 0x01, 0x0a, 0x0b, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x45, 0x78, 0x69, 0x74, 0x12, 0x1d,
+ 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x12, 0x1f, 0x0a,
+ 0x0b, 0x65, 0x78, 0x69, 0x74, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x02, 0x20, 0x01,
+ 0x28, 0x0d, 0x52, 0x0a, 0x65, 0x78, 0x69, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x37,
+ 0x0a, 0x09, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28,
+ 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
+ 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x65,
+ 0x78, 0x69, 0x74, 0x65, 0x64, 0x41, 0x74, 0x42, 0x34, 0x5a, 0x32, 0x67, 0x69, 0x74, 0x68, 0x75,
+ 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f,
+ 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x62, 0x06, 0x70,
+ 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescData = file_github_com_containerd_containerd_api_events_sandbox_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_sandbox_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
+var file_github_com_containerd_containerd_api_events_sandbox_proto_goTypes = []interface{}{
+ (*SandboxCreate)(nil), // 0: containerd.events.SandboxCreate
+ (*SandboxStart)(nil), // 1: containerd.events.SandboxStart
+ (*SandboxExit)(nil), // 2: containerd.events.SandboxExit
+ (*timestamppb.Timestamp)(nil), // 3: google.protobuf.Timestamp
+}
+var file_github_com_containerd_containerd_api_events_sandbox_proto_depIdxs = []int32{
+ 3, // 0: containerd.events.SandboxExit.exited_at:type_name -> google.protobuf.Timestamp
+ 1, // [1:1] is the sub-list for method output_type
+ 1, // [1:1] is the sub-list for method input_type
+ 1, // [1:1] is the sub-list for extension type_name
+ 1, // [1:1] is the sub-list for extension extendee
+ 0, // [0:1] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_sandbox_proto_init() }
+func file_github_com_containerd_containerd_api_events_sandbox_proto_init() {
+ if File_github_com_containerd_containerd_api_events_sandbox_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SandboxCreate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SandboxStart); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SandboxExit); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_sandbox_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 3,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_sandbox_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_sandbox_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_sandbox_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_sandbox_proto = out.File
+ file_github_com_containerd_containerd_api_events_sandbox_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_sandbox_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_sandbox_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/sandbox.proto b/tools/vendor/github.com/containerd/containerd/api/events/sandbox.proto
new file mode 100644
index 000000000..f1c5195e5
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/sandbox.proto
@@ -0,0 +1,37 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.events;
+
+import "google/protobuf/timestamp.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+
+message SandboxCreate {
+ string sandbox_id = 1;
+}
+
+message SandboxStart {
+ string sandbox_id = 1;
+}
+
+message SandboxExit {
+ string sandbox_id = 1;
+ uint32 exit_status = 2;
+ google.protobuf.Timestamp exited_at = 3;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/sandbox_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/sandbox_fieldpath.pb.go
new file mode 100644
index 000000000..5afb99457
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/sandbox_fieldpath.pb.go
@@ -0,0 +1,44 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/sandbox.proto
+package events
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *SandboxCreate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "sandbox_id":
+ return string(m.SandboxID), len(m.SandboxID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *SandboxStart) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "sandbox_id":
+ return string(m.SandboxID), len(m.SandboxID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *SandboxExit) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ // unhandled: exit_status
+ // unhandled: exited_at
+ case "sandbox_id":
+ return string(m.SandboxID), len(m.SandboxID) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/snapshot.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/snapshot.pb.go
new file mode 100644
index 000000000..e074f9df2
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/snapshot.pb.go
@@ -0,0 +1,342 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/snapshot.proto
+
+package events
+
+import (
+ _ "github.com/containerd/containerd/protobuf/plugin"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type SnapshotPrepare struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
+ Parent string `protobuf:"bytes,2,opt,name=parent,proto3" json:"parent,omitempty"`
+ Snapshotter string `protobuf:"bytes,5,opt,name=snapshotter,proto3" json:"snapshotter,omitempty"`
+}
+
+func (x *SnapshotPrepare) Reset() {
+ *x = SnapshotPrepare{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SnapshotPrepare) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SnapshotPrepare) ProtoMessage() {}
+
+func (x *SnapshotPrepare) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SnapshotPrepare.ProtoReflect.Descriptor instead.
+func (*SnapshotPrepare) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *SnapshotPrepare) GetKey() string {
+ if x != nil {
+ return x.Key
+ }
+ return ""
+}
+
+func (x *SnapshotPrepare) GetParent() string {
+ if x != nil {
+ return x.Parent
+ }
+ return ""
+}
+
+func (x *SnapshotPrepare) GetSnapshotter() string {
+ if x != nil {
+ return x.Snapshotter
+ }
+ return ""
+}
+
+type SnapshotCommit struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
+ Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
+ Snapshotter string `protobuf:"bytes,5,opt,name=snapshotter,proto3" json:"snapshotter,omitempty"`
+}
+
+func (x *SnapshotCommit) Reset() {
+ *x = SnapshotCommit{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SnapshotCommit) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SnapshotCommit) ProtoMessage() {}
+
+func (x *SnapshotCommit) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SnapshotCommit.ProtoReflect.Descriptor instead.
+func (*SnapshotCommit) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *SnapshotCommit) GetKey() string {
+ if x != nil {
+ return x.Key
+ }
+ return ""
+}
+
+func (x *SnapshotCommit) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *SnapshotCommit) GetSnapshotter() string {
+ if x != nil {
+ return x.Snapshotter
+ }
+ return ""
+}
+
+type SnapshotRemove struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
+ Snapshotter string `protobuf:"bytes,5,opt,name=snapshotter,proto3" json:"snapshotter,omitempty"`
+}
+
+func (x *SnapshotRemove) Reset() {
+ *x = SnapshotRemove{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SnapshotRemove) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SnapshotRemove) ProtoMessage() {}
+
+func (x *SnapshotRemove) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SnapshotRemove.ProtoReflect.Descriptor instead.
+func (*SnapshotRemove) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *SnapshotRemove) GetKey() string {
+ if x != nil {
+ return x.Key
+ }
+ return ""
+}
+
+func (x *SnapshotRemove) GetSnapshotter() string {
+ if x != nil {
+ return x.Snapshotter
+ }
+ return ""
+}
+
+var File_github_com_containerd_containerd_api_events_snapshot_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_snapshot_proto_rawDesc = []byte{
+ 0x0a, 0x3a, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x73, 0x6e,
+ 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x11, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x1a,
+ 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x64, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69,
+ 0x6e, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x70, 0x61, 0x74, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74,
+ 0x6f, 0x22, 0x5d, 0x0a, 0x0f, 0x53, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x50, 0x72, 0x65,
+ 0x70, 0x61, 0x72, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x12, 0x20,
+ 0x0a, 0x0b, 0x73, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x74, 0x65, 0x72, 0x18, 0x05, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x74, 0x65, 0x72,
+ 0x22, 0x58, 0x0a, 0x0e, 0x53, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x43, 0x6f, 0x6d, 0x6d,
+ 0x69, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x03, 0x6b, 0x65, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x20, 0x0a, 0x0b, 0x73, 0x6e, 0x61, 0x70,
+ 0x73, 0x68, 0x6f, 0x74, 0x74, 0x65, 0x72, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73,
+ 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x74, 0x65, 0x72, 0x22, 0x44, 0x0a, 0x0e, 0x53, 0x6e,
+ 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x12, 0x10, 0x0a, 0x03,
+ 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x20,
+ 0x0a, 0x0b, 0x73, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x74, 0x65, 0x72, 0x18, 0x05, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x6e, 0x61, 0x70, 0x73, 0x68, 0x6f, 0x74, 0x74, 0x65, 0x72,
+ 0x42, 0x38, 0x5a, 0x32, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69,
+ 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b,
+ 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0xa0, 0xf4, 0x1e, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
+ 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescData = file_github_com_containerd_containerd_api_events_snapshot_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_snapshot_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
+var file_github_com_containerd_containerd_api_events_snapshot_proto_goTypes = []interface{}{
+ (*SnapshotPrepare)(nil), // 0: containerd.events.SnapshotPrepare
+ (*SnapshotCommit)(nil), // 1: containerd.events.SnapshotCommit
+ (*SnapshotRemove)(nil), // 2: containerd.events.SnapshotRemove
+}
+var file_github_com_containerd_containerd_api_events_snapshot_proto_depIdxs = []int32{
+ 0, // [0:0] is the sub-list for method output_type
+ 0, // [0:0] is the sub-list for method input_type
+ 0, // [0:0] is the sub-list for extension type_name
+ 0, // [0:0] is the sub-list for extension extendee
+ 0, // [0:0] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_snapshot_proto_init() }
+func file_github_com_containerd_containerd_api_events_snapshot_proto_init() {
+ if File_github_com_containerd_containerd_api_events_snapshot_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SnapshotPrepare); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SnapshotCommit); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SnapshotRemove); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_snapshot_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 3,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_snapshot_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_snapshot_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_snapshot_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_snapshot_proto = out.File
+ file_github_com_containerd_containerd_api_events_snapshot_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_snapshot_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_snapshot_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/snapshot.proto b/tools/vendor/github.com/containerd/containerd/api/events/snapshot.proto
new file mode 100644
index 000000000..effd86913
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/snapshot.proto
@@ -0,0 +1,41 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.events;
+
+import "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+option (containerd.plugin.fieldpath_all) = true;
+
+message SnapshotPrepare {
+ string key = 1;
+ string parent = 2;
+ string snapshotter = 5;
+}
+
+message SnapshotCommit {
+ string key = 1;
+ string name = 2;
+ string snapshotter = 5;
+}
+
+message SnapshotRemove {
+ string key = 1;
+ string snapshotter = 5;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/snapshot_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/snapshot_fieldpath.pb.go
new file mode 100644
index 000000000..c89d1ab4a
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/snapshot_fieldpath.pb.go
@@ -0,0 +1,52 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/snapshot.proto
+package events
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *SnapshotPrepare) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "key":
+ return string(m.Key), len(m.Key) > 0
+ case "parent":
+ return string(m.Parent), len(m.Parent) > 0
+ case "snapshotter":
+ return string(m.Snapshotter), len(m.Snapshotter) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *SnapshotCommit) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "key":
+ return string(m.Key), len(m.Key) > 0
+ case "name":
+ return string(m.Name), len(m.Name) > 0
+ case "snapshotter":
+ return string(m.Snapshotter), len(m.Snapshotter) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *SnapshotRemove) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "key":
+ return string(m.Key), len(m.Key) > 0
+ case "snapshotter":
+ return string(m.Snapshotter), len(m.Snapshotter) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/task.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/task.pb.go
new file mode 100644
index 000000000..33fd521b6
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/task.pb.go
@@ -0,0 +1,1020 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/events/task.proto
+
+package events
+
+import (
+ types "github.com/containerd/containerd/api/types"
+ _ "github.com/containerd/containerd/protobuf/plugin"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ timestamppb "google.golang.org/protobuf/types/known/timestamppb"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type TaskCreate struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ Bundle string `protobuf:"bytes,2,opt,name=bundle,proto3" json:"bundle,omitempty"`
+ Rootfs []*types.Mount `protobuf:"bytes,3,rep,name=rootfs,proto3" json:"rootfs,omitempty"`
+ IO *TaskIO `protobuf:"bytes,4,opt,name=io,proto3" json:"io,omitempty"`
+ Checkpoint string `protobuf:"bytes,5,opt,name=checkpoint,proto3" json:"checkpoint,omitempty"`
+ Pid uint32 `protobuf:"varint,6,opt,name=pid,proto3" json:"pid,omitempty"`
+}
+
+func (x *TaskCreate) Reset() {
+ *x = TaskCreate{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskCreate) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskCreate) ProtoMessage() {}
+
+func (x *TaskCreate) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskCreate.ProtoReflect.Descriptor instead.
+func (*TaskCreate) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *TaskCreate) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskCreate) GetBundle() string {
+ if x != nil {
+ return x.Bundle
+ }
+ return ""
+}
+
+func (x *TaskCreate) GetRootfs() []*types.Mount {
+ if x != nil {
+ return x.Rootfs
+ }
+ return nil
+}
+
+func (x *TaskCreate) GetIO() *TaskIO {
+ if x != nil {
+ return x.IO
+ }
+ return nil
+}
+
+func (x *TaskCreate) GetCheckpoint() string {
+ if x != nil {
+ return x.Checkpoint
+ }
+ return ""
+}
+
+func (x *TaskCreate) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+type TaskStart struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ Pid uint32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
+}
+
+func (x *TaskStart) Reset() {
+ *x = TaskStart{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskStart) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskStart) ProtoMessage() {}
+
+func (x *TaskStart) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskStart.ProtoReflect.Descriptor instead.
+func (*TaskStart) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{1}
+}
+
+func (x *TaskStart) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskStart) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+type TaskDelete struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ Pid uint32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
+ ExitStatus uint32 `protobuf:"varint,3,opt,name=exit_status,json=exitStatus,proto3" json:"exit_status,omitempty"`
+ ExitedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=exited_at,json=exitedAt,proto3" json:"exited_at,omitempty"`
+ // id is the specific exec. By default if omitted will be `""` thus matches
+ // the init exec of the task matching `container_id`.
+ ID string `protobuf:"bytes,5,opt,name=id,proto3" json:"id,omitempty"`
+}
+
+func (x *TaskDelete) Reset() {
+ *x = TaskDelete{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskDelete) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskDelete) ProtoMessage() {}
+
+func (x *TaskDelete) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskDelete.ProtoReflect.Descriptor instead.
+func (*TaskDelete) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *TaskDelete) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskDelete) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+func (x *TaskDelete) GetExitStatus() uint32 {
+ if x != nil {
+ return x.ExitStatus
+ }
+ return 0
+}
+
+func (x *TaskDelete) GetExitedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.ExitedAt
+ }
+ return nil
+}
+
+func (x *TaskDelete) GetID() string {
+ if x != nil {
+ return x.ID
+ }
+ return ""
+}
+
+type TaskIO struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Stdin string `protobuf:"bytes,1,opt,name=stdin,proto3" json:"stdin,omitempty"`
+ Stdout string `protobuf:"bytes,2,opt,name=stdout,proto3" json:"stdout,omitempty"`
+ Stderr string `protobuf:"bytes,3,opt,name=stderr,proto3" json:"stderr,omitempty"`
+ Terminal bool `protobuf:"varint,4,opt,name=terminal,proto3" json:"terminal,omitempty"`
+}
+
+func (x *TaskIO) Reset() {
+ *x = TaskIO{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskIO) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskIO) ProtoMessage() {}
+
+func (x *TaskIO) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[3]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskIO.ProtoReflect.Descriptor instead.
+func (*TaskIO) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{3}
+}
+
+func (x *TaskIO) GetStdin() string {
+ if x != nil {
+ return x.Stdin
+ }
+ return ""
+}
+
+func (x *TaskIO) GetStdout() string {
+ if x != nil {
+ return x.Stdout
+ }
+ return ""
+}
+
+func (x *TaskIO) GetStderr() string {
+ if x != nil {
+ return x.Stderr
+ }
+ return ""
+}
+
+func (x *TaskIO) GetTerminal() bool {
+ if x != nil {
+ return x.Terminal
+ }
+ return false
+}
+
+type TaskExit struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
+ Pid uint32 `protobuf:"varint,3,opt,name=pid,proto3" json:"pid,omitempty"`
+ ExitStatus uint32 `protobuf:"varint,4,opt,name=exit_status,json=exitStatus,proto3" json:"exit_status,omitempty"`
+ ExitedAt *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=exited_at,json=exitedAt,proto3" json:"exited_at,omitempty"`
+}
+
+func (x *TaskExit) Reset() {
+ *x = TaskExit{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[4]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskExit) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskExit) ProtoMessage() {}
+
+func (x *TaskExit) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[4]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskExit.ProtoReflect.Descriptor instead.
+func (*TaskExit) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{4}
+}
+
+func (x *TaskExit) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskExit) GetID() string {
+ if x != nil {
+ return x.ID
+ }
+ return ""
+}
+
+func (x *TaskExit) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+func (x *TaskExit) GetExitStatus() uint32 {
+ if x != nil {
+ return x.ExitStatus
+ }
+ return 0
+}
+
+func (x *TaskExit) GetExitedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.ExitedAt
+ }
+ return nil
+}
+
+type TaskOOM struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+}
+
+func (x *TaskOOM) Reset() {
+ *x = TaskOOM{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[5]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskOOM) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskOOM) ProtoMessage() {}
+
+func (x *TaskOOM) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[5]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskOOM.ProtoReflect.Descriptor instead.
+func (*TaskOOM) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{5}
+}
+
+func (x *TaskOOM) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+type TaskExecAdded struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ ExecID string `protobuf:"bytes,2,opt,name=exec_id,json=execId,proto3" json:"exec_id,omitempty"`
+}
+
+func (x *TaskExecAdded) Reset() {
+ *x = TaskExecAdded{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[6]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskExecAdded) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskExecAdded) ProtoMessage() {}
+
+func (x *TaskExecAdded) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[6]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskExecAdded.ProtoReflect.Descriptor instead.
+func (*TaskExecAdded) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{6}
+}
+
+func (x *TaskExecAdded) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskExecAdded) GetExecID() string {
+ if x != nil {
+ return x.ExecID
+ }
+ return ""
+}
+
+type TaskExecStarted struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ ExecID string `protobuf:"bytes,2,opt,name=exec_id,json=execId,proto3" json:"exec_id,omitempty"`
+ Pid uint32 `protobuf:"varint,3,opt,name=pid,proto3" json:"pid,omitempty"`
+}
+
+func (x *TaskExecStarted) Reset() {
+ *x = TaskExecStarted{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[7]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskExecStarted) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskExecStarted) ProtoMessage() {}
+
+func (x *TaskExecStarted) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[7]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskExecStarted.ProtoReflect.Descriptor instead.
+func (*TaskExecStarted) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{7}
+}
+
+func (x *TaskExecStarted) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskExecStarted) GetExecID() string {
+ if x != nil {
+ return x.ExecID
+ }
+ return ""
+}
+
+func (x *TaskExecStarted) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+type TaskPaused struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+}
+
+func (x *TaskPaused) Reset() {
+ *x = TaskPaused{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[8]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskPaused) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskPaused) ProtoMessage() {}
+
+func (x *TaskPaused) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[8]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskPaused.ProtoReflect.Descriptor instead.
+func (*TaskPaused) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{8}
+}
+
+func (x *TaskPaused) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+type TaskResumed struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+}
+
+func (x *TaskResumed) Reset() {
+ *x = TaskResumed{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[9]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskResumed) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskResumed) ProtoMessage() {}
+
+func (x *TaskResumed) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[9]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskResumed.ProtoReflect.Descriptor instead.
+func (*TaskResumed) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{9}
+}
+
+func (x *TaskResumed) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+type TaskCheckpointed struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
+ Checkpoint string `protobuf:"bytes,2,opt,name=checkpoint,proto3" json:"checkpoint,omitempty"`
+}
+
+func (x *TaskCheckpointed) Reset() {
+ *x = TaskCheckpointed{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[10]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *TaskCheckpointed) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*TaskCheckpointed) ProtoMessage() {}
+
+func (x *TaskCheckpointed) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_events_task_proto_msgTypes[10]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use TaskCheckpointed.ProtoReflect.Descriptor instead.
+func (*TaskCheckpointed) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP(), []int{10}
+}
+
+func (x *TaskCheckpointed) GetContainerID() string {
+ if x != nil {
+ return x.ContainerID
+ }
+ return ""
+}
+
+func (x *TaskCheckpointed) GetCheckpoint() string {
+ if x != nil {
+ return x.Checkpoint
+ }
+ return ""
+}
+
+var File_github_com_containerd_containerd_api_events_task_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_events_task_proto_rawDesc = []byte{
+ 0x0a, 0x36, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x2f, 0x74, 0x61,
+ 0x73, 0x6b, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x11, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69,
+ 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x1a, 0x1f, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d,
+ 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x36, 0x67, 0x69,
+ 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61,
+ 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x2e, 0x70,
+ 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
+ 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f,
+ 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x70, 0x61, 0x74, 0x68,
+ 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xd5, 0x01, 0x0a, 0x0a, 0x54, 0x61, 0x73, 0x6b, 0x43,
+ 0x72, 0x65, 0x61, 0x74, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x64, 0x12, 0x16, 0x0a, 0x06, 0x62, 0x75, 0x6e, 0x64,
+ 0x6c, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65,
+ 0x12, 0x2f, 0x0a, 0x06, 0x72, 0x6f, 0x6f, 0x74, 0x66, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b,
+ 0x32, 0x17, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79,
+ 0x70, 0x65, 0x73, 0x2e, 0x4d, 0x6f, 0x75, 0x6e, 0x74, 0x52, 0x06, 0x72, 0x6f, 0x6f, 0x74, 0x66,
+ 0x73, 0x12, 0x29, 0x0a, 0x02, 0x69, 0x6f, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e,
+ 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x65, 0x76, 0x65, 0x6e, 0x74,
+ 0x73, 0x2e, 0x54, 0x61, 0x73, 0x6b, 0x49, 0x4f, 0x52, 0x02, 0x69, 0x6f, 0x12, 0x1e, 0x0a, 0x0a,
+ 0x63, 0x68, 0x65, 0x63, 0x6b, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09,
+ 0x52, 0x0a, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x12, 0x10, 0x0a, 0x03,
+ 0x70, 0x69, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x03, 0x70, 0x69, 0x64, 0x22, 0x40,
+ 0x0a, 0x09, 0x54, 0x61, 0x73, 0x6b, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x64, 0x12, 0x10,
+ 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x03, 0x70, 0x69, 0x64,
+ 0x22, 0xab, 0x01, 0x0a, 0x0a, 0x54, 0x61, 0x73, 0x6b, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x12,
+ 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x49, 0x64, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52,
+ 0x03, 0x70, 0x69, 0x64, 0x12, 0x1f, 0x0a, 0x0b, 0x65, 0x78, 0x69, 0x74, 0x5f, 0x73, 0x74, 0x61,
+ 0x74, 0x75, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x65, 0x78, 0x69, 0x74, 0x53,
+ 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x37, 0x0a, 0x09, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x5f,
+ 0x61, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
+ 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x41, 0x74, 0x12, 0x0e,
+ 0x0a, 0x02, 0x69, 0x64, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x22, 0x6a,
+ 0x0a, 0x06, 0x54, 0x61, 0x73, 0x6b, 0x49, 0x4f, 0x12, 0x14, 0x0a, 0x05, 0x73, 0x74, 0x64, 0x69,
+ 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x73, 0x74, 0x64, 0x69, 0x6e, 0x12, 0x16,
+ 0x0a, 0x06, 0x73, 0x74, 0x64, 0x6f, 0x75, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06,
+ 0x73, 0x74, 0x64, 0x6f, 0x75, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x64, 0x65, 0x72, 0x72,
+ 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x64, 0x65, 0x72, 0x72, 0x12, 0x1a,
+ 0x0a, 0x08, 0x74, 0x65, 0x72, 0x6d, 0x69, 0x6e, 0x61, 0x6c, 0x18, 0x04, 0x20, 0x01, 0x28, 0x08,
+ 0x52, 0x08, 0x74, 0x65, 0x72, 0x6d, 0x69, 0x6e, 0x61, 0x6c, 0x22, 0xa9, 0x01, 0x0a, 0x08, 0x54,
+ 0x61, 0x73, 0x6b, 0x45, 0x78, 0x69, 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x64, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69,
+ 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1f, 0x0a, 0x0b,
+ 0x65, 0x78, 0x69, 0x74, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28,
+ 0x0d, 0x52, 0x0a, 0x65, 0x78, 0x69, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x37, 0x0a,
+ 0x09, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b,
+ 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
+ 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x65, 0x78,
+ 0x69, 0x74, 0x65, 0x64, 0x41, 0x74, 0x22, 0x2c, 0x0a, 0x07, 0x54, 0x61, 0x73, 0x6b, 0x4f, 0x4f,
+ 0x4d, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x5f, 0x69,
+ 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x49, 0x64, 0x22, 0x4b, 0x0a, 0x0d, 0x54, 0x61, 0x73, 0x6b, 0x45, 0x78, 0x65, 0x63,
+ 0x41, 0x64, 0x64, 0x65, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x65, 0x78, 0x65, 0x63,
+ 0x5f, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x65, 0x78, 0x65, 0x63, 0x49,
+ 0x64, 0x22, 0x5f, 0x0a, 0x0f, 0x54, 0x61, 0x73, 0x6b, 0x45, 0x78, 0x65, 0x63, 0x53, 0x74, 0x61,
+ 0x72, 0x74, 0x65, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x65, 0x78, 0x65, 0x63, 0x5f,
+ 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x65, 0x78, 0x65, 0x63, 0x49, 0x64,
+ 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x03, 0x70,
+ 0x69, 0x64, 0x22, 0x2f, 0x0a, 0x0a, 0x54, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x75, 0x73, 0x65, 0x64,
+ 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x5f, 0x69, 0x64,
+ 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x49, 0x64, 0x22, 0x30, 0x0a, 0x0b, 0x54, 0x61, 0x73, 0x6b, 0x52, 0x65, 0x73, 0x75, 0x6d,
+ 0x65, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x5f,
+ 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69,
+ 0x6e, 0x65, 0x72, 0x49, 0x64, 0x22, 0x55, 0x0a, 0x10, 0x54, 0x61, 0x73, 0x6b, 0x43, 0x68, 0x65,
+ 0x63, 0x6b, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x65, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x64, 0x12, 0x1e, 0x0a, 0x0a,
+ 0x63, 0x68, 0x65, 0x63, 0x6b, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,
+ 0x52, 0x0a, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x42, 0x38, 0x5a, 0x32,
+ 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x65, 0x76, 0x65, 0x6e, 0x74, 0x73, 0x3b, 0x65, 0x76, 0x65, 0x6e,
+ 0x74, 0x73, 0xa0, 0xf4, 0x1e, 0x01, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_events_task_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_events_task_proto_rawDescData = file_github_com_containerd_containerd_api_events_task_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_events_task_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_events_task_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_events_task_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_events_task_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_events_task_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_events_task_proto_msgTypes = make([]protoimpl.MessageInfo, 11)
+var file_github_com_containerd_containerd_api_events_task_proto_goTypes = []interface{}{
+ (*TaskCreate)(nil), // 0: containerd.events.TaskCreate
+ (*TaskStart)(nil), // 1: containerd.events.TaskStart
+ (*TaskDelete)(nil), // 2: containerd.events.TaskDelete
+ (*TaskIO)(nil), // 3: containerd.events.TaskIO
+ (*TaskExit)(nil), // 4: containerd.events.TaskExit
+ (*TaskOOM)(nil), // 5: containerd.events.TaskOOM
+ (*TaskExecAdded)(nil), // 6: containerd.events.TaskExecAdded
+ (*TaskExecStarted)(nil), // 7: containerd.events.TaskExecStarted
+ (*TaskPaused)(nil), // 8: containerd.events.TaskPaused
+ (*TaskResumed)(nil), // 9: containerd.events.TaskResumed
+ (*TaskCheckpointed)(nil), // 10: containerd.events.TaskCheckpointed
+ (*types.Mount)(nil), // 11: containerd.types.Mount
+ (*timestamppb.Timestamp)(nil), // 12: google.protobuf.Timestamp
+}
+var file_github_com_containerd_containerd_api_events_task_proto_depIdxs = []int32{
+ 11, // 0: containerd.events.TaskCreate.rootfs:type_name -> containerd.types.Mount
+ 3, // 1: containerd.events.TaskCreate.io:type_name -> containerd.events.TaskIO
+ 12, // 2: containerd.events.TaskDelete.exited_at:type_name -> google.protobuf.Timestamp
+ 12, // 3: containerd.events.TaskExit.exited_at:type_name -> google.protobuf.Timestamp
+ 4, // [4:4] is the sub-list for method output_type
+ 4, // [4:4] is the sub-list for method input_type
+ 4, // [4:4] is the sub-list for extension type_name
+ 4, // [4:4] is the sub-list for extension extendee
+ 0, // [0:4] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_events_task_proto_init() }
+func file_github_com_containerd_containerd_api_events_task_proto_init() {
+ if File_github_com_containerd_containerd_api_events_task_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskCreate); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskStart); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskDelete); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskIO); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskExit); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskOOM); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskExecAdded); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskExecStarted); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskPaused); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskResumed); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_events_task_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*TaskCheckpointed); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_events_task_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 11,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_events_task_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_events_task_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_events_task_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_events_task_proto = out.File
+ file_github_com_containerd_containerd_api_events_task_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_events_task_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_events_task_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/task.proto b/tools/vendor/github.com/containerd/containerd/api/events/task.proto
new file mode 100644
index 000000000..238564dfe
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/task.proto
@@ -0,0 +1,93 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.events;
+
+import "google/protobuf/timestamp.proto";
+import "github.com/containerd/containerd/api/types/mount.proto";
+import "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
+
+option go_package = "github.com/containerd/containerd/api/events;events";
+option (containerd.plugin.fieldpath_all) = true;
+
+message TaskCreate {
+ string container_id = 1;
+ string bundle = 2;
+ repeated containerd.types.Mount rootfs = 3;
+ TaskIO io = 4;
+ string checkpoint = 5;
+ uint32 pid = 6;
+}
+
+message TaskStart {
+ string container_id = 1;
+ uint32 pid = 2;
+}
+
+message TaskDelete {
+ string container_id = 1;
+ uint32 pid = 2;
+ uint32 exit_status = 3;
+ google.protobuf.Timestamp exited_at = 4;
+ // id is the specific exec. By default if omitted will be `""` thus matches
+ // the init exec of the task matching `container_id`.
+ string id = 5;
+}
+
+message TaskIO {
+ string stdin = 1;
+ string stdout = 2;
+ string stderr = 3;
+ bool terminal = 4;
+}
+
+message TaskExit {
+ string container_id = 1;
+ string id = 2;
+ uint32 pid = 3;
+ uint32 exit_status = 4;
+ google.protobuf.Timestamp exited_at = 5;
+}
+
+message TaskOOM {
+ string container_id = 1;
+}
+
+message TaskExecAdded {
+ string container_id = 1;
+ string exec_id = 2;
+}
+
+message TaskExecStarted {
+ string container_id = 1;
+ string exec_id = 2;
+ uint32 pid = 3;
+}
+
+message TaskPaused {
+ string container_id = 1;
+}
+
+message TaskResumed {
+ string container_id = 1;
+}
+
+message TaskCheckpointed {
+ string container_id = 1;
+ string checkpoint = 2;
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/events/task_fieldpath.pb.go b/tools/vendor/github.com/containerd/containerd/api/events/task_fieldpath.pb.go
new file mode 100644
index 000000000..633fc3a65
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/events/task_fieldpath.pb.go
@@ -0,0 +1,191 @@
+// Code generated by protoc-gen-go-fieldpath. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/events/task.proto
+package events
+
+import (
+ fmt "fmt"
+)
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskCreate) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ // unhandled: rootfs
+ // unhandled: pid
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ case "bundle":
+ return string(m.Bundle), len(m.Bundle) > 0
+ case "io":
+ // NOTE(stevvooe): This is probably not correct in many cases.
+ // We assume that the target message also implements the Field
+ // method, which isn't likely true in a lot of cases.
+ //
+ // If you have a broken build and have found this comment,
+ // you may be closer to a solution.
+ if m.IO == nil {
+ return "", false
+ }
+ return m.IO.Field(fieldpath[1:])
+ case "checkpoint":
+ return string(m.Checkpoint), len(m.Checkpoint) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskStart) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ // unhandled: pid
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskDelete) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ // unhandled: pid
+ // unhandled: exit_status
+ // unhandled: exited_at
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ case "id":
+ return string(m.ID), len(m.ID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskIO) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "stdin":
+ return string(m.Stdin), len(m.Stdin) > 0
+ case "stdout":
+ return string(m.Stdout), len(m.Stdout) > 0
+ case "stderr":
+ return string(m.Stderr), len(m.Stderr) > 0
+ case "terminal":
+ return fmt.Sprint(m.Terminal), true
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskExit) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ // unhandled: pid
+ // unhandled: exit_status
+ // unhandled: exited_at
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ case "id":
+ return string(m.ID), len(m.ID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskOOM) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskExecAdded) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ case "exec_id":
+ return string(m.ExecID), len(m.ExecID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskExecStarted) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ // unhandled: pid
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ case "exec_id":
+ return string(m.ExecID), len(m.ExecID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskPaused) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskResumed) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ }
+ return "", false
+}
+
+// Field returns the value for the given fieldpath as a string, if defined.
+// If the value is not defined, the second value will be false.
+func (m *TaskCheckpointed) Field(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+ switch fieldpath[0] {
+ case "container_id":
+ return string(m.ContainerID), len(m.ContainerID) > 0
+ case "checkpoint":
+ return string(m.Checkpoint), len(m.Checkpoint) > 0
+ }
+ return "", false
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/doc.go b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/doc.go
new file mode 100644
index 000000000..f960350c1
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/doc.go
@@ -0,0 +1,17 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package sandbox
diff --git a/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.pb.go b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.pb.go
new file mode 100644
index 000000000..5b3d4aa78
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.pb.go
@@ -0,0 +1,1480 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto
+
+package sandbox
+
+import (
+ types "github.com/containerd/containerd/api/types"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ anypb "google.golang.org/protobuf/types/known/anypb"
+ timestamppb "google.golang.org/protobuf/types/known/timestamppb"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type CreateSandboxRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ BundlePath string `protobuf:"bytes,2,opt,name=bundle_path,json=bundlePath,proto3" json:"bundle_path,omitempty"`
+ Rootfs []*types.Mount `protobuf:"bytes,3,rep,name=rootfs,proto3" json:"rootfs,omitempty"`
+ Options *anypb.Any `protobuf:"bytes,4,opt,name=options,proto3" json:"options,omitempty"`
+ NetnsPath string `protobuf:"bytes,5,opt,name=netns_path,json=netnsPath,proto3" json:"netns_path,omitempty"`
+}
+
+func (x *CreateSandboxRequest) Reset() {
+ *x = CreateSandboxRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *CreateSandboxRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*CreateSandboxRequest) ProtoMessage() {}
+
+func (x *CreateSandboxRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use CreateSandboxRequest.ProtoReflect.Descriptor instead.
+func (*CreateSandboxRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *CreateSandboxRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *CreateSandboxRequest) GetBundlePath() string {
+ if x != nil {
+ return x.BundlePath
+ }
+ return ""
+}
+
+func (x *CreateSandboxRequest) GetRootfs() []*types.Mount {
+ if x != nil {
+ return x.Rootfs
+ }
+ return nil
+}
+
+func (x *CreateSandboxRequest) GetOptions() *anypb.Any {
+ if x != nil {
+ return x.Options
+ }
+ return nil
+}
+
+func (x *CreateSandboxRequest) GetNetnsPath() string {
+ if x != nil {
+ return x.NetnsPath
+ }
+ return ""
+}
+
+type CreateSandboxResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+}
+
+func (x *CreateSandboxResponse) Reset() {
+ *x = CreateSandboxResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *CreateSandboxResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*CreateSandboxResponse) ProtoMessage() {}
+
+func (x *CreateSandboxResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use CreateSandboxResponse.ProtoReflect.Descriptor instead.
+func (*CreateSandboxResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{1}
+}
+
+type StartSandboxRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *StartSandboxRequest) Reset() {
+ *x = StartSandboxRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *StartSandboxRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*StartSandboxRequest) ProtoMessage() {}
+
+func (x *StartSandboxRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[2]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use StartSandboxRequest.ProtoReflect.Descriptor instead.
+func (*StartSandboxRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{2}
+}
+
+func (x *StartSandboxRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type StartSandboxResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Pid uint32 `protobuf:"varint,1,opt,name=pid,proto3" json:"pid,omitempty"`
+ CreatedAt *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
+}
+
+func (x *StartSandboxResponse) Reset() {
+ *x = StartSandboxResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *StartSandboxResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*StartSandboxResponse) ProtoMessage() {}
+
+func (x *StartSandboxResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[3]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use StartSandboxResponse.ProtoReflect.Descriptor instead.
+func (*StartSandboxResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{3}
+}
+
+func (x *StartSandboxResponse) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+func (x *StartSandboxResponse) GetCreatedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.CreatedAt
+ }
+ return nil
+}
+
+type PlatformRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *PlatformRequest) Reset() {
+ *x = PlatformRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[4]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *PlatformRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*PlatformRequest) ProtoMessage() {}
+
+func (x *PlatformRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[4]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use PlatformRequest.ProtoReflect.Descriptor instead.
+func (*PlatformRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{4}
+}
+
+func (x *PlatformRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type PlatformResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Platform *types.Platform `protobuf:"bytes,1,opt,name=platform,proto3" json:"platform,omitempty"`
+}
+
+func (x *PlatformResponse) Reset() {
+ *x = PlatformResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[5]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *PlatformResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*PlatformResponse) ProtoMessage() {}
+
+func (x *PlatformResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[5]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use PlatformResponse.ProtoReflect.Descriptor instead.
+func (*PlatformResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{5}
+}
+
+func (x *PlatformResponse) GetPlatform() *types.Platform {
+ if x != nil {
+ return x.Platform
+ }
+ return nil
+}
+
+type StopSandboxRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ TimeoutSecs uint32 `protobuf:"varint,2,opt,name=timeout_secs,json=timeoutSecs,proto3" json:"timeout_secs,omitempty"`
+}
+
+func (x *StopSandboxRequest) Reset() {
+ *x = StopSandboxRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[6]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *StopSandboxRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*StopSandboxRequest) ProtoMessage() {}
+
+func (x *StopSandboxRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[6]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use StopSandboxRequest.ProtoReflect.Descriptor instead.
+func (*StopSandboxRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{6}
+}
+
+func (x *StopSandboxRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *StopSandboxRequest) GetTimeoutSecs() uint32 {
+ if x != nil {
+ return x.TimeoutSecs
+ }
+ return 0
+}
+
+type StopSandboxResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+}
+
+func (x *StopSandboxResponse) Reset() {
+ *x = StopSandboxResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[7]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *StopSandboxResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*StopSandboxResponse) ProtoMessage() {}
+
+func (x *StopSandboxResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[7]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use StopSandboxResponse.ProtoReflect.Descriptor instead.
+func (*StopSandboxResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{7}
+}
+
+type UpdateSandboxRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ Resources *anypb.Any `protobuf:"bytes,2,opt,name=resources,proto3" json:"resources,omitempty"`
+ Annotations map[string]string `protobuf:"bytes,3,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *UpdateSandboxRequest) Reset() {
+ *x = UpdateSandboxRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[8]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *UpdateSandboxRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*UpdateSandboxRequest) ProtoMessage() {}
+
+func (x *UpdateSandboxRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[8]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use UpdateSandboxRequest.ProtoReflect.Descriptor instead.
+func (*UpdateSandboxRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{8}
+}
+
+func (x *UpdateSandboxRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *UpdateSandboxRequest) GetResources() *anypb.Any {
+ if x != nil {
+ return x.Resources
+ }
+ return nil
+}
+
+func (x *UpdateSandboxRequest) GetAnnotations() map[string]string {
+ if x != nil {
+ return x.Annotations
+ }
+ return nil
+}
+
+type WaitSandboxRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *WaitSandboxRequest) Reset() {
+ *x = WaitSandboxRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[9]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *WaitSandboxRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*WaitSandboxRequest) ProtoMessage() {}
+
+func (x *WaitSandboxRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[9]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use WaitSandboxRequest.ProtoReflect.Descriptor instead.
+func (*WaitSandboxRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{9}
+}
+
+func (x *WaitSandboxRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type WaitSandboxResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ ExitStatus uint32 `protobuf:"varint,1,opt,name=exit_status,json=exitStatus,proto3" json:"exit_status,omitempty"`
+ ExitedAt *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=exited_at,json=exitedAt,proto3" json:"exited_at,omitempty"`
+}
+
+func (x *WaitSandboxResponse) Reset() {
+ *x = WaitSandboxResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[10]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *WaitSandboxResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*WaitSandboxResponse) ProtoMessage() {}
+
+func (x *WaitSandboxResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[10]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use WaitSandboxResponse.ProtoReflect.Descriptor instead.
+func (*WaitSandboxResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{10}
+}
+
+func (x *WaitSandboxResponse) GetExitStatus() uint32 {
+ if x != nil {
+ return x.ExitStatus
+ }
+ return 0
+}
+
+func (x *WaitSandboxResponse) GetExitedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.ExitedAt
+ }
+ return nil
+}
+
+type UpdateSandboxResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+}
+
+func (x *UpdateSandboxResponse) Reset() {
+ *x = UpdateSandboxResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[11]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *UpdateSandboxResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*UpdateSandboxResponse) ProtoMessage() {}
+
+func (x *UpdateSandboxResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[11]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use UpdateSandboxResponse.ProtoReflect.Descriptor instead.
+func (*UpdateSandboxResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{11}
+}
+
+type SandboxStatusRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ Verbose bool `protobuf:"varint,2,opt,name=verbose,proto3" json:"verbose,omitempty"`
+}
+
+func (x *SandboxStatusRequest) Reset() {
+ *x = SandboxStatusRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[12]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SandboxStatusRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SandboxStatusRequest) ProtoMessage() {}
+
+func (x *SandboxStatusRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[12]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SandboxStatusRequest.ProtoReflect.Descriptor instead.
+func (*SandboxStatusRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{12}
+}
+
+func (x *SandboxStatusRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *SandboxStatusRequest) GetVerbose() bool {
+ if x != nil {
+ return x.Verbose
+ }
+ return false
+}
+
+type SandboxStatusResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ Pid uint32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
+ State string `protobuf:"bytes,3,opt,name=state,proto3" json:"state,omitempty"`
+ Info map[string]string `protobuf:"bytes,4,rep,name=info,proto3" json:"info,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+ CreatedAt *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
+ ExitedAt *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=exited_at,json=exitedAt,proto3" json:"exited_at,omitempty"`
+ Extra *anypb.Any `protobuf:"bytes,7,opt,name=extra,proto3" json:"extra,omitempty"`
+}
+
+func (x *SandboxStatusResponse) Reset() {
+ *x = SandboxStatusResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[13]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SandboxStatusResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SandboxStatusResponse) ProtoMessage() {}
+
+func (x *SandboxStatusResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[13]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SandboxStatusResponse.ProtoReflect.Descriptor instead.
+func (*SandboxStatusResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{13}
+}
+
+func (x *SandboxStatusResponse) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *SandboxStatusResponse) GetPid() uint32 {
+ if x != nil {
+ return x.Pid
+ }
+ return 0
+}
+
+func (x *SandboxStatusResponse) GetState() string {
+ if x != nil {
+ return x.State
+ }
+ return ""
+}
+
+func (x *SandboxStatusResponse) GetInfo() map[string]string {
+ if x != nil {
+ return x.Info
+ }
+ return nil
+}
+
+func (x *SandboxStatusResponse) GetCreatedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.CreatedAt
+ }
+ return nil
+}
+
+func (x *SandboxStatusResponse) GetExitedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.ExitedAt
+ }
+ return nil
+}
+
+func (x *SandboxStatusResponse) GetExtra() *anypb.Any {
+ if x != nil {
+ return x.Extra
+ }
+ return nil
+}
+
+type PingRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *PingRequest) Reset() {
+ *x = PingRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[14]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *PingRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*PingRequest) ProtoMessage() {}
+
+func (x *PingRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[14]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use PingRequest.ProtoReflect.Descriptor instead.
+func (*PingRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{14}
+}
+
+func (x *PingRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type PingResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+}
+
+func (x *PingResponse) Reset() {
+ *x = PingResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[15]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *PingResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*PingResponse) ProtoMessage() {}
+
+func (x *PingResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[15]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use PingResponse.ProtoReflect.Descriptor instead.
+func (*PingResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{15}
+}
+
+type ShutdownSandboxRequest struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+}
+
+func (x *ShutdownSandboxRequest) Reset() {
+ *x = ShutdownSandboxRequest{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[16]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ShutdownSandboxRequest) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ShutdownSandboxRequest) ProtoMessage() {}
+
+func (x *ShutdownSandboxRequest) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[16]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ShutdownSandboxRequest.ProtoReflect.Descriptor instead.
+func (*ShutdownSandboxRequest) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{16}
+}
+
+func (x *ShutdownSandboxRequest) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+type ShutdownSandboxResponse struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+}
+
+func (x *ShutdownSandboxResponse) Reset() {
+ *x = ShutdownSandboxResponse{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[17]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *ShutdownSandboxResponse) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ShutdownSandboxResponse) ProtoMessage() {}
+
+func (x *ShutdownSandboxResponse) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[17]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ShutdownSandboxResponse.ProtoReflect.Descriptor instead.
+func (*ShutdownSandboxResponse) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP(), []int{17}
+}
+
+var File_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDesc = []byte{
+ 0x0a, 0x45, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2f, 0x73,
+ 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2f, 0x76, 0x31, 0x2f, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f,
+ 0x78, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x1d, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64,
+ 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x1a, 0x19, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
+ 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74,
+ 0x6f, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
+ 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f,
+ 0x74, 0x6f, 0x1a, 0x36, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69,
+ 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x6d,
+ 0x6f, 0x75, 0x6e, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x39, 0x67, 0x69, 0x74, 0x68,
+ 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69,
+ 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x2e,
+ 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xd6, 0x01, 0x0a, 0x14, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65,
+ 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d,
+ 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x12, 0x1f, 0x0a,
+ 0x0b, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x02, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x0a, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x50, 0x61, 0x74, 0x68, 0x12, 0x2f,
+ 0x0a, 0x06, 0x72, 0x6f, 0x6f, 0x74, 0x66, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x17,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65,
+ 0x73, 0x2e, 0x4d, 0x6f, 0x75, 0x6e, 0x74, 0x52, 0x06, 0x72, 0x6f, 0x6f, 0x74, 0x66, 0x73, 0x12,
+ 0x2e, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b,
+ 0x32, 0x14, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
+ 0x75, 0x66, 0x2e, 0x41, 0x6e, 0x79, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12,
+ 0x1d, 0x0a, 0x0a, 0x6e, 0x65, 0x74, 0x6e, 0x73, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x05, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x09, 0x6e, 0x65, 0x74, 0x6e, 0x73, 0x50, 0x61, 0x74, 0x68, 0x22, 0x17,
+ 0x0a, 0x15, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52,
+ 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x34, 0x0a, 0x13, 0x53, 0x74, 0x61, 0x72, 0x74,
+ 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d,
+ 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x22, 0x63, 0x0a,
+ 0x14, 0x53, 0x74, 0x61, 0x72, 0x74, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73,
+ 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x0d, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x39, 0x0a, 0x0a, 0x63, 0x72, 0x65, 0x61, 0x74,
+ 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69,
+ 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x63, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64,
+ 0x41, 0x74, 0x22, 0x30, 0x0a, 0x0f, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x52, 0x65,
+ 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78,
+ 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62,
+ 0x6f, 0x78, 0x49, 0x64, 0x22, 0x4a, 0x0a, 0x10, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d,
+ 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x36, 0x0a, 0x08, 0x70, 0x6c, 0x61, 0x74,
+ 0x66, 0x6f, 0x72, 0x6d, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2e, 0x50, 0x6c,
+ 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x52, 0x08, 0x70, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d,
+ 0x22, 0x56, 0x0a, 0x12, 0x53, 0x74, 0x6f, 0x70, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52,
+ 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f,
+ 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64,
+ 0x62, 0x6f, 0x78, 0x49, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74,
+ 0x5f, 0x73, 0x65, 0x63, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0b, 0x74, 0x69, 0x6d,
+ 0x65, 0x6f, 0x75, 0x74, 0x53, 0x65, 0x63, 0x73, 0x22, 0x15, 0x0a, 0x13, 0x53, 0x74, 0x6f, 0x70,
+ 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
+ 0x91, 0x02, 0x0a, 0x14, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f,
+ 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64,
+ 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x12, 0x32, 0x0a, 0x09, 0x72, 0x65, 0x73, 0x6f, 0x75,
+ 0x72, 0x63, 0x65, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x41, 0x6e, 0x79,
+ 0x52, 0x09, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x12, 0x66, 0x0a, 0x0b, 0x61,
+ 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b,
+ 0x32, 0x44, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75,
+ 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31,
+ 0x2e, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65,
+ 0x71, 0x75, 0x65, 0x73, 0x74, 0x2e, 0x41, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e,
+ 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0b, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69,
+ 0x6f, 0x6e, 0x73, 0x1a, 0x3e, 0x0a, 0x10, 0x41, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f,
+ 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c,
+ 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a,
+ 0x02, 0x38, 0x01, 0x22, 0x33, 0x0a, 0x12, 0x57, 0x61, 0x69, 0x74, 0x53, 0x61, 0x6e, 0x64, 0x62,
+ 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e,
+ 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73,
+ 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x22, 0x6f, 0x0a, 0x13, 0x57, 0x61, 0x69, 0x74,
+ 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12,
+ 0x1f, 0x0a, 0x0b, 0x65, 0x78, 0x69, 0x74, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x65, 0x78, 0x69, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73,
+ 0x12, 0x37, 0x0a, 0x09, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x02, 0x20,
+ 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f,
+ 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52,
+ 0x08, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x41, 0x74, 0x22, 0x17, 0x0a, 0x15, 0x55, 0x70, 0x64,
+ 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
+ 0x73, 0x65, 0x22, 0x4f, 0x0a, 0x14, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53, 0x74, 0x61,
+ 0x74, 0x75, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09,
+ 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72,
+ 0x62, 0x6f, 0x73, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x07, 0x76, 0x65, 0x72, 0x62,
+ 0x6f, 0x73, 0x65, 0x22, 0x8b, 0x03, 0x0a, 0x15, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53,
+ 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x1d, 0x0a,
+ 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x12, 0x10, 0x0a, 0x03,
+ 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x14,
+ 0x0a, 0x05, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x73,
+ 0x74, 0x61, 0x74, 0x65, 0x12, 0x52, 0x0a, 0x04, 0x69, 0x6e, 0x66, 0x6f, 0x18, 0x04, 0x20, 0x03,
+ 0x28, 0x0b, 0x32, 0x3e, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e,
+ 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e,
+ 0x76, 0x31, 0x2e, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73,
+ 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x2e, 0x49, 0x6e, 0x66, 0x6f, 0x45, 0x6e, 0x74,
+ 0x72, 0x79, 0x52, 0x04, 0x69, 0x6e, 0x66, 0x6f, 0x12, 0x39, 0x0a, 0x0a, 0x63, 0x72, 0x65, 0x61,
+ 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67,
+ 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54,
+ 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x63, 0x72, 0x65, 0x61, 0x74, 0x65,
+ 0x64, 0x41, 0x74, 0x12, 0x37, 0x0a, 0x09, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74,
+ 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
+ 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61,
+ 0x6d, 0x70, 0x52, 0x08, 0x65, 0x78, 0x69, 0x74, 0x65, 0x64, 0x41, 0x74, 0x12, 0x2a, 0x0a, 0x05,
+ 0x65, 0x78, 0x74, 0x72, 0x61, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x41, 0x6e,
+ 0x79, 0x52, 0x05, 0x65, 0x78, 0x74, 0x72, 0x61, 0x1a, 0x37, 0x0a, 0x09, 0x49, 0x6e, 0x66, 0x6f,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38,
+ 0x01, 0x22, 0x2c, 0x0a, 0x0b, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
+ 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x22,
+ 0x0e, 0x0a, 0x0c, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
+ 0x37, 0x0a, 0x16, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x53, 0x61, 0x6e, 0x64, 0x62,
+ 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e,
+ 0x64, 0x62, 0x6f, 0x78, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73,
+ 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x49, 0x64, 0x22, 0x19, 0x0a, 0x17, 0x53, 0x68, 0x75, 0x74,
+ 0x64, 0x6f, 0x77, 0x6e, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70, 0x6f,
+ 0x6e, 0x73, 0x65, 0x32, 0xbe, 0x07, 0x0a, 0x07, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x12,
+ 0x7a, 0x0a, 0x0d, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78,
+ 0x12, 0x33, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75,
+ 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31,
+ 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65,
+ 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x34, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62,
+ 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x53, 0x61, 0x6e, 0x64,
+ 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x77, 0x0a, 0x0c, 0x53,
+ 0x74, 0x61, 0x72, 0x74, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x12, 0x32, 0x2e, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65,
+ 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x74, 0x61, 0x72,
+ 0x74, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
+ 0x33, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e,
+ 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e,
+ 0x53, 0x74, 0x61, 0x72, 0x74, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70,
+ 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x6b, 0x0a, 0x08, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d,
+ 0x12, 0x2e, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75,
+ 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31,
+ 0x2e, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
+ 0x1a, 0x2f, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75,
+ 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31,
+ 0x2e, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
+ 0x65, 0x12, 0x74, 0x0a, 0x0b, 0x53, 0x74, 0x6f, 0x70, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78,
+ 0x12, 0x31, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75,
+ 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31,
+ 0x2e, 0x53, 0x74, 0x6f, 0x70, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75,
+ 0x65, 0x73, 0x74, 0x1a, 0x32, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78,
+ 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x74, 0x6f, 0x70, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52,
+ 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x74, 0x0a, 0x0b, 0x57, 0x61, 0x69, 0x74, 0x53,
+ 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x12, 0x31, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64,
+ 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x57, 0x61, 0x69, 0x74, 0x53, 0x61, 0x6e, 0x64, 0x62,
+ 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x32, 0x2e, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73,
+ 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x57, 0x61, 0x69, 0x74, 0x53, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x7a, 0x0a,
+ 0x0d, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x33,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74,
+ 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x53,
+ 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x71, 0x75,
+ 0x65, 0x73, 0x74, 0x1a, 0x34, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78,
+ 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x53, 0x74, 0x61, 0x74, 0x75,
+ 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x66, 0x0a, 0x0b, 0x50, 0x69, 0x6e,
+ 0x67, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x12, 0x2a, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71,
+ 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2b, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f,
+ 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
+ 0x65, 0x12, 0x80, 0x01, 0x0a, 0x0f, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x53, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x12, 0x35, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62,
+ 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x68, 0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x53, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x36, 0x2e, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d,
+ 0x65, 0x2e, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x68, 0x75,
+ 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x52, 0x65, 0x73, 0x70,
+ 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x41, 0x5a, 0x3f, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
+ 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f,
+ 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x72, 0x75, 0x6e,
+ 0x74, 0x69, 0x6d, 0x65, 0x2f, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2f, 0x76, 0x31, 0x3b,
+ 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescData = file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes = make([]protoimpl.MessageInfo, 20)
+var file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_goTypes = []interface{}{
+ (*CreateSandboxRequest)(nil), // 0: containerd.runtime.sandbox.v1.CreateSandboxRequest
+ (*CreateSandboxResponse)(nil), // 1: containerd.runtime.sandbox.v1.CreateSandboxResponse
+ (*StartSandboxRequest)(nil), // 2: containerd.runtime.sandbox.v1.StartSandboxRequest
+ (*StartSandboxResponse)(nil), // 3: containerd.runtime.sandbox.v1.StartSandboxResponse
+ (*PlatformRequest)(nil), // 4: containerd.runtime.sandbox.v1.PlatformRequest
+ (*PlatformResponse)(nil), // 5: containerd.runtime.sandbox.v1.PlatformResponse
+ (*StopSandboxRequest)(nil), // 6: containerd.runtime.sandbox.v1.StopSandboxRequest
+ (*StopSandboxResponse)(nil), // 7: containerd.runtime.sandbox.v1.StopSandboxResponse
+ (*UpdateSandboxRequest)(nil), // 8: containerd.runtime.sandbox.v1.UpdateSandboxRequest
+ (*WaitSandboxRequest)(nil), // 9: containerd.runtime.sandbox.v1.WaitSandboxRequest
+ (*WaitSandboxResponse)(nil), // 10: containerd.runtime.sandbox.v1.WaitSandboxResponse
+ (*UpdateSandboxResponse)(nil), // 11: containerd.runtime.sandbox.v1.UpdateSandboxResponse
+ (*SandboxStatusRequest)(nil), // 12: containerd.runtime.sandbox.v1.SandboxStatusRequest
+ (*SandboxStatusResponse)(nil), // 13: containerd.runtime.sandbox.v1.SandboxStatusResponse
+ (*PingRequest)(nil), // 14: containerd.runtime.sandbox.v1.PingRequest
+ (*PingResponse)(nil), // 15: containerd.runtime.sandbox.v1.PingResponse
+ (*ShutdownSandboxRequest)(nil), // 16: containerd.runtime.sandbox.v1.ShutdownSandboxRequest
+ (*ShutdownSandboxResponse)(nil), // 17: containerd.runtime.sandbox.v1.ShutdownSandboxResponse
+ nil, // 18: containerd.runtime.sandbox.v1.UpdateSandboxRequest.AnnotationsEntry
+ nil, // 19: containerd.runtime.sandbox.v1.SandboxStatusResponse.InfoEntry
+ (*types.Mount)(nil), // 20: containerd.types.Mount
+ (*anypb.Any)(nil), // 21: google.protobuf.Any
+ (*timestamppb.Timestamp)(nil), // 22: google.protobuf.Timestamp
+ (*types.Platform)(nil), // 23: containerd.types.Platform
+}
+var file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_depIdxs = []int32{
+ 20, // 0: containerd.runtime.sandbox.v1.CreateSandboxRequest.rootfs:type_name -> containerd.types.Mount
+ 21, // 1: containerd.runtime.sandbox.v1.CreateSandboxRequest.options:type_name -> google.protobuf.Any
+ 22, // 2: containerd.runtime.sandbox.v1.StartSandboxResponse.created_at:type_name -> google.protobuf.Timestamp
+ 23, // 3: containerd.runtime.sandbox.v1.PlatformResponse.platform:type_name -> containerd.types.Platform
+ 21, // 4: containerd.runtime.sandbox.v1.UpdateSandboxRequest.resources:type_name -> google.protobuf.Any
+ 18, // 5: containerd.runtime.sandbox.v1.UpdateSandboxRequest.annotations:type_name -> containerd.runtime.sandbox.v1.UpdateSandboxRequest.AnnotationsEntry
+ 22, // 6: containerd.runtime.sandbox.v1.WaitSandboxResponse.exited_at:type_name -> google.protobuf.Timestamp
+ 19, // 7: containerd.runtime.sandbox.v1.SandboxStatusResponse.info:type_name -> containerd.runtime.sandbox.v1.SandboxStatusResponse.InfoEntry
+ 22, // 8: containerd.runtime.sandbox.v1.SandboxStatusResponse.created_at:type_name -> google.protobuf.Timestamp
+ 22, // 9: containerd.runtime.sandbox.v1.SandboxStatusResponse.exited_at:type_name -> google.protobuf.Timestamp
+ 21, // 10: containerd.runtime.sandbox.v1.SandboxStatusResponse.extra:type_name -> google.protobuf.Any
+ 0, // 11: containerd.runtime.sandbox.v1.Sandbox.CreateSandbox:input_type -> containerd.runtime.sandbox.v1.CreateSandboxRequest
+ 2, // 12: containerd.runtime.sandbox.v1.Sandbox.StartSandbox:input_type -> containerd.runtime.sandbox.v1.StartSandboxRequest
+ 4, // 13: containerd.runtime.sandbox.v1.Sandbox.Platform:input_type -> containerd.runtime.sandbox.v1.PlatformRequest
+ 6, // 14: containerd.runtime.sandbox.v1.Sandbox.StopSandbox:input_type -> containerd.runtime.sandbox.v1.StopSandboxRequest
+ 9, // 15: containerd.runtime.sandbox.v1.Sandbox.WaitSandbox:input_type -> containerd.runtime.sandbox.v1.WaitSandboxRequest
+ 12, // 16: containerd.runtime.sandbox.v1.Sandbox.SandboxStatus:input_type -> containerd.runtime.sandbox.v1.SandboxStatusRequest
+ 14, // 17: containerd.runtime.sandbox.v1.Sandbox.PingSandbox:input_type -> containerd.runtime.sandbox.v1.PingRequest
+ 16, // 18: containerd.runtime.sandbox.v1.Sandbox.ShutdownSandbox:input_type -> containerd.runtime.sandbox.v1.ShutdownSandboxRequest
+ 1, // 19: containerd.runtime.sandbox.v1.Sandbox.CreateSandbox:output_type -> containerd.runtime.sandbox.v1.CreateSandboxResponse
+ 3, // 20: containerd.runtime.sandbox.v1.Sandbox.StartSandbox:output_type -> containerd.runtime.sandbox.v1.StartSandboxResponse
+ 5, // 21: containerd.runtime.sandbox.v1.Sandbox.Platform:output_type -> containerd.runtime.sandbox.v1.PlatformResponse
+ 7, // 22: containerd.runtime.sandbox.v1.Sandbox.StopSandbox:output_type -> containerd.runtime.sandbox.v1.StopSandboxResponse
+ 10, // 23: containerd.runtime.sandbox.v1.Sandbox.WaitSandbox:output_type -> containerd.runtime.sandbox.v1.WaitSandboxResponse
+ 13, // 24: containerd.runtime.sandbox.v1.Sandbox.SandboxStatus:output_type -> containerd.runtime.sandbox.v1.SandboxStatusResponse
+ 15, // 25: containerd.runtime.sandbox.v1.Sandbox.PingSandbox:output_type -> containerd.runtime.sandbox.v1.PingResponse
+ 17, // 26: containerd.runtime.sandbox.v1.Sandbox.ShutdownSandbox:output_type -> containerd.runtime.sandbox.v1.ShutdownSandboxResponse
+ 19, // [19:27] is the sub-list for method output_type
+ 11, // [11:19] is the sub-list for method input_type
+ 11, // [11:11] is the sub-list for extension type_name
+ 11, // [11:11] is the sub-list for extension extendee
+ 0, // [0:11] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_init() }
+func file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_init() {
+ if File_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*CreateSandboxRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*CreateSandboxResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*StartSandboxRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*StartSandboxResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*PlatformRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*PlatformResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*StopSandboxRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*StopSandboxResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*UpdateSandboxRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*WaitSandboxRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*WaitSandboxResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*UpdateSandboxResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SandboxStatusRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SandboxStatusResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*PingRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[15].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*PingResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[16].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ShutdownSandboxRequest); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes[17].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*ShutdownSandboxResponse); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 20,
+ NumExtensions: 0,
+ NumServices: 1,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto = out.File
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_runtime_sandbox_v1_sandbox_proto_depIdxs = nil
+}
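
The `init` function above registers the (lazily GZIP-compressed) raw descriptor with the global protobuf registry through `protoimpl.TypeBuilder`. As a minimal sketch of what that enables — not part of the vendored file — importing the generated package for its side effects makes the service descriptor discoverable by full name; the lookup below uses standard `google.golang.org/protobuf` registry calls, and only the printed strings are illustrative.

```go
package main

import (
	"fmt"
	"log"

	// Imported for its side effects: the generated init() registers the sandbox descriptors.
	_ "github.com/containerd/containerd/api/runtime/sandbox/v1"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
)

func main() {
	// Look up the Sandbox service descriptor registered by the generated init().
	d, err := protoregistry.GlobalFiles.FindDescriptorByName("containerd.runtime.sandbox.v1.Sandbox")
	if err != nil {
		log.Fatal(err)
	}
	svc := d.(protoreflect.ServiceDescriptor)
	fmt.Printf("service %s exposes %d methods\n", svc.FullName(), svc.Methods().Len())
}
```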
diff --git a/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto
new file mode 100644
index 000000000..a051f3ea3
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto
@@ -0,0 +1,136 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.runtime.sandbox.v1;
+
+import "google/protobuf/any.proto";
+import "google/protobuf/timestamp.proto";
+
+import "github.com/containerd/containerd/api/types/mount.proto";
+import "github.com/containerd/containerd/api/types/platform.proto";
+
+option go_package = "github.com/containerd/containerd/api/runtime/sandbox/v1;sandbox";
+
+// Sandbox is an optional interface that a shim may implement to support sandbox environments.
+// A typical example of sandbox is microVM or pause container - an entity that groups containers and/or
+// holds resources relevant for this group.
+service Sandbox {
+ // CreateSandbox will be called right after sandbox shim instance launched.
+ // It is a good place to initialize sandbox environment.
+ rpc CreateSandbox(CreateSandboxRequest) returns (CreateSandboxResponse);
+
+ // StartSandbox will start a previously created sandbox.
+ rpc StartSandbox(StartSandboxRequest) returns (StartSandboxResponse);
+
+ // Platform queries the platform the sandbox is going to run containers on.
+ // containerd will use this to generate a proper OCI spec.
+ rpc Platform(PlatformRequest) returns (PlatformResponse);
+
+ // StopSandbox will stop existing sandbox instance
+ rpc StopSandbox(StopSandboxRequest) returns (StopSandboxResponse);
+
+ // WaitSandbox blocks until the sandbox exits.
+ rpc WaitSandbox(WaitSandboxRequest) returns (WaitSandboxResponse);
+
+ // SandboxStatus will return current status of the running sandbox instance
+ rpc SandboxStatus(SandboxStatusRequest) returns (SandboxStatusResponse);
+
+ // PingSandbox is a lightweight API call to check whether the sandbox is alive.
+ rpc PingSandbox(PingRequest) returns (PingResponse);
+
+ // ShutdownSandbox must shut down the shim instance.
+ rpc ShutdownSandbox(ShutdownSandboxRequest) returns (ShutdownSandboxResponse);
+}
+
+message CreateSandboxRequest {
+ string sandbox_id = 1;
+ string bundle_path = 2;
+ repeated containerd.types.Mount rootfs = 3;
+ google.protobuf.Any options = 4;
+ string netns_path = 5;
+}
+
+message CreateSandboxResponse {}
+
+message StartSandboxRequest {
+ string sandbox_id = 1;
+}
+
+message StartSandboxResponse {
+ uint32 pid = 1;
+ google.protobuf.Timestamp created_at = 2;
+}
+
+message PlatformRequest {
+ string sandbox_id = 1;
+}
+
+message PlatformResponse {
+ containerd.types.Platform platform = 1;
+}
+
+message StopSandboxRequest {
+ string sandbox_id = 1;
+ uint32 timeout_secs = 2;
+}
+
+message StopSandboxResponse {}
+
+message UpdateSandboxRequest {
+ string sandbox_id = 1;
+ google.protobuf.Any resources = 2;
+ map<string, string> annotations = 3;
+}
+
+message WaitSandboxRequest {
+ string sandbox_id = 1;
+}
+
+message WaitSandboxResponse {
+ uint32 exit_status = 1;
+ google.protobuf.Timestamp exited_at = 2;
+}
+
+message UpdateSandboxResponse {}
+
+message SandboxStatusRequest {
+ string sandbox_id = 1;
+ bool verbose = 2;
+}
+
+message SandboxStatusResponse {
+ string sandbox_id = 1;
+ uint32 pid = 2;
+ string state = 3;
+ map<string, string> info = 4;
+ google.protobuf.Timestamp created_at = 5;
+ google.protobuf.Timestamp exited_at = 6;
+ google.protobuf.Any extra = 7;
+}
+
+message PingRequest {
+ string sandbox_id = 1;
+}
+
+message PingResponse {}
+
+message ShutdownSandboxRequest {
+ string sandbox_id = 1;
+}
+
+message ShutdownSandboxResponse {}
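
For orientation, the proto messages above surface as Go structs in the usual protoc-gen-go way (`sandbox_id` becomes `SandboxId`, `repeated containerd.types.Mount rootfs` becomes `Rootfs []*types.Mount`, and so on). The following is a minimal sketch of building a `CreateSandboxRequest`; every concrete value (ID, paths, mount options) is a placeholder, not something taken from containerd.

```go
package main

import (
	"fmt"

	sandbox "github.com/containerd/containerd/api/runtime/sandbox/v1"
	"github.com/containerd/containerd/api/types"
)

func main() {
	// Placeholder values throughout; a real caller receives these from containerd.
	req := &sandbox.CreateSandboxRequest{
		SandboxId:  "example-sandbox",
		BundlePath: "/run/example/bundle",
		Rootfs: []*types.Mount{{
			Type:    "overlay",
			Source:  "overlay",
			Options: []string{"lowerdir=/tmp/lower", "upperdir=/tmp/upper", "workdir=/tmp/work"},
		}},
		NetnsPath: "/var/run/netns/example",
	}
	fmt.Println("create request for sandbox:", req.GetSandboxId())
}
```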
diff --git a/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox_grpc.pb.go b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox_grpc.pb.go
new file mode 100644
index 000000000..f79424986
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox_grpc.pb.go
@@ -0,0 +1,377 @@
+// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
+// versions:
+// - protoc-gen-go-grpc v1.2.0
+// - protoc v3.20.1
+// source: github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto
+
+package sandbox
+
+import (
+ context "context"
+ grpc "google.golang.org/grpc"
+ codes "google.golang.org/grpc/codes"
+ status "google.golang.org/grpc/status"
+)
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+// Requires gRPC-Go v1.32.0 or later.
+const _ = grpc.SupportPackageIsVersion7
+
+// SandboxClient is the client API for Sandbox service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
+type SandboxClient interface {
+ // CreateSandbox will be called right after sandbox shim instance launched.
+ // It is a good place to initialize sandbox environment.
+ CreateSandbox(ctx context.Context, in *CreateSandboxRequest, opts ...grpc.CallOption) (*CreateSandboxResponse, error)
+ // StartSandbox will start a previously created sandbox.
+ StartSandbox(ctx context.Context, in *StartSandboxRequest, opts ...grpc.CallOption) (*StartSandboxResponse, error)
+ // Platform queries the platform the sandbox is going to run containers on.
+ // containerd will use this to generate a proper OCI spec.
+ Platform(ctx context.Context, in *PlatformRequest, opts ...grpc.CallOption) (*PlatformResponse, error)
+ // StopSandbox will stop existing sandbox instance
+ StopSandbox(ctx context.Context, in *StopSandboxRequest, opts ...grpc.CallOption) (*StopSandboxResponse, error)
+ // WaitSandbox blocks until the sandbox exits.
+ WaitSandbox(ctx context.Context, in *WaitSandboxRequest, opts ...grpc.CallOption) (*WaitSandboxResponse, error)
+ // SandboxStatus will return current status of the running sandbox instance
+ SandboxStatus(ctx context.Context, in *SandboxStatusRequest, opts ...grpc.CallOption) (*SandboxStatusResponse, error)
+ // PingSandbox is a lightweight API call to check whether the sandbox is alive.
+ PingSandbox(ctx context.Context, in *PingRequest, opts ...grpc.CallOption) (*PingResponse, error)
+ // ShutdownSandbox must shut down the shim instance.
+ ShutdownSandbox(ctx context.Context, in *ShutdownSandboxRequest, opts ...grpc.CallOption) (*ShutdownSandboxResponse, error)
+}
+
+type sandboxClient struct {
+ cc grpc.ClientConnInterface
+}
+
+func NewSandboxClient(cc grpc.ClientConnInterface) SandboxClient {
+ return &sandboxClient{cc}
+}
+
+func (c *sandboxClient) CreateSandbox(ctx context.Context, in *CreateSandboxRequest, opts ...grpc.CallOption) (*CreateSandboxResponse, error) {
+ out := new(CreateSandboxResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/CreateSandbox", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) StartSandbox(ctx context.Context, in *StartSandboxRequest, opts ...grpc.CallOption) (*StartSandboxResponse, error) {
+ out := new(StartSandboxResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/StartSandbox", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) Platform(ctx context.Context, in *PlatformRequest, opts ...grpc.CallOption) (*PlatformResponse, error) {
+ out := new(PlatformResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/Platform", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) StopSandbox(ctx context.Context, in *StopSandboxRequest, opts ...grpc.CallOption) (*StopSandboxResponse, error) {
+ out := new(StopSandboxResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/StopSandbox", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) WaitSandbox(ctx context.Context, in *WaitSandboxRequest, opts ...grpc.CallOption) (*WaitSandboxResponse, error) {
+ out := new(WaitSandboxResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/WaitSandbox", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) SandboxStatus(ctx context.Context, in *SandboxStatusRequest, opts ...grpc.CallOption) (*SandboxStatusResponse, error) {
+ out := new(SandboxStatusResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/SandboxStatus", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) PingSandbox(ctx context.Context, in *PingRequest, opts ...grpc.CallOption) (*PingResponse, error) {
+ out := new(PingResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/PingSandbox", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+func (c *sandboxClient) ShutdownSandbox(ctx context.Context, in *ShutdownSandboxRequest, opts ...grpc.CallOption) (*ShutdownSandboxResponse, error) {
+ out := new(ShutdownSandboxResponse)
+ err := c.cc.Invoke(ctx, "/containerd.runtime.sandbox.v1.Sandbox/ShutdownSandbox", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// SandboxServer is the server API for Sandbox service.
+// All implementations must embed UnimplementedSandboxServer
+// for forward compatibility
+type SandboxServer interface {
+ // CreateSandbox will be called right after sandbox shim instance launched.
+ // It is a good place to initialize sandbox environment.
+ CreateSandbox(context.Context, *CreateSandboxRequest) (*CreateSandboxResponse, error)
+ // StartSandbox will start a previously created sandbox.
+ StartSandbox(context.Context, *StartSandboxRequest) (*StartSandboxResponse, error)
+ // Platform queries the platform the sandbox is going to run containers on.
+ // containerd will use this to generate a proper OCI spec.
+ Platform(context.Context, *PlatformRequest) (*PlatformResponse, error)
+ // StopSandbox will stop existing sandbox instance
+ StopSandbox(context.Context, *StopSandboxRequest) (*StopSandboxResponse, error)
+ // WaitSandbox blocks until the sandbox exits.
+ WaitSandbox(context.Context, *WaitSandboxRequest) (*WaitSandboxResponse, error)
+ // SandboxStatus will return current status of the running sandbox instance
+ SandboxStatus(context.Context, *SandboxStatusRequest) (*SandboxStatusResponse, error)
+ // PingSandbox is a lightweight API call to check whether the sandbox is alive.
+ PingSandbox(context.Context, *PingRequest) (*PingResponse, error)
+ // ShutdownSandbox must shut down the shim instance.
+ ShutdownSandbox(context.Context, *ShutdownSandboxRequest) (*ShutdownSandboxResponse, error)
+ mustEmbedUnimplementedSandboxServer()
+}
+
+// UnimplementedSandboxServer must be embedded to have forward compatible implementations.
+type UnimplementedSandboxServer struct {
+}
+
+func (UnimplementedSandboxServer) CreateSandbox(context.Context, *CreateSandboxRequest) (*CreateSandboxResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method CreateSandbox not implemented")
+}
+func (UnimplementedSandboxServer) StartSandbox(context.Context, *StartSandboxRequest) (*StartSandboxResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method StartSandbox not implemented")
+}
+func (UnimplementedSandboxServer) Platform(context.Context, *PlatformRequest) (*PlatformResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Platform not implemented")
+}
+func (UnimplementedSandboxServer) StopSandbox(context.Context, *StopSandboxRequest) (*StopSandboxResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method StopSandbox not implemented")
+}
+func (UnimplementedSandboxServer) WaitSandbox(context.Context, *WaitSandboxRequest) (*WaitSandboxResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method WaitSandbox not implemented")
+}
+func (UnimplementedSandboxServer) SandboxStatus(context.Context, *SandboxStatusRequest) (*SandboxStatusResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method SandboxStatus not implemented")
+}
+func (UnimplementedSandboxServer) PingSandbox(context.Context, *PingRequest) (*PingResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method PingSandbox not implemented")
+}
+func (UnimplementedSandboxServer) ShutdownSandbox(context.Context, *ShutdownSandboxRequest) (*ShutdownSandboxResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method ShutdownSandbox not implemented")
+}
+func (UnimplementedSandboxServer) mustEmbedUnimplementedSandboxServer() {}
+
+// UnsafeSandboxServer may be embedded to opt out of forward compatibility for this service.
+// Use of this interface is not recommended, as added methods to SandboxServer will
+// result in compilation errors.
+type UnsafeSandboxServer interface {
+ mustEmbedUnimplementedSandboxServer()
+}
+
+func RegisterSandboxServer(s grpc.ServiceRegistrar, srv SandboxServer) {
+ s.RegisterService(&Sandbox_ServiceDesc, srv)
+}
+
+func _Sandbox_CreateSandbox_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(CreateSandboxRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).CreateSandbox(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/CreateSandbox",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).CreateSandbox(ctx, req.(*CreateSandboxRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_StartSandbox_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(StartSandboxRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).StartSandbox(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/StartSandbox",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).StartSandbox(ctx, req.(*StartSandboxRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_Platform_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(PlatformRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).Platform(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/Platform",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).Platform(ctx, req.(*PlatformRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_StopSandbox_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(StopSandboxRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).StopSandbox(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/StopSandbox",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).StopSandbox(ctx, req.(*StopSandboxRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_WaitSandbox_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(WaitSandboxRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).WaitSandbox(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/WaitSandbox",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).WaitSandbox(ctx, req.(*WaitSandboxRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_SandboxStatus_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(SandboxStatusRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).SandboxStatus(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/SandboxStatus",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).SandboxStatus(ctx, req.(*SandboxStatusRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_PingSandbox_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(PingRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).PingSandbox(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/PingSandbox",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).PingSandbox(ctx, req.(*PingRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _Sandbox_ShutdownSandbox_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(ShutdownSandboxRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(SandboxServer).ShutdownSandbox(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/containerd.runtime.sandbox.v1.Sandbox/ShutdownSandbox",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(SandboxServer).ShutdownSandbox(ctx, req.(*ShutdownSandboxRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+// Sandbox_ServiceDesc is the grpc.ServiceDesc for Sandbox service.
+// It's only intended for direct use with grpc.RegisterService,
+// and not to be introspected or modified (even as a copy)
+var Sandbox_ServiceDesc = grpc.ServiceDesc{
+ ServiceName: "containerd.runtime.sandbox.v1.Sandbox",
+ HandlerType: (*SandboxServer)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "CreateSandbox",
+ Handler: _Sandbox_CreateSandbox_Handler,
+ },
+ {
+ MethodName: "StartSandbox",
+ Handler: _Sandbox_StartSandbox_Handler,
+ },
+ {
+ MethodName: "Platform",
+ Handler: _Sandbox_Platform_Handler,
+ },
+ {
+ MethodName: "StopSandbox",
+ Handler: _Sandbox_StopSandbox_Handler,
+ },
+ {
+ MethodName: "WaitSandbox",
+ Handler: _Sandbox_WaitSandbox_Handler,
+ },
+ {
+ MethodName: "SandboxStatus",
+ Handler: _Sandbox_SandboxStatus_Handler,
+ },
+ {
+ MethodName: "PingSandbox",
+ Handler: _Sandbox_PingSandbox_Handler,
+ },
+ {
+ MethodName: "ShutdownSandbox",
+ Handler: _Sandbox_ShutdownSandbox_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto",
+}
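
As a usage sketch only (not part of the vendored code), a shim-side implementation can embed `UnimplementedSandboxServer` so that every method it does not override returns `codes.Unimplemented`, and then register itself through the generated `RegisterSandboxServer` helper. The socket path below is a placeholder; real shims negotiate their own address.

```go
package main

import (
	"context"
	"log"
	"net"

	sandbox "github.com/containerd/containerd/api/runtime/sandbox/v1"
	"google.golang.org/grpc"
)

// pingOnlyServer embeds UnimplementedSandboxServer, so any RPC it does not
// implement itself is answered with codes.Unimplemented.
type pingOnlyServer struct {
	sandbox.UnimplementedSandboxServer
}

func (pingOnlyServer) PingSandbox(ctx context.Context, req *sandbox.PingRequest) (*sandbox.PingResponse, error) {
	log.Printf("ping for sandbox %q", req.GetSandboxId())
	return &sandbox.PingResponse{}, nil
}

func main() {
	lis, err := net.Listen("unix", "/tmp/sandbox-example.sock") // placeholder socket path
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	sandbox.RegisterSandboxServer(srv, pingOnlyServer{})
	log.Fatal(srv.Serve(lis))
}
```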
diff --git a/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox_ttrpc.pb.go b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox_ttrpc.pb.go
new file mode 100644
index 000000000..d935fe611
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox_ttrpc.pb.go
@@ -0,0 +1,156 @@
+// Code generated by protoc-gen-go-ttrpc. DO NOT EDIT.
+// source: github.com/containerd/containerd/api/runtime/sandbox/v1/sandbox.proto
+package sandbox
+
+import (
+ context "context"
+ ttrpc "github.com/containerd/ttrpc"
+)
+
+type TTRPCSandboxService interface {
+ CreateSandbox(context.Context, *CreateSandboxRequest) (*CreateSandboxResponse, error)
+ StartSandbox(context.Context, *StartSandboxRequest) (*StartSandboxResponse, error)
+ Platform(context.Context, *PlatformRequest) (*PlatformResponse, error)
+ StopSandbox(context.Context, *StopSandboxRequest) (*StopSandboxResponse, error)
+ WaitSandbox(context.Context, *WaitSandboxRequest) (*WaitSandboxResponse, error)
+ SandboxStatus(context.Context, *SandboxStatusRequest) (*SandboxStatusResponse, error)
+ PingSandbox(context.Context, *PingRequest) (*PingResponse, error)
+ ShutdownSandbox(context.Context, *ShutdownSandboxRequest) (*ShutdownSandboxResponse, error)
+}
+
+func RegisterTTRPCSandboxService(srv *ttrpc.Server, svc TTRPCSandboxService) {
+ srv.RegisterService("containerd.runtime.sandbox.v1.Sandbox", &ttrpc.ServiceDesc{
+ Methods: map[string]ttrpc.Method{
+ "CreateSandbox": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req CreateSandboxRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.CreateSandbox(ctx, &req)
+ },
+ "StartSandbox": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req StartSandboxRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.StartSandbox(ctx, &req)
+ },
+ "Platform": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req PlatformRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.Platform(ctx, &req)
+ },
+ "StopSandbox": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req StopSandboxRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.StopSandbox(ctx, &req)
+ },
+ "WaitSandbox": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req WaitSandboxRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.WaitSandbox(ctx, &req)
+ },
+ "SandboxStatus": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req SandboxStatusRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.SandboxStatus(ctx, &req)
+ },
+ "PingSandbox": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req PingRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.PingSandbox(ctx, &req)
+ },
+ "ShutdownSandbox": func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) {
+ var req ShutdownSandboxRequest
+ if err := unmarshal(&req); err != nil {
+ return nil, err
+ }
+ return svc.ShutdownSandbox(ctx, &req)
+ },
+ },
+ })
+}
+
+type ttrpcsandboxClient struct {
+ client *ttrpc.Client
+}
+
+func NewTTRPCSandboxClient(client *ttrpc.Client) TTRPCSandboxService {
+ return &ttrpcsandboxClient{
+ client: client,
+ }
+}
+
+func (c *ttrpcsandboxClient) CreateSandbox(ctx context.Context, req *CreateSandboxRequest) (*CreateSandboxResponse, error) {
+ var resp CreateSandboxResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "CreateSandbox", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) StartSandbox(ctx context.Context, req *StartSandboxRequest) (*StartSandboxResponse, error) {
+ var resp StartSandboxResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "StartSandbox", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) Platform(ctx context.Context, req *PlatformRequest) (*PlatformResponse, error) {
+ var resp PlatformResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "Platform", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) StopSandbox(ctx context.Context, req *StopSandboxRequest) (*StopSandboxResponse, error) {
+ var resp StopSandboxResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "StopSandbox", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) WaitSandbox(ctx context.Context, req *WaitSandboxRequest) (*WaitSandboxResponse, error) {
+ var resp WaitSandboxResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "WaitSandbox", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) SandboxStatus(ctx context.Context, req *SandboxStatusRequest) (*SandboxStatusResponse, error) {
+ var resp SandboxStatusResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "SandboxStatus", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) PingSandbox(ctx context.Context, req *PingRequest) (*PingResponse, error) {
+ var resp PingResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "PingSandbox", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
+
+func (c *ttrpcsandboxClient) ShutdownSandbox(ctx context.Context, req *ShutdownSandboxRequest) (*ShutdownSandboxResponse, error) {
+ var resp ShutdownSandboxResponse
+ if err := c.client.Call(ctx, "containerd.runtime.sandbox.v1.Sandbox", "ShutdownSandbox", req, &resp); err != nil {
+ return nil, err
+ }
+ return &resp, nil
+}
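
On the caller's side, the TTRPC bindings are driven through `NewTTRPCSandboxClient`, which wraps a `*ttrpc.Client`. The sketch below dials a unix socket and issues a `PingSandbox` call; the socket path and sandbox ID are invented for illustration.

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	sandbox "github.com/containerd/containerd/api/runtime/sandbox/v1"
	"github.com/containerd/ttrpc"
)

func main() {
	// Placeholder address; a real caller uses the shim's advertised socket.
	conn, err := net.Dial("unix", "/tmp/sandbox-example.sock")
	if err != nil {
		log.Fatal(err)
	}
	client := sandbox.NewTTRPCSandboxClient(ttrpc.NewClient(conn))

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if _, err := client.PingSandbox(ctx, &sandbox.PingRequest{SandboxId: "example-sandbox"}); err != nil {
		log.Fatalf("ping failed: %v", err)
	}
	log.Println("sandbox is alive")
}
```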
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/descriptor.pb.go b/tools/vendor/github.com/containerd/containerd/api/types/descriptor.pb.go
new file mode 100644
index 000000000..f3db1c52d
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/descriptor.pb.go
@@ -0,0 +1,206 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/types/descriptor.proto
+
+package types
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+// Descriptor describes a blob in a content store.
+//
+// This descriptor can be used to reference content from an
+// oci descriptor found in a manifest.
+// See https://godoc.org/github.com/opencontainers/image-spec/specs-go/v1#Descriptor
+type Descriptor struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ MediaType string `protobuf:"bytes,1,opt,name=media_type,json=mediaType,proto3" json:"media_type,omitempty"`
+ Digest string `protobuf:"bytes,2,opt,name=digest,proto3" json:"digest,omitempty"`
+ Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
+ Annotations map[string]string `protobuf:"bytes,5,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *Descriptor) Reset() {
+ *x = Descriptor{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_types_descriptor_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Descriptor) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Descriptor) ProtoMessage() {}
+
+func (x *Descriptor) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_types_descriptor_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Descriptor.ProtoReflect.Descriptor instead.
+func (*Descriptor) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *Descriptor) GetMediaType() string {
+ if x != nil {
+ return x.MediaType
+ }
+ return ""
+}
+
+func (x *Descriptor) GetDigest() string {
+ if x != nil {
+ return x.Digest
+ }
+ return ""
+}
+
+func (x *Descriptor) GetSize() int64 {
+ if x != nil {
+ return x.Size
+ }
+ return 0
+}
+
+func (x *Descriptor) GetAnnotations() map[string]string {
+ if x != nil {
+ return x.Annotations
+ }
+ return nil
+}
+
+var File_github_com_containerd_containerd_api_types_descriptor_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_types_descriptor_proto_rawDesc = []byte{
+ 0x0a, 0x3b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x64, 0x65, 0x73,
+ 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x10, 0x63,
+ 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x22,
+ 0xe8, 0x01, 0x0a, 0x0a, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x12, 0x1d,
+ 0x0a, 0x0a, 0x6d, 0x65, 0x64, 0x69, 0x61, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x09, 0x6d, 0x65, 0x64, 0x69, 0x61, 0x54, 0x79, 0x70, 0x65, 0x12, 0x16, 0x0a,
+ 0x06, 0x64, 0x69, 0x67, 0x65, 0x73, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x64,
+ 0x69, 0x67, 0x65, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x69, 0x7a, 0x65, 0x18, 0x03, 0x20,
+ 0x01, 0x28, 0x03, 0x52, 0x04, 0x73, 0x69, 0x7a, 0x65, 0x12, 0x4f, 0x0a, 0x0b, 0x61, 0x6e, 0x6e,
+ 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2d,
+ 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65,
+ 0x73, 0x2e, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x2e, 0x41, 0x6e, 0x6e,
+ 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0b, 0x61,
+ 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x3e, 0x0a, 0x10, 0x41, 0x6e,
+ 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10,
+ 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79,
+ 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x42, 0x32, 0x5a, 0x30, 0x67, 0x69,
+ 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e,
+ 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61,
+ 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x3b, 0x74, 0x79, 0x70, 0x65, 0x73, 0x62, 0x06,
+ 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescData = file_github_com_containerd_containerd_api_types_descriptor_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_types_descriptor_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_types_descriptor_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
+var file_github_com_containerd_containerd_api_types_descriptor_proto_goTypes = []interface{}{
+ (*Descriptor)(nil), // 0: containerd.types.Descriptor
+ nil, // 1: containerd.types.Descriptor.AnnotationsEntry
+}
+var file_github_com_containerd_containerd_api_types_descriptor_proto_depIdxs = []int32{
+ 1, // 0: containerd.types.Descriptor.annotations:type_name -> containerd.types.Descriptor.AnnotationsEntry
+ 1, // [1:1] is the sub-list for method output_type
+ 1, // [1:1] is the sub-list for method input_type
+ 1, // [1:1] is the sub-list for extension type_name
+ 1, // [1:1] is the sub-list for extension extendee
+ 0, // [0:1] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_types_descriptor_proto_init() }
+func file_github_com_containerd_containerd_api_types_descriptor_proto_init() {
+ if File_github_com_containerd_containerd_api_types_descriptor_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_types_descriptor_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Descriptor); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_types_descriptor_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 2,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_types_descriptor_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_types_descriptor_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_types_descriptor_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_types_descriptor_proto = out.File
+ file_github_com_containerd_containerd_api_types_descriptor_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_types_descriptor_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_types_descriptor_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/descriptor.proto b/tools/vendor/github.com/containerd/containerd/api/types/descriptor.proto
new file mode 100644
index 000000000..faaf416dd
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/descriptor.proto
@@ -0,0 +1,33 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.types;
+
+option go_package = "github.com/containerd/containerd/api/types;types";
+
+// Descriptor describes a blob in a content store.
+//
+// This descriptor can be used to reference content from an
+// oci descriptor found in a manifest.
+// See https://godoc.org/github.com/opencontainers/image-spec/specs-go/v1#Descriptor
+message Descriptor {
+ string media_type = 1;
+ string digest = 2;
+ int64 size = 3;
+  map<string, string> annotations = 5;
+}
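
For orientation, the generated `types.Descriptor` above behaves like any other protobuf message. The following standalone sketch (not part of the vendored patch; all values are placeholders) populates the fields declared in descriptor.proto and round-trips them with `proto.Marshal`:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containerd/containerd/api/types"
	"google.golang.org/protobuf/proto"
)

func main() {
	// Placeholder values; the digest below is not a real blob digest.
	desc := &types.Descriptor{
		MediaType:   "application/vnd.oci.image.manifest.v1+json",
		Digest:      "sha256:0000000000000000000000000000000000000000000000000000000000000000",
		Size:        1234,
		Annotations: map[string]string{"org.example.note": "example"},
	}

	data, err := proto.Marshal(desc)
	if err != nil {
		log.Fatal(err)
	}

	var decoded types.Descriptor
	if err := proto.Unmarshal(data, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded.GetMediaType(), decoded.GetSize(), len(decoded.GetAnnotations()))
}
```

Note that the `map<string, string>` annotations field surfaces in Go as a plain `map[string]string`, matching the generated getter above.
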
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/doc.go b/tools/vendor/github.com/containerd/containerd/api/types/doc.go
new file mode 100644
index 000000000..475b465ed
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/doc.go
@@ -0,0 +1,17 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package types
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/metrics.pb.go b/tools/vendor/github.com/containerd/containerd/api/types/metrics.pb.go
new file mode 100644
index 000000000..b18ce1c5b
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/metrics.pb.go
@@ -0,0 +1,194 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/types/metrics.proto
+
+package types
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ anypb "google.golang.org/protobuf/types/known/anypb"
+ timestamppb "google.golang.org/protobuf/types/known/timestamppb"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+type Metric struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Timestamp *timestamppb.Timestamp `protobuf:"bytes,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
+ ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
+ Data *anypb.Any `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
+}
+
+func (x *Metric) Reset() {
+ *x = Metric{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_types_metrics_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Metric) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Metric) ProtoMessage() {}
+
+func (x *Metric) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_types_metrics_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Metric.ProtoReflect.Descriptor instead.
+func (*Metric) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_types_metrics_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *Metric) GetTimestamp() *timestamppb.Timestamp {
+ if x != nil {
+ return x.Timestamp
+ }
+ return nil
+}
+
+func (x *Metric) GetID() string {
+ if x != nil {
+ return x.ID
+ }
+ return ""
+}
+
+func (x *Metric) GetData() *anypb.Any {
+ if x != nil {
+ return x.Data
+ }
+ return nil
+}
+
+var File_github_com_containerd_containerd_api_types_metrics_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_types_metrics_proto_rawDesc = []byte{
+ 0x0a, 0x38, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x6d, 0x65, 0x74,
+ 0x72, 0x69, 0x63, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x10, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x1a, 0x19, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x61, 0x6e,
+ 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f,
+ 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61,
+ 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x7c, 0x0a, 0x06, 0x4d, 0x65, 0x74, 0x72,
+ 0x69, 0x63, 0x12, 0x38, 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70,
+ 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d,
+ 0x70, 0x52, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x0e, 0x0a, 0x02,
+ 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x28, 0x0a, 0x04,
+ 0x64, 0x61, 0x74, 0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x41, 0x6e, 0x79,
+ 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x42, 0x32, 0x5a, 0x30, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
+ 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f,
+ 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74,
+ 0x79, 0x70, 0x65, 0x73, 0x3b, 0x74, 0x79, 0x70, 0x65, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
+ 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_types_metrics_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_types_metrics_proto_rawDescData = file_github_com_containerd_containerd_api_types_metrics_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_types_metrics_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_types_metrics_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_types_metrics_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_types_metrics_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_types_metrics_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_types_metrics_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
+var file_github_com_containerd_containerd_api_types_metrics_proto_goTypes = []interface{}{
+ (*Metric)(nil), // 0: containerd.types.Metric
+ (*timestamppb.Timestamp)(nil), // 1: google.protobuf.Timestamp
+ (*anypb.Any)(nil), // 2: google.protobuf.Any
+}
+var file_github_com_containerd_containerd_api_types_metrics_proto_depIdxs = []int32{
+ 1, // 0: containerd.types.Metric.timestamp:type_name -> google.protobuf.Timestamp
+ 2, // 1: containerd.types.Metric.data:type_name -> google.protobuf.Any
+ 2, // [2:2] is the sub-list for method output_type
+ 2, // [2:2] is the sub-list for method input_type
+ 2, // [2:2] is the sub-list for extension type_name
+ 2, // [2:2] is the sub-list for extension extendee
+ 0, // [0:2] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_types_metrics_proto_init() }
+func file_github_com_containerd_containerd_api_types_metrics_proto_init() {
+ if File_github_com_containerd_containerd_api_types_metrics_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_types_metrics_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Metric); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_types_metrics_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 1,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_types_metrics_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_types_metrics_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_types_metrics_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_types_metrics_proto = out.File
+ file_github_com_containerd_containerd_api_types_metrics_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_types_metrics_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_types_metrics_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/metrics.proto b/tools/vendor/github.com/containerd/containerd/api/types/metrics.proto
new file mode 100644
index 000000000..3e6a7751e
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/metrics.proto
@@ -0,0 +1,30 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.types;
+
+import "google/protobuf/any.proto";
+import "google/protobuf/timestamp.proto";
+
+option go_package = "github.com/containerd/containerd/api/types;types";
+
+message Metric {
+ google.protobuf.Timestamp timestamp = 1;
+ string id = 2;
+ google.protobuf.Any data = 3;
+}
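
The `Metric` message pairs a timestamp with an `Any`-typed payload, so callers pack whatever concrete message their collector understands. A minimal sketch, with placeholder values and a `Descriptor` used only as an example payload:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containerd/containerd/api/types"
	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	// Any protobuf message can ride in Data; a Descriptor is used here purely
	// as an example payload.
	payload, err := anypb.New(&types.Descriptor{MediaType: "application/json"})
	if err != nil {
		log.Fatal(err)
	}

	m := &types.Metric{
		Timestamp: timestamppb.Now(),
		ID:        "example-task", // hypothetical identifier
		Data:      payload,
	}
	fmt.Println(m.GetID(), m.GetTimestamp().AsTime())
}
```
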
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/mount.pb.go b/tools/vendor/github.com/containerd/containerd/api/types/mount.pb.go
new file mode 100644
index 000000000..ff77a7d7b
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/mount.pb.go
@@ -0,0 +1,202 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/types/mount.proto
+
+package types
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+// Mount describes mounts for a container.
+//
+// This type is the lingua franca of ContainerD. All services provide mounts
+// to be used with the container at creation time.
+//
+// The Mount type follows the structure of the mount syscall, including a type,
+// source, target and options.
+type Mount struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // Type defines the nature of the mount.
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ // Source specifies the name of the mount. Depending on mount type, this
+ // may be a volume name or a host path, or even ignored.
+ Source string `protobuf:"bytes,2,opt,name=source,proto3" json:"source,omitempty"`
+ // Target path in container
+ Target string `protobuf:"bytes,3,opt,name=target,proto3" json:"target,omitempty"`
+ // Options specifies zero or more fstab style mount options.
+ Options []string `protobuf:"bytes,4,rep,name=options,proto3" json:"options,omitempty"`
+}
+
+func (x *Mount) Reset() {
+ *x = Mount{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_types_mount_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Mount) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Mount) ProtoMessage() {}
+
+func (x *Mount) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_types_mount_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Mount.ProtoReflect.Descriptor instead.
+func (*Mount) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_types_mount_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *Mount) GetType() string {
+ if x != nil {
+ return x.Type
+ }
+ return ""
+}
+
+func (x *Mount) GetSource() string {
+ if x != nil {
+ return x.Source
+ }
+ return ""
+}
+
+func (x *Mount) GetTarget() string {
+ if x != nil {
+ return x.Target
+ }
+ return ""
+}
+
+func (x *Mount) GetOptions() []string {
+ if x != nil {
+ return x.Options
+ }
+ return nil
+}
+
+var File_github_com_containerd_containerd_api_types_mount_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_types_mount_proto_rawDesc = []byte{
+ 0x0a, 0x36, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x6d, 0x6f, 0x75,
+ 0x6e, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x10, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69,
+ 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x22, 0x65, 0x0a, 0x05, 0x4d, 0x6f,
+ 0x75, 0x6e, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x6f, 0x75, 0x72, 0x63,
+ 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12,
+ 0x16, 0x0a, 0x06, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x06, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x12, 0x18, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f,
+ 0x6e, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e,
+ 0x73, 0x42, 0x32, 0x5a, 0x30, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
+ 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61,
+ 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x3b,
+ 0x74, 0x79, 0x70, 0x65, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_types_mount_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_types_mount_proto_rawDescData = file_github_com_containerd_containerd_api_types_mount_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_types_mount_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_types_mount_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_types_mount_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_types_mount_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_types_mount_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_types_mount_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
+var file_github_com_containerd_containerd_api_types_mount_proto_goTypes = []interface{}{
+ (*Mount)(nil), // 0: containerd.types.Mount
+}
+var file_github_com_containerd_containerd_api_types_mount_proto_depIdxs = []int32{
+ 0, // [0:0] is the sub-list for method output_type
+ 0, // [0:0] is the sub-list for method input_type
+ 0, // [0:0] is the sub-list for extension type_name
+ 0, // [0:0] is the sub-list for extension extendee
+ 0, // [0:0] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_types_mount_proto_init() }
+func file_github_com_containerd_containerd_api_types_mount_proto_init() {
+ if File_github_com_containerd_containerd_api_types_mount_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_types_mount_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Mount); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_types_mount_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 1,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_types_mount_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_types_mount_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_types_mount_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_types_mount_proto = out.File
+ file_github_com_containerd_containerd_api_types_mount_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_types_mount_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_types_mount_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/mount.proto b/tools/vendor/github.com/containerd/containerd/api/types/mount.proto
new file mode 100644
index 000000000..54e0a0cdd
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/mount.proto
@@ -0,0 +1,43 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.types;
+
+option go_package = "github.com/containerd/containerd/api/types;types";
+
+// Mount describes mounts for a container.
+//
+// This type is the lingua franca of ContainerD. All services provide mounts
+// to be used with the container at creation time.
+//
+// The Mount type follows the structure of the mount syscall, including a type,
+// source, target and options.
+message Mount {
+ // Type defines the nature of the mount.
+ string type = 1;
+
+ // Source specifies the name of the mount. Depending on mount type, this
+ // may be a volume name or a host path, or even ignored.
+ string source = 2;
+
+ // Target path in container
+ string target = 3;
+
+ // Options specifies zero or more fstab style mount options.
+ repeated string options = 4;
+}
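
Since `Mount` mirrors the mount(2) fields spelled out in the comment above, constructing one is mostly a matter of filling in type, source, target and fstab-style options. An illustrative sketch with made-up paths:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/api/types"
)

func main() {
	m := &types.Mount{
		Type:    "bind",
		Source:  "/var/lib/example",      // host path (made up)
		Target:  "/data",                 // path inside the container
		Options: []string{"rbind", "ro"}, // fstab-style options
	}
	fmt.Printf("%s: %s -> %s %v\n", m.GetType(), m.GetSource(), m.GetTarget(), m.GetOptions())
}
```
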
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/platform.pb.go b/tools/vendor/github.com/containerd/containerd/api/types/platform.pb.go
new file mode 100644
index 000000000..3e206cbaf
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/platform.pb.go
@@ -0,0 +1,184 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/types/platform.proto
+
+package types
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+// Platform follows the structure of the OCI platform specification, from
+// descriptors.
+type Platform struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ OS string `protobuf:"bytes,1,opt,name=os,proto3" json:"os,omitempty"`
+ Architecture string `protobuf:"bytes,2,opt,name=architecture,proto3" json:"architecture,omitempty"`
+ Variant string `protobuf:"bytes,3,opt,name=variant,proto3" json:"variant,omitempty"`
+}
+
+func (x *Platform) Reset() {
+ *x = Platform{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_types_platform_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Platform) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Platform) ProtoMessage() {}
+
+func (x *Platform) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_types_platform_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Platform.ProtoReflect.Descriptor instead.
+func (*Platform) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_types_platform_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *Platform) GetOS() string {
+ if x != nil {
+ return x.OS
+ }
+ return ""
+}
+
+func (x *Platform) GetArchitecture() string {
+ if x != nil {
+ return x.Architecture
+ }
+ return ""
+}
+
+func (x *Platform) GetVariant() string {
+ if x != nil {
+ return x.Variant
+ }
+ return ""
+}
+
+var File_github_com_containerd_containerd_api_types_platform_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_types_platform_proto_rawDesc = []byte{
+ 0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x6c, 0x61,
+ 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x10, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x22, 0x58, 0x0a,
+ 0x08, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x12, 0x0e, 0x0a, 0x02, 0x6f, 0x73, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x6f, 0x73, 0x12, 0x22, 0x0a, 0x0c, 0x61, 0x72, 0x63,
+ 0x68, 0x69, 0x74, 0x65, 0x63, 0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x0c, 0x61, 0x72, 0x63, 0x68, 0x69, 0x74, 0x65, 0x63, 0x74, 0x75, 0x72, 0x65, 0x12, 0x18, 0x0a,
+ 0x07, 0x76, 0x61, 0x72, 0x69, 0x61, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07,
+ 0x76, 0x61, 0x72, 0x69, 0x61, 0x6e, 0x74, 0x42, 0x32, 0x5a, 0x30, 0x67, 0x69, 0x74, 0x68, 0x75,
+ 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f,
+ 0x74, 0x79, 0x70, 0x65, 0x73, 0x3b, 0x74, 0x79, 0x70, 0x65, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f,
+ 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_types_platform_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_types_platform_proto_rawDescData = file_github_com_containerd_containerd_api_types_platform_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_types_platform_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_types_platform_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_types_platform_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_types_platform_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_types_platform_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_types_platform_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
+var file_github_com_containerd_containerd_api_types_platform_proto_goTypes = []interface{}{
+ (*Platform)(nil), // 0: containerd.types.Platform
+}
+var file_github_com_containerd_containerd_api_types_platform_proto_depIdxs = []int32{
+ 0, // [0:0] is the sub-list for method output_type
+ 0, // [0:0] is the sub-list for method input_type
+ 0, // [0:0] is the sub-list for extension type_name
+ 0, // [0:0] is the sub-list for extension extendee
+ 0, // [0:0] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_types_platform_proto_init() }
+func file_github_com_containerd_containerd_api_types_platform_proto_init() {
+ if File_github_com_containerd_containerd_api_types_platform_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_types_platform_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Platform); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_types_platform_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 1,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_types_platform_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_types_platform_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_types_platform_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_types_platform_proto = out.File
+ file_github_com_containerd_containerd_api_types_platform_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_types_platform_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_types_platform_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/platform.proto b/tools/vendor/github.com/containerd/containerd/api/types/platform.proto
new file mode 100644
index 000000000..b6088251f
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/platform.proto
@@ -0,0 +1,29 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.types;
+
+option go_package = "github.com/containerd/containerd/api/types;types";
+
+// Platform follows the structure of the OCI platform specification, from
+// descriptors.
+message Platform {
+ string os = 1;
+ string architecture = 2;
+ string variant = 3;
+}
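
A short sketch (illustrative only) of filling `Platform` from the running Go toolchain's constants; the variant field is left empty, which is common for platforms that do not define one:

```go
package main

import (
	"fmt"
	"runtime"

	"github.com/containerd/containerd/api/types"
)

func main() {
	p := &types.Platform{
		OS:           runtime.GOOS,
		Architecture: runtime.GOARCH,
		// Variant is left empty; it is only meaningful for some architectures
		// (for example "v7" on 32-bit ARM).
	}
	fmt.Println(p.GetOS() + "/" + p.GetArchitecture())
}
```
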
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/sandbox.pb.go b/tools/vendor/github.com/containerd/containerd/api/types/sandbox.pb.go
new file mode 100644
index 000000000..67594f416
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/sandbox.pb.go
@@ -0,0 +1,346 @@
+//
+//Copyright The containerd Authors.
+//
+//Licensed under the Apache License, Version 2.0 (the "License");
+//you may not use this file except in compliance with the License.
+//You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+//Unless required by applicable law or agreed to in writing, software
+//distributed under the License is distributed on an "AS IS" BASIS,
+//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+//See the License for the specific language governing permissions and
+//limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.28.1
+// protoc v3.20.1
+// source: github.com/containerd/containerd/api/types/sandbox.proto
+
+package types
+
+import (
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+ anypb "google.golang.org/protobuf/types/known/anypb"
+ timestamppb "google.golang.org/protobuf/types/known/timestamppb"
+ reflect "reflect"
+ sync "sync"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+// Sandbox represents a sandbox metadata object that keeps all info required by controller to
+// work with a particular instance.
+type Sandbox struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // SandboxID is a unique instance identifier within namespace
+ SandboxID string `protobuf:"bytes,1,opt,name=sandbox_id,json=sandboxId,proto3" json:"sandbox_id,omitempty"`
+ // Runtime specifies which runtime to use for executing this container.
+ Runtime *Sandbox_Runtime `protobuf:"bytes,2,opt,name=runtime,proto3" json:"runtime,omitempty"`
+	// Spec is sandbox configuration (kind of OCI runtime spec); the spec's data will be written to a config.json
+	// file in the bundle directory (similarly to the OCI spec).
+ Spec *anypb.Any `protobuf:"bytes,3,opt,name=spec,proto3" json:"spec,omitempty"`
+ // Labels provides an area to include arbitrary data on containers.
+ Labels map[string]string `protobuf:"bytes,4,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+ // CreatedAt is the time the container was first created.
+ CreatedAt *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
+ // UpdatedAt is the last time the container was mutated.
+ UpdatedAt *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=updated_at,json=updatedAt,proto3" json:"updated_at,omitempty"`
+ // Extensions allow clients to provide optional blobs that can be handled by runtime.
+ Extensions map[string]*anypb.Any `protobuf:"bytes,7,rep,name=extensions,proto3" json:"extensions,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+func (x *Sandbox) Reset() {
+ *x = Sandbox{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Sandbox) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Sandbox) ProtoMessage() {}
+
+func (x *Sandbox) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes[0]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Sandbox.ProtoReflect.Descriptor instead.
+func (*Sandbox) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *Sandbox) GetSandboxID() string {
+ if x != nil {
+ return x.SandboxID
+ }
+ return ""
+}
+
+func (x *Sandbox) GetRuntime() *Sandbox_Runtime {
+ if x != nil {
+ return x.Runtime
+ }
+ return nil
+}
+
+func (x *Sandbox) GetSpec() *anypb.Any {
+ if x != nil {
+ return x.Spec
+ }
+ return nil
+}
+
+func (x *Sandbox) GetLabels() map[string]string {
+ if x != nil {
+ return x.Labels
+ }
+ return nil
+}
+
+func (x *Sandbox) GetCreatedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.CreatedAt
+ }
+ return nil
+}
+
+func (x *Sandbox) GetUpdatedAt() *timestamppb.Timestamp {
+ if x != nil {
+ return x.UpdatedAt
+ }
+ return nil
+}
+
+func (x *Sandbox) GetExtensions() map[string]*anypb.Any {
+ if x != nil {
+ return x.Extensions
+ }
+ return nil
+}
+
+type Sandbox_Runtime struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // Name is the name of the runtime.
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ // Options specify additional runtime initialization options for the shim (this data will be available in StartShim).
+	// Typically this data is expected to be runtime shim implementation-specific.
+ Options *anypb.Any `protobuf:"bytes,2,opt,name=options,proto3" json:"options,omitempty"`
+}
+
+func (x *Sandbox_Runtime) Reset() {
+ *x = Sandbox_Runtime{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *Sandbox_Runtime) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*Sandbox_Runtime) ProtoMessage() {}
+
+func (x *Sandbox_Runtime) ProtoReflect() protoreflect.Message {
+ mi := &file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes[1]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use Sandbox_Runtime.ProtoReflect.Descriptor instead.
+func (*Sandbox_Runtime) Descriptor() ([]byte, []int) {
+ return file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescGZIP(), []int{0, 0}
+}
+
+func (x *Sandbox_Runtime) GetName() string {
+ if x != nil {
+ return x.Name
+ }
+ return ""
+}
+
+func (x *Sandbox_Runtime) GetOptions() *anypb.Any {
+ if x != nil {
+ return x.Options
+ }
+ return nil
+}
+
+var File_github_com_containerd_containerd_api_types_sandbox_proto protoreflect.FileDescriptor
+
+var file_github_com_containerd_containerd_api_types_sandbox_proto_rawDesc = []byte{
+ 0x0a, 0x38, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x73, 0x61, 0x6e,
+ 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x10, 0x63, 0x6f, 0x6e, 0x74,
+ 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x1a, 0x19, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x61, 0x6e,
+ 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f,
+ 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61,
+ 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xee, 0x04, 0x0a, 0x07, 0x53, 0x61, 0x6e,
+ 0x64, 0x62, 0x6f, 0x78, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x5f,
+ 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x73, 0x61, 0x6e, 0x64, 0x62, 0x6f,
+ 0x78, 0x49, 0x64, 0x12, 0x3b, 0x0a, 0x07, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x02,
+ 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,
+ 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2e, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e,
+ 0x52, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x52, 0x07, 0x72, 0x75, 0x6e, 0x74, 0x69, 0x6d, 0x65,
+ 0x12, 0x28, 0x0a, 0x04, 0x73, 0x70, 0x65, 0x63, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14,
+ 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
+ 0x2e, 0x41, 0x6e, 0x79, 0x52, 0x04, 0x73, 0x70, 0x65, 0x63, 0x12, 0x3d, 0x0a, 0x06, 0x6c, 0x61,
+ 0x62, 0x65, 0x6c, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x63, 0x6f, 0x6e,
+ 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2e, 0x53, 0x61,
+ 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72,
+ 0x79, 0x52, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x12, 0x39, 0x0a, 0x0a, 0x63, 0x72, 0x65,
+ 0x61, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e,
+ 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
+ 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x63, 0x72, 0x65, 0x61, 0x74,
+ 0x65, 0x64, 0x41, 0x74, 0x12, 0x39, 0x0a, 0x0a, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x5f,
+ 0x61, 0x74, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
+ 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x64, 0x41, 0x74, 0x12,
+ 0x49, 0x0a, 0x0a, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x07, 0x20,
+ 0x03, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64,
+ 0x2e, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2e, 0x53, 0x61, 0x6e, 0x64, 0x62, 0x6f, 0x78, 0x2e, 0x45,
+ 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0a,
+ 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x4d, 0x0a, 0x07, 0x52, 0x75,
+ 0x6e, 0x74, 0x69, 0x6d, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x07, 0x6f, 0x70, 0x74,
+ 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x41, 0x6e, 0x79,
+ 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x39, 0x0a, 0x0b, 0x4c, 0x61, 0x62,
+ 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61,
+ 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
+ 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x53, 0x0a, 0x0f, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f,
+ 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x2a, 0x0a, 0x05, 0x76, 0x61, 0x6c,
+ 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x41, 0x6e, 0x79, 0x52, 0x05,
+ 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x42, 0x32, 0x5a, 0x30, 0x67, 0x69, 0x74,
+ 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65,
+ 0x72, 0x64, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x61, 0x70,
+ 0x69, 0x2f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x3b, 0x74, 0x79, 0x70, 0x65, 0x73, 0x62, 0x06, 0x70,
+ 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescOnce sync.Once
+ file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescData = file_github_com_containerd_containerd_api_types_sandbox_proto_rawDesc
+)
+
+func file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescGZIP() []byte {
+ file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescOnce.Do(func() {
+ file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescData)
+ })
+ return file_github_com_containerd_containerd_api_types_sandbox_proto_rawDescData
+}
+
+var file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
+var file_github_com_containerd_containerd_api_types_sandbox_proto_goTypes = []interface{}{
+ (*Sandbox)(nil), // 0: containerd.types.Sandbox
+ (*Sandbox_Runtime)(nil), // 1: containerd.types.Sandbox.Runtime
+ nil, // 2: containerd.types.Sandbox.LabelsEntry
+ nil, // 3: containerd.types.Sandbox.ExtensionsEntry
+ (*anypb.Any)(nil), // 4: google.protobuf.Any
+ (*timestamppb.Timestamp)(nil), // 5: google.protobuf.Timestamp
+}
+var file_github_com_containerd_containerd_api_types_sandbox_proto_depIdxs = []int32{
+ 1, // 0: containerd.types.Sandbox.runtime:type_name -> containerd.types.Sandbox.Runtime
+ 4, // 1: containerd.types.Sandbox.spec:type_name -> google.protobuf.Any
+ 2, // 2: containerd.types.Sandbox.labels:type_name -> containerd.types.Sandbox.LabelsEntry
+ 5, // 3: containerd.types.Sandbox.created_at:type_name -> google.protobuf.Timestamp
+ 5, // 4: containerd.types.Sandbox.updated_at:type_name -> google.protobuf.Timestamp
+ 3, // 5: containerd.types.Sandbox.extensions:type_name -> containerd.types.Sandbox.ExtensionsEntry
+ 4, // 6: containerd.types.Sandbox.Runtime.options:type_name -> google.protobuf.Any
+ 4, // 7: containerd.types.Sandbox.ExtensionsEntry.value:type_name -> google.protobuf.Any
+ 8, // [8:8] is the sub-list for method output_type
+ 8, // [8:8] is the sub-list for method input_type
+ 8, // [8:8] is the sub-list for extension type_name
+ 8, // [8:8] is the sub-list for extension extendee
+ 0, // [0:8] is the sub-list for field type_name
+}
+
+func init() { file_github_com_containerd_containerd_api_types_sandbox_proto_init() }
+func file_github_com_containerd_containerd_api_types_sandbox_proto_init() {
+ if File_github_com_containerd_containerd_api_types_sandbox_proto != nil {
+ return
+ }
+ if !protoimpl.UnsafeEnabled {
+ file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Sandbox); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*Sandbox_Runtime); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_github_com_containerd_containerd_api_types_sandbox_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 4,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_github_com_containerd_containerd_api_types_sandbox_proto_goTypes,
+ DependencyIndexes: file_github_com_containerd_containerd_api_types_sandbox_proto_depIdxs,
+ MessageInfos: file_github_com_containerd_containerd_api_types_sandbox_proto_msgTypes,
+ }.Build()
+ File_github_com_containerd_containerd_api_types_sandbox_proto = out.File
+ file_github_com_containerd_containerd_api_types_sandbox_proto_rawDesc = nil
+ file_github_com_containerd_containerd_api_types_sandbox_proto_goTypes = nil
+ file_github_com_containerd_containerd_api_types_sandbox_proto_depIdxs = nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/api/types/sandbox.proto b/tools/vendor/github.com/containerd/containerd/api/types/sandbox.proto
new file mode 100644
index 000000000..b60770619
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/api/types/sandbox.proto
@@ -0,0 +1,51 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+syntax = "proto3";
+
+package containerd.types;
+
+import "google/protobuf/any.proto";
+import "google/protobuf/timestamp.proto";
+
+option go_package = "github.com/containerd/containerd/api/types;types";
+
+// Sandbox represents a sandbox metadata object that keeps all info required by controller to
+// work with a particular instance.
+message Sandbox {
+ // SandboxID is a unique instance identifier within namespace
+ string sandbox_id = 1;
+ message Runtime {
+ // Name is the name of the runtime.
+ string name = 1;
+ // Options specify additional runtime initialization options for the shim (this data will be available in StartShim).
+    // Typically this data is expected to be runtime shim implementation-specific.
+ google.protobuf.Any options = 2;
+ }
+ // Runtime specifies which runtime to use for executing this container.
+ Runtime runtime = 2;
+  // Spec is sandbox configuration (kind of OCI runtime spec); the spec's data will be written to a config.json
+  // file in the bundle directory (similarly to the OCI spec).
+ google.protobuf.Any spec = 3;
+ // Labels provides an area to include arbitrary data on containers.
+  map<string, string> labels = 4;
+ // CreatedAt is the time the container was first created.
+ google.protobuf.Timestamp created_at = 5;
+ // UpdatedAt is the last time the container was mutated.
+ google.protobuf.Timestamp updated_at = 6;
+ // Extensions allow clients to provide optional blobs that can be handled by runtime.
+  map<string, google.protobuf.Any> extensions = 7;
+}
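
Putting the pieces together, the sketch below (placeholder IDs and labels, and a `Platform` message wrapped only to have a concrete `Any` payload) assembles a `Sandbox` with its nested `Runtime` message and an extensions entry:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containerd/containerd/api/types"
	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	// Extensions carry opaque, runtime-defined blobs; a Platform message is
	// wrapped here only to have a concrete Any payload for the sketch.
	ext, err := anypb.New(&types.Platform{OS: "linux", Architecture: "amd64"})
	if err != nil {
		log.Fatal(err)
	}

	sb := &types.Sandbox{
		SandboxID: "example-sandbox",                                      // placeholder ID
		Runtime:   &types.Sandbox_Runtime{Name: "io.containerd.runc.v2"}, // example runtime name
		Labels:    map[string]string{"tier": "test"},
		CreatedAt: timestamppb.Now(),
		UpdatedAt: timestamppb.Now(),
		Extensions: map[string]*anypb.Any{
			"org.example.extension": ext,
		},
	}
	fmt.Println(sb.GetSandboxID(), sb.GetRuntime().GetName(), len(sb.GetExtensions()))
}
```
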
diff --git a/tools/vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go b/tools/vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go
new file mode 100644
index 000000000..3516494ac
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go
@@ -0,0 +1,28 @@
+//go:build gofuzz
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package compression
+
+import (
+ "bytes"
+)
+
+func FuzzDecompressStream(data []byte) int {
+ _, _ = DecompressStream(bytes.NewReader(data))
+ return 1
+}
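
The harness above targets the classic go-fuzz driver (note the `gofuzz` build tag). Assuming Go 1.18+ is available, the same check can be expressed as a native fuzz test; this is an illustrative alternative, not part of the patch:

```go
package compression_test

import (
	"bytes"
	"testing"

	"github.com/containerd/containerd/archive/compression"
)

// Example invocation: go test -fuzz=FuzzDecompressStream ./archive/compression/...
func FuzzDecompressStream(f *testing.F) {
	f.Add([]byte("\x1f\x8b")) // gzip magic bytes as a seed corpus entry
	f.Fuzz(func(t *testing.T, data []byte) {
		// As in the go-fuzz harness above, the only concern is that arbitrary
		// input does not panic; return values are intentionally discarded.
		_, _ = compression.DecompressStream(bytes.NewReader(data))
	})
}
```
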
diff --git a/tools/vendor/github.com/containerd/containerd/archive/link_default.go b/tools/vendor/github.com/containerd/containerd/archive/link_default.go
new file mode 100644
index 000000000..84a8997eb
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/archive/link_default.go
@@ -0,0 +1,25 @@
+//go:build !freebsd
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package archive
+
+import "os"
+
+func link(oldname, newname string) error {
+ return os.Link(oldname, newname)
+}
diff --git a/tools/vendor/github.com/containerd/containerd/archive/link_freebsd.go b/tools/vendor/github.com/containerd/containerd/archive/link_freebsd.go
new file mode 100644
index 000000000..2097db06b
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/archive/link_freebsd.go
@@ -0,0 +1,82 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package archive
+
+import (
+ "os"
+ "syscall"
+
+ "golang.org/x/sys/unix"
+)
+
+func link(oldname, newname string) error {
+ e := ignoringEINTR(func() error {
+ return unix.Linkat(unix.AT_FDCWD, oldname, unix.AT_FDCWD, newname, 0)
+ })
+ if e != nil {
+ return &os.LinkError{Op: "link", Old: oldname, New: newname, Err: e}
+ }
+ return nil
+}
+
+// The following contents were copied from Go 1.18.2.
+// Use of this source code is governed by the following
+// BSD-style license:
+//
+// Copyright (c) 2009 The Go Authors. All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// ignoringEINTR makes a function call and repeats it if it returns an
+// EINTR error. This appears to be required even though we install all
+// signal handlers with SA_RESTART: see #22838, #38033, #38836, #40846.
+// Also #20400 and #36644 are issues in which a signal handler is
+// installed without setting SA_RESTART. None of these are the common case,
+// but there are enough of them that it seems that we can't avoid
+// an EINTR loop.
+func ignoringEINTR(fn func() error) error {
+ for {
+ err := fn()
+ if err != syscall.EINTR {
+ return err
+ }
+ }
+}
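
The retry-on-EINTR wrapper above is generic and not tied to linkat. A minimal sketch of the same pattern applied to an arbitrary syscall follows; the helper is copied from the vendored file, and the path and `unix.Chmod` call are only illustrative assumptions, not part of the change above.

```go
package main

import (
	"fmt"
	"syscall"

	"golang.org/x/sys/unix"
)

// ignoringEINTR repeats fn until it stops failing with EINTR, mirroring the
// helper vendored above.
func ignoringEINTR(fn func() error) error {
	for {
		err := fn()
		if err != syscall.EINTR {
			return err
		}
	}
}

func main() {
	// Hypothetical use: retry a chmod that might be interrupted by a signal.
	err := ignoringEINTR(func() error {
		return unix.Chmod("/tmp/example", 0o644)
	})
	fmt.Println("chmod result:", err)
}
```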
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tar.go b/tools/vendor/github.com/containerd/containerd/archive/tar.go
index a57074e77..28b623dcf 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/tar.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/tar.go
@@ -24,13 +24,14 @@ import (
"io"
"os"
"path/filepath"
- "runtime"
"strings"
"sync"
"syscall"
"time"
+ "github.com/containerd/containerd/archive/tarheader"
"github.com/containerd/containerd/log"
+ "github.com/containerd/containerd/pkg/epoch"
"github.com/containerd/containerd/pkg/userns"
"github.com/containerd/continuity/fs"
)
@@ -51,11 +52,11 @@ var errInvalidArchive = errors.New("invalid archive")
// files will be prepended with the prefix ".wh.". This style is
// based off AUFS whiteouts.
// See https://github.com/opencontainers/image-spec/blob/main/layer.md
-func Diff(ctx context.Context, a, b string) io.ReadCloser {
+func Diff(ctx context.Context, a, b string, opts ...WriteDiffOpt) io.ReadCloser {
r, w := io.Pipe()
go func() {
- err := WriteDiff(ctx, w, a, b)
+ err := WriteDiff(ctx, w, a, b, opts...)
if err != nil {
log.G(ctx).WithError(err).Debugf("write diff failed")
}
@@ -81,6 +82,10 @@ func WriteDiff(ctx context.Context, w io.Writer, a, b string, opts ...WriteDiffO
return fmt.Errorf("failed to apply option: %w", err)
}
}
+ if tm := epoch.FromContext(ctx); tm != nil && options.SourceDateEpoch == nil {
+ options.SourceDateEpoch = tm
+ }
+
if options.writeDiffFunc == nil {
options.writeDiffFunc = writeDiffNaive
}
@@ -95,8 +100,14 @@ func WriteDiff(ctx context.Context, w io.Writer, a, b string, opts ...WriteDiffO
// files will be prepended with the prefix ".wh.". This style is
// based off AUFS whiteouts.
// See https://github.com/opencontainers/image-spec/blob/main/layer.md
-func writeDiffNaive(ctx context.Context, w io.Writer, a, b string, _ WriteDiffOptions) error {
- cw := NewChangeWriter(w, b)
+func writeDiffNaive(ctx context.Context, w io.Writer, a, b string, o WriteDiffOptions) error {
+ var opts []ChangeWriterOpt
+ if o.SourceDateEpoch != nil {
+ opts = append(opts,
+ WithModTimeUpperBound(*o.SourceDateEpoch),
+ WithWhiteoutTime(*o.SourceDateEpoch))
+ }
+ cw := NewChangeWriter(w, b, opts...)
err := fs.Changes(ctx, a, b, cw.HandleChange)
if err != nil {
return fmt.Errorf("failed to create diff tar stream: %w", err)
@@ -120,6 +131,8 @@ const (
whiteoutOpaqueDir = whiteoutMetaPrefix + ".opq"
paxSchilyXattr = "SCHILY.xattr."
+
+ userXattrPrefix = "user."
)
// Apply applies a tar stream of an OCI style diff tar.
@@ -288,7 +301,7 @@ func applyNaive(ctx context.Context, root string, r io.Reader, options ApplyOpti
srcData := io.Reader(tr)
srcHdr := hdr
- if err := createTarFile(ctx, path, root, srcHdr, srcData); err != nil {
+ if err := createTarFile(ctx, path, root, srcHdr, srcData, options.NoSameOwner); err != nil {
return 0, err
}
@@ -313,7 +326,7 @@ func applyNaive(ctx context.Context, root string, r io.Reader, options ApplyOpti
return size, nil
}
-func createTarFile(ctx context.Context, path, extractDir string, hdr *tar.Header, reader io.Reader) error {
+func createTarFile(ctx context.Context, path, extractDir string, hdr *tar.Header, reader io.Reader, noSameOwner bool) error {
// hdr.Mode is in linux format, which we can use for syscalls,
// but for os.Foo() calls we need the mode converted to os.FileMode,
// so use hdrInfo.Mode() (they differ for e.g. setuid bits)
@@ -329,6 +342,7 @@ func createTarFile(ctx context.Context, path, extractDir string, hdr *tar.Header
}
}
+ //nolint:staticcheck // TypeRegA is deprecated but we may still receive an external tar with TypeRegA
case tar.TypeReg, tar.TypeRegA:
file, err := openFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, hdrInfo.Mode())
if err != nil {
@@ -361,7 +375,7 @@ func createTarFile(ctx context.Context, path, extractDir string, hdr *tar.Header
return err
}
- if err := os.Link(targetPath, path); err != nil {
+ if err := link(targetPath, path); err != nil {
return err
}
@@ -378,8 +392,7 @@ func createTarFile(ctx context.Context, path, extractDir string, hdr *tar.Header
return fmt.Errorf("unhandled tar header type %d", hdr.Typeflag)
}
- // Lchown is not supported on Windows.
- if runtime.GOOS != "windows" {
+ if !noSameOwner {
if err := os.Lchown(path, hdr.Uid, hdr.Gid); err != nil {
err = fmt.Errorf("failed to Lchown %q for UID %d, GID %d: %w", path, hdr.Uid, hdr.Gid, err)
if errors.Is(err, syscall.EINVAL) && userns.RunningInUserNS() {
@@ -393,11 +406,19 @@ func createTarFile(ctx context.Context, path, extractDir string, hdr *tar.Header
if strings.HasPrefix(key, paxSchilyXattr) {
key = key[len(paxSchilyXattr):]
if err := setxattr(path, key, value); err != nil {
+ if errors.Is(err, syscall.EPERM) && strings.HasPrefix(key, userXattrPrefix) {
+ // In the user.* namespace, only regular files and directories can have extended attributes.
+ // See https://man7.org/linux/man-pages/man7/xattr.7.html for details.
+ if fi, err := os.Lstat(path); err == nil && (!fi.Mode().IsRegular() && !fi.Mode().IsDir()) {
+ log.G(ctx).WithError(err).Warnf("ignored xattr %s in archive", key)
+ continue
+ }
+ }
if errors.Is(err, syscall.ENOTSUP) {
log.G(ctx).WithError(err).Warnf("ignored xattr %s in archive", key)
continue
}
- return err
+ return fmt.Errorf("failed to setxattr %q for key %q: %w", path, key, err)
}
}
}
@@ -481,26 +502,48 @@ func mkparent(ctx context.Context, path, root string, parents []string) error {
// See also https://github.com/opencontainers/image-spec/blob/main/layer.md for details
// about OCI layers
type ChangeWriter struct {
- tw *tar.Writer
- source string
- whiteoutT time.Time
- inodeSrc map[uint64]string
- inodeRefs map[uint64][]string
- addedDirs map[string]struct{}
+ tw *tar.Writer
+ source string
+ modTimeUpperBound *time.Time
+ whiteoutT time.Time
+ inodeSrc map[uint64]string
+ inodeRefs map[uint64][]string
+ addedDirs map[string]struct{}
+}
+
+// ChangeWriterOpt can be specified in NewChangeWriter.
+type ChangeWriterOpt func(cw *ChangeWriter)
+
+// WithModTimeUpperBound sets the mod time upper bound.
+func WithModTimeUpperBound(tm time.Time) ChangeWriterOpt {
+ return func(cw *ChangeWriter) {
+ cw.modTimeUpperBound = &tm
+ }
+}
+
+// WithWhiteoutTime sets the whiteout timestamp.
+func WithWhiteoutTime(tm time.Time) ChangeWriterOpt {
+ return func(cw *ChangeWriter) {
+ cw.whiteoutT = tm
+ }
}
// NewChangeWriter returns ChangeWriter that writes tar stream of the source directory
// to the privided writer. Change information (add/modify/delete/unmodified) for each
// file needs to be passed through HandleChange method.
-func NewChangeWriter(w io.Writer, source string) *ChangeWriter {
- return &ChangeWriter{
+func NewChangeWriter(w io.Writer, source string, opts ...ChangeWriterOpt) *ChangeWriter {
+ cw := &ChangeWriter{
tw: tar.NewWriter(w),
source: source,
- whiteoutT: time.Now(),
+ whiteoutT: time.Now(), // can be overridden with WithWhiteoutTime(time.Time) ChangeWriterOpt .
inodeSrc: map[uint64]string{},
inodeRefs: map[uint64][]string{},
addedDirs: map[string]struct{}{},
}
+ for _, o := range opts {
+ o(cw)
+ }
+ return cw
}
// HandleChange receives filesystem change information and reflect that information to
@@ -544,7 +587,8 @@ func (cw *ChangeWriter) HandleChange(k fs.ChangeKind, p string, f os.FileInfo, e
}
}
- hdr, err := tar.FileInfoHeader(f, link)
+ // Use FileInfoHeaderNoLookups to avoid propagating user names and group names from the host
+ hdr, err := tarheader.FileInfoHeaderNoLookups(f, link)
if err != nil {
return err
}
@@ -553,6 +597,9 @@ func (cw *ChangeWriter) HandleChange(k fs.ChangeKind, p string, f os.FileInfo, e
// truncate timestamp for compatibility. without PAX stdlib rounds timestamps instead
hdr.Format = tar.FormatPAX
+ if cw.modTimeUpperBound != nil && hdr.ModTime.After(*cw.modTimeUpperBound) {
+ hdr.ModTime = *cw.modTimeUpperBound
+ }
hdr.ModTime = hdr.ModTime.Truncate(time.Second)
hdr.AccessTime = time.Time{}
hdr.ChangeTime = time.Time{}
@@ -564,11 +611,9 @@ func (cw *ChangeWriter) HandleChange(k fs.ChangeKind, p string, f os.FileInfo, e
return fmt.Errorf("failed to make path relative: %w", err)
}
}
- name, err = tarName(name)
- if err != nil {
- return fmt.Errorf("cannot canonicalize path: %w", err)
- }
- // suffix with '/' for directories
+ // Canonicalize to POSIX-style paths using forward slashes. Directory
+ // entries must end with a slash.
+ name = filepath.ToSlash(name)
if f.IsDir() && !strings.HasSuffix(name, "/") {
name += "/"
}
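
The new `ChangeWriterOpt` values above are what `writeDiffNaive` wires up when `SourceDateEpoch` is set. Below is a hedged sketch of driving `NewChangeWriter` directly with a clamped timestamp; the snapshot directories and output path are hypothetical, and it assumes `ChangeWriter.Close` flushes the underlying tar writer as in upstream containerd.

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/containerd/containerd/archive"
	"github.com/containerd/continuity/fs"
)

func main() {
	ctx := context.Background()

	// Hypothetical lower/upper snapshot directories to diff.
	lower, upper := "/tmp/snap-parent", "/tmp/snap-child"

	out, err := os.Create("/tmp/layer.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Clamp mod times and whiteout times to a fixed epoch so the resulting
	// tar is reproducible, mirroring what writeDiffNaive does when
	// SourceDateEpoch is set.
	epoch := time.Unix(0, 0).UTC()
	cw := archive.NewChangeWriter(out, upper,
		archive.WithModTimeUpperBound(epoch),
		archive.WithWhiteoutTime(epoch),
	)

	// Feed filesystem changes between the two directories into the writer.
	if err := fs.Changes(ctx, lower, upper, cw.HandleChange); err != nil {
		log.Fatal(err)
	}
	if err := cw.Close(); err != nil {
		log.Fatal(err)
	}
}
```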
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tar_mostunix.go b/tools/vendor/github.com/containerd/containerd/archive/tar_mostunix.go
index d2d970356..ca5e602a0 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/tar_mostunix.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/tar_mostunix.go
@@ -1,5 +1,4 @@
//go:build !windows && !freebsd
-// +build !windows,!freebsd
/*
Copyright The containerd Authors.
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tar_opts.go b/tools/vendor/github.com/containerd/containerd/archive/tar_opts.go
index 58985555a..ac3a03c8d 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/tar_opts.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/tar_opts.go
@@ -20,6 +20,7 @@ import (
"archive/tar"
"context"
"io"
+ "time"
)
// ApplyOptions provides additional options for an Apply operation
@@ -27,6 +28,7 @@ type ApplyOptions struct {
Filter Filter // Filter tar headers
ConvertWhiteout ConvertWhiteout // Convert whiteout files
Parents []string // Parent directories to handle inherited attributes without CoW
+ NoSameOwner bool // NoSameOwner will not attempt to preserve the owner specified in the tar archive.
applyFunc func(context.Context, string, io.Reader, ApplyOptions) (int64, error)
}
@@ -61,6 +63,15 @@ func WithConvertWhiteout(c ConvertWhiteout) ApplyOpt {
}
}
+// WithNoSameOwner is same as '--no-same-owner` in 'tar' command.
+// It'll skip attempt to preserve the owner specified in the tar archive.
+func WithNoSameOwner() ApplyOpt {
+ return func(options *ApplyOptions) error {
+ options.NoSameOwner = true
+ return nil
+ }
+}
+
// WithParents provides parent directories for resolving inherited attributes
// directory from the filesystem.
// Inherited attributes are searched from first to last, making the first
@@ -79,7 +90,22 @@ type WriteDiffOptions struct {
ParentLayers []string // Windows needs the full list of parent layers
writeDiffFunc func(context.Context, io.Writer, string, string, WriteDiffOptions) error
+
+ // SourceDateEpoch specifies the following timestamps to provide control for reproducibility.
+ // - The upper bound timestamp of the diff contents
+ // - The timestamp of the whiteouts
+ //
+ // See also https://reproducible-builds.org/docs/source-date-epoch/ .
+ SourceDateEpoch *time.Time
}
// WriteDiffOpt allows setting mutable archive write properties on creation
type WriteDiffOpt func(options *WriteDiffOptions) error
+
+// WithSourceDateEpoch specifies the SOURCE_DATE_EPOCH without touching the env vars.
+func WithSourceDateEpoch(tm *time.Time) WriteDiffOpt {
+ return func(options *WriteDiffOptions) error {
+ options.SourceDateEpoch = tm
+ return nil
+ }
+}
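
Together, `WithSourceDateEpoch` and `WithNoSameOwner` cover the two sides of the layer lifecycle touched by this change: reproducible diff creation and unprivileged-friendly extraction. A minimal sketch, assuming hypothetical directory and file paths:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/containerd/containerd/archive"
)

func main() {
	ctx := context.Background()

	// Produce a reproducible diff: mod times newer than the epoch are clamped
	// and whiteout entries use the same fixed timestamp.
	epoch := time.Unix(1136214245, 0).UTC() // hypothetical SOURCE_DATE_EPOCH value
	out, err := os.Create("/tmp/layer.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if err := archive.WriteDiff(ctx, out, "/tmp/snap-parent", "/tmp/snap-child",
		archive.WithSourceDateEpoch(&epoch)); err != nil {
		log.Fatal(err)
	}

	// Apply a layer without chowning entries to the UIDs/GIDs recorded in the
	// tar, the equivalent of `tar --no-same-owner`.
	in, err := os.Open("/tmp/layer.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	if _, err := archive.Apply(ctx, "/tmp/rootfs", in, archive.WithNoSameOwner()); err != nil {
		log.Fatal(err)
	}
}
```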
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tar_unix.go b/tools/vendor/github.com/containerd/containerd/archive/tar_unix.go
index 2f3a3a392..8e883eee6 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/tar_unix.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/tar_unix.go
@@ -1,5 +1,4 @@
//go:build !windows
-// +build !windows
/*
Copyright The containerd Authors.
@@ -34,10 +33,6 @@ import (
"golang.org/x/sys/unix"
)
-func tarName(p string) (string, error) {
- return p, nil
-}
-
func chmodTarEntry(perm os.FileMode) os.FileMode {
return perm
}
@@ -62,8 +57,7 @@ func setHeaderForSpecialDevice(hdr *tar.Header, name string, fi os.FileInfo) err
return errors.New("unsupported stat type")
}
- // Rdev is int32 on darwin/bsd, int64 on linux/solaris
- rdev := uint64(s.Rdev) // nolint: unconvert
+ rdev := uint64(s.Rdev) //nolint:nolintlint,unconvert // rdev is int32 on darwin/bsd, int64 on linux/solaris
// Currently go does not fill in the major/minors
if s.Mode&syscall.S_IFBLK != 0 ||
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tar_windows.go b/tools/vendor/github.com/containerd/containerd/archive/tar_windows.go
index 4b71c1e30..dc8f64ee4 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/tar_windows.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/tar_windows.go
@@ -23,24 +23,9 @@ import (
"os"
"strings"
- "github.com/containerd/containerd/sys"
+ "github.com/moby/sys/sequential"
)
-// tarName returns platform-specific filepath
-// to canonical posix-style path for tar archival. p is relative
-// path.
-func tarName(p string) (string, error) {
- // windows: convert windows style relative path with backslashes
- // into forward slashes. Since windows does not allow '/' or '\'
- // in file names, it is mostly safe to replace however we must
- // check just in case
- if strings.Contains(p, "/") {
- return "", fmt.Errorf("windows path contains forward slash: %s", p)
- }
-
- return strings.Replace(p, string(os.PathSeparator), "/", -1), nil
-}
-
// chmodTarEntry is used to adjust the file permissions used in tar header based
// on the platform the archival is done.
func chmodTarEntry(perm os.FileMode) os.FileMode {
@@ -57,15 +42,15 @@ func setHeaderForSpecialDevice(*tar.Header, string, os.FileInfo) error {
}
func open(p string) (*os.File, error) {
- // We use sys.OpenSequential to ensure we use sequential file
- // access on Windows to avoid depleting the standby list.
- return sys.OpenSequential(p)
+ // We use sequential file access to avoid depleting the standby list on
+ // Windows.
+ return sequential.Open(p)
}
func openFile(name string, flag int, perm os.FileMode) (*os.File, error) {
- // Source is regular file. We use sys.OpenFileSequential to use sequential
- // file access to avoid depleting the standby list on Windows.
- return sys.OpenFileSequential(name, flag, perm)
+ // Source is regular file. We use sequential file access to avoid depleting
+ // the standby list on Windows.
+ return sequential.OpenFile(name, flag, perm)
}
func mkdir(path string, perm os.FileMode) error {
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tarheader/tarheader.go b/tools/vendor/github.com/containerd/containerd/archive/tarheader/tarheader.go
new file mode 100644
index 000000000..2f93842c1
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/archive/tarheader/tarheader.go
@@ -0,0 +1,82 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Portions from https://github.com/moby/moby/blob/v23.0.1/pkg/archive/archive.go#L419-L464
+ Copyright (C) Docker/Moby authors.
+ Licensed under the Apache License, Version 2.0
+ NOTICE: https://github.com/moby/moby/blob/v23.0.1/NOTICE
+*/
+
+package tarheader
+
+import (
+ "archive/tar"
+ "os"
+)
+
+// nosysFileInfo hides the system-dependent info of the wrapped FileInfo to
+// prevent tar.FileInfoHeader from introspecting it and potentially calling into
+// glibc.
+//
+// From https://github.com/moby/moby/blob/v23.0.1/pkg/archive/archive.go#L419-L434 .
+type nosysFileInfo struct {
+ os.FileInfo
+}
+
+func (fi nosysFileInfo) Sys() interface{} {
+ // A Sys value of type *tar.Header is safe as it is system-independent.
+ // The tar.FileInfoHeader function copies the fields into the returned
+ // header without performing any OS lookups.
+ if sys, ok := fi.FileInfo.Sys().(*tar.Header); ok {
+ return sys
+ }
+ return nil
+}
+
+// sysStat, if non-nil, populates hdr from system-dependent fields of fi.
+//
+// From https://github.com/moby/moby/blob/v23.0.1/pkg/archive/archive.go#L436-L437 .
+var sysStat func(fi os.FileInfo, hdr *tar.Header) error
+
+// FileInfoHeaderNoLookups creates a partially-populated tar.Header from fi.
+//
+// Compared to the archive/tar.FileInfoHeader function, this function is safe to
+// call from a chrooted process as it does not populate fields which would
+// require operating system lookups. It behaves identically to
+// tar.FileInfoHeader when fi is a FileInfo value returned from
+// tar.Header.FileInfo().
+//
+// When fi is a FileInfo for a native file, such as returned from os.Stat() and
+// os.Lstat(), the returned Header value differs from one returned from
+// tar.FileInfoHeader in the following ways. The Uname and Gname fields are not
+// set as OS lookups would be required to populate them. The AccessTime and
+// ChangeTime fields are not currently set (not yet implemented) although that
+// is subject to change. Callers which require the AccessTime or ChangeTime
+// fields to be zeroed should explicitly zero them out in the returned Header
+// value to avoid any compatibility issues in the future.
+//
+// From https://github.com/moby/moby/blob/v23.0.1/pkg/archive/archive.go#L439-L464 .
+func FileInfoHeaderNoLookups(fi os.FileInfo, link string) (*tar.Header, error) {
+ hdr, err := tar.FileInfoHeader(nosysFileInfo{fi}, link)
+ if err != nil {
+ return nil, err
+ }
+ if sysStat != nil {
+ return hdr, sysStat(fi, hdr)
+ }
+ return hdr, nil
+}
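
A short sketch of the lookup-free header construction described above; the path is only an example. Unlike `tar.FileInfoHeader`, no passwd/group lookups happen, so `Uname` and `Gname` stay empty and the call is safe from a chrooted process.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/containerd/containerd/archive/tarheader"
)

func main() {
	fi, err := os.Lstat("/etc/hostname") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	// Build a tar header without resolving user or group names.
	hdr, err := tarheader.FileInfoHeaderNoLookups(fi, "")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("name=%s uid=%d gid=%d uname=%q\n", hdr.Name, hdr.Uid, hdr.Gid, hdr.Uname)
}
```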
diff --git a/tools/vendor/github.com/containerd/containerd/archive/tarheader/tarheader_unix.go b/tools/vendor/github.com/containerd/containerd/archive/tarheader/tarheader_unix.go
new file mode 100644
index 000000000..98ad8f945
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/archive/tarheader/tarheader_unix.go
@@ -0,0 +1,59 @@
+//go:build !windows
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Portions from https://github.com/moby/moby/blob/v23.0.1/pkg/archive/archive_unix.go#L52-L70
+ Copyright (C) Docker/Moby authors.
+ Licensed under the Apache License, Version 2.0
+ NOTICE: https://github.com/moby/moby/blob/v23.0.1/NOTICE
+*/
+
+package tarheader
+
+import (
+ "archive/tar"
+ "os"
+ "syscall"
+
+ "golang.org/x/sys/unix"
+)
+
+func init() {
+ sysStat = statUnix
+}
+
+// statUnix populates hdr from system-dependent fields of fi without performing
+// any OS lookups.
+// From https://github.com/moby/moby/blob/v23.0.1/pkg/archive/archive_unix.go#L52-L70
+func statUnix(fi os.FileInfo, hdr *tar.Header) error {
+ s, ok := fi.Sys().(*syscall.Stat_t)
+ if !ok {
+ return nil
+ }
+
+ hdr.Uid = int(s.Uid)
+ hdr.Gid = int(s.Gid)
+
+ if s.Mode&unix.S_IFBLK != 0 ||
+ s.Mode&unix.S_IFCHR != 0 {
+ hdr.Devmajor = int64(unix.Major(uint64(s.Rdev)))
+ hdr.Devminor = int64(unix.Minor(uint64(s.Rdev)))
+ }
+
+ return nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/archive/time_unix.go b/tools/vendor/github.com/containerd/containerd/archive/time_unix.go
index 043e37453..b5a10d528 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/time_unix.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/time_unix.go
@@ -1,5 +1,4 @@
//go:build !windows
-// +build !windows
/*
Copyright The containerd Authors.
diff --git a/tools/vendor/github.com/containerd/containerd/archive/time_windows.go b/tools/vendor/github.com/containerd/containerd/archive/time_windows.go
index 71f397821..d906a3e5e 100644
--- a/tools/vendor/github.com/containerd/containerd/archive/time_windows.go
+++ b/tools/vendor/github.com/containerd/containerd/archive/time_windows.go
@@ -25,18 +25,17 @@ import (
// chtimes will set the create time on a file using the given modtime.
// This requires calling SetFileTime and explicitly including the create time.
func chtimes(path string, atime, mtime time.Time) error {
- ctimespec := windows.NsecToTimespec(mtime.UnixNano())
- pathp, e := windows.UTF16PtrFromString(path)
- if e != nil {
- return e
+ pathp, err := windows.UTF16PtrFromString(path)
+ if err != nil {
+ return err
}
- h, e := windows.CreateFile(pathp,
+ h, err := windows.CreateFile(pathp,
windows.FILE_WRITE_ATTRIBUTES, windows.FILE_SHARE_WRITE, nil,
windows.OPEN_EXISTING, windows.FILE_FLAG_BACKUP_SEMANTICS, 0)
- if e != nil {
- return e
+ if err != nil {
+ return err
}
defer windows.Close(h)
- c := windows.NsecToFiletime(windows.TimespecToNsec(ctimespec))
+ c := windows.NsecToFiletime(mtime.UnixNano())
return windows.SetFileTime(h, &c, nil, nil)
}
diff --git a/tools/vendor/github.com/containerd/containerd/containers/containers.go b/tools/vendor/github.com/containerd/containerd/containers/containers.go
index 7174bbd6a..49f389133 100644
--- a/tools/vendor/github.com/containerd/containerd/containers/containers.go
+++ b/tools/vendor/github.com/containerd/containerd/containers/containers.go
@@ -20,7 +20,7 @@ import (
"context"
"time"
- "github.com/gogo/protobuf/types"
+ "github.com/containerd/typeurl/v2"
)
// Container represents the set of data pinned by a container. Unless otherwise
@@ -53,7 +53,7 @@ type Container struct {
// container.
//
// This field is required but mutable.
- Spec *types.Any
+ Spec typeurl.Any
// SnapshotKey specifies the snapshot key to use for the container's root
// filesystem. When starting a task from this container, a caller should
@@ -75,13 +75,18 @@ type Container struct {
UpdatedAt time.Time
// Extensions stores client-specified metadata
- Extensions map[string]types.Any
+ Extensions map[string]typeurl.Any
+
+ // SandboxID is an identifier of sandbox this container belongs to.
+ //
+ // This property is optional, but can't be changed after creation.
+ SandboxID string
}
// RuntimeInfo holds runtime specific information
type RuntimeInfo struct {
Name string
- Options *types.Any
+ Options typeurl.Any
}
// Store interacts with the underlying container storage
diff --git a/tools/vendor/github.com/containerd/containerd/content/content.go b/tools/vendor/github.com/containerd/containerd/content/content.go
index ff17a8417..8eb1a1692 100644
--- a/tools/vendor/github.com/containerd/containerd/content/content.go
+++ b/tools/vendor/github.com/containerd/containerd/content/content.go
@@ -25,6 +25,26 @@ import (
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
+// Store combines the methods of content-oriented interfaces into a set that
+// are commonly provided by complete implementations.
+//
+// Overall content lifecycle:
+// - Ingester is used to initiate a write operation (aka ingestion)
+// - IngestManager is used to manage (e.g. list, abort) active ingestions
+// - Once an ingestion is complete (see Writer.Commit), Provider is used to
+// query a single piece of content by its digest
+// - Manager is used to manage (e.g. list, delete) previously committed content
+//
+// Note that until ingestion is complete, its content is not visible through
+// Provider or Manager. Once ingestion is complete, it is no longer exposed
+// through IngestManager.
+type Store interface {
+ Manager
+ Provider
+ IngestManager
+ Ingester
+}
+
// ReaderAt extends the standard io.ReaderAt interface with reporting of Size and io.Closer
type ReaderAt interface {
io.ReaderAt
@@ -42,14 +62,31 @@ type Provider interface {
// Ingester writes content
type Ingester interface {
- // Some implementations require WithRef to be included in opts.
+ // Writer initiates a writing operation (aka ingestion). A single ingestion
+ // is uniquely identified by its ref, provided using a WithRef option.
+ // Writer can be called multiple times with the same ref to access the same
+ // ingestion.
+ // Once all the data is written, use Writer.Commit to complete the ingestion.
Writer(ctx context.Context, opts ...WriterOpt) (Writer, error)
}
+// IngestManager provides methods for managing ingestions. An ingestion is a
+// not-yet-complete writing operation initiated using Ingester and identified
+// by a ref string.
+type IngestManager interface {
+ // Status returns the status of the provided ref.
+ Status(ctx context.Context, ref string) (Status, error)
+
+ // ListStatuses returns the status of any active ingestions whose ref match
+ // the provided regular expression. If empty, all active ingestions will be
+ // returned.
+ ListStatuses(ctx context.Context, filters ...string) ([]Status, error)
+
+ // Abort completely cancels the ingest operation targeted by ref.
+ Abort(ctx context.Context, ref string) error
+}
+
// Info holds content specific information
-//
-// TODO(stevvooe): Consider a very different name for this struct. Info is way
-// to general. It also reads very weird in certain context, like pluralization.
type Info struct {
Digest digest.Digest
Size int64
@@ -58,7 +95,7 @@ type Info struct {
Labels map[string]string
}
-// Status of a content operation
+// Status of a content operation (i.e. an ingestion)
type Status struct {
Ref string
Offset int64
@@ -71,12 +108,17 @@ type Status struct {
// WalkFunc defines the callback for a blob walk.
type WalkFunc func(Info) error
-// Manager provides methods for inspecting, listing and removing content.
-type Manager interface {
+// InfoProvider provides info for content inspection.
+type InfoProvider interface {
// Info will return metadata about content available in the content store.
//
// If the content is not present, ErrNotFound will be returned.
Info(ctx context.Context, dgst digest.Digest) (Info, error)
+}
+
+// Manager provides methods for inspecting, listing and removing content.
+type Manager interface {
+ InfoProvider
// Update updates mutable information related to content.
// If one or more fieldpaths are provided, only those
@@ -94,21 +136,7 @@ type Manager interface {
Delete(ctx context.Context, dgst digest.Digest) error
}
-// IngestManager provides methods for managing ingests.
-type IngestManager interface {
- // Status returns the status of the provided ref.
- Status(ctx context.Context, ref string) (Status, error)
-
- // ListStatuses returns the status of any active ingestions whose ref match the
- // provided regular expression. If empty, all active ingestions will be
- // returned.
- ListStatuses(ctx context.Context, filters ...string) ([]Status, error)
-
- // Abort completely cancels the ingest operation targeted by ref.
- Abort(ctx context.Context, ref string) error
-}
-
-// Writer handles the write of content into a content store
+// Writer handles writing of content into a content store
type Writer interface {
// Close closes the writer, if the writer has not been
// committed this allows resuming or aborting.
@@ -131,15 +159,6 @@ type Writer interface {
Truncate(size int64) error
}
-// Store combines the methods of content-oriented interfaces into a set that
-// are commonly provided by complete implementations.
-type Store interface {
- Manager
- Provider
- IngestManager
- Ingester
-}
-
// Opt is used to alter the mutable properties of content
type Opt func(*Info) error
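
The reordered doc comments above describe the ingestion lifecycle: start a ref-identified write via `Ingester.Writer`, commit it, then read it back through `Provider`. A minimal sketch using the local store (vendored further below) as the `content.Store` implementation; the store root and ref name are hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/content/local"
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	ctx := context.Background()

	// The local store implements the full content.Store interface.
	cs, err := local.NewStore("/tmp/content-store") // hypothetical root
	if err != nil {
		log.Fatal(err)
	}

	blob := []byte("hello containerd")
	dgst := digest.FromBytes(blob)

	// Ingester: start an ingestion identified by a ref, write, then commit.
	cw, err := cs.Writer(ctx, content.WithRef("example-ref"))
	if err != nil {
		log.Fatal(err)
	}
	defer cw.Close()

	if _, err := cw.Write(blob); err != nil {
		log.Fatal(err)
	}
	if err := cw.Commit(ctx, int64(len(blob)), dgst); err != nil {
		log.Fatal(err)
	}

	// Provider: once committed, the content is addressable by its digest.
	data, err := content.ReadBlob(ctx, cs, ocispec.Descriptor{Digest: dgst, Size: int64(len(blob))})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```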
diff --git a/tools/vendor/github.com/containerd/containerd/content/helpers.go b/tools/vendor/github.com/containerd/containerd/content/helpers.go
index 3ec1ffce0..5404109a6 100644
--- a/tools/vendor/github.com/containerd/containerd/content/helpers.go
+++ b/tools/vendor/github.com/containerd/containerd/content/helpers.go
@@ -21,15 +21,21 @@ import (
"errors"
"fmt"
"io"
- "math/rand"
"sync"
"time"
"github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/log"
+ "github.com/containerd/containerd/pkg/randutil"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
+// maxResets is the no.of times the Copy() method can tolerate a reset of the body
+const maxResets = 5
+
+var ErrReset = errors.New("writer has been reset")
+
var bufPool = sync.Pool{
New: func() interface{} {
buffer := make([]byte, 1<<20)
@@ -37,16 +43,26 @@ var bufPool = sync.Pool{
},
}
+type reader interface {
+ Reader() io.Reader
+}
+
// NewReader returns a io.Reader from a ReaderAt
func NewReader(ra ReaderAt) io.Reader {
- rd := io.NewSectionReader(ra, 0, ra.Size())
- return rd
+ if rd, ok := ra.(reader); ok {
+ return rd.Reader()
+ }
+ return io.NewSectionReader(ra, 0, ra.Size())
}
// ReadBlob retrieves the entire contents of the blob from the provider.
//
// Avoid using this for large blobs, such as layers.
func ReadBlob(ctx context.Context, provider Provider, desc ocispec.Descriptor) ([]byte, error) {
+ if int64(len(desc.Data)) == desc.Size && digest.FromBytes(desc.Data) == desc.Digest {
+ return desc.Data, nil
+ }
+
ra, err := provider.ReaderAt(ctx, desc)
if err != nil {
return nil, err
@@ -80,7 +96,7 @@ func WriteBlob(ctx context.Context, cs Ingester, ref string, r io.Reader, desc o
return fmt.Errorf("failed to open writer: %w", err)
}
- return nil // all ready present
+ return nil // already present
}
defer cw.Close()
@@ -107,7 +123,7 @@ func OpenWriter(ctx context.Context, cs Ingester, opts ...WriterOpt) (Writer, er
// error or abort. Requires asserting for an ingest manager
select {
- case <-time.After(time.Millisecond * time.Duration(rand.Intn(retry))):
+ case <-time.After(time.Millisecond * time.Duration(randutil.Intn(retry))):
if retry < 2048 {
retry = retry << 1
}
@@ -131,35 +147,63 @@ func OpenWriter(ctx context.Context, cs Ingester, opts ...WriterOpt) (Writer, er
// the size or digest is unknown, these values may be empty.
//
// Copy is buffered, so no need to wrap reader in buffered io.
-func Copy(ctx context.Context, cw Writer, r io.Reader, size int64, expected digest.Digest, opts ...Opt) error {
+func Copy(ctx context.Context, cw Writer, or io.Reader, size int64, expected digest.Digest, opts ...Opt) error {
ws, err := cw.Status()
if err != nil {
return fmt.Errorf("failed to get status: %w", err)
}
-
+ r := or
if ws.Offset > 0 {
- r, err = seekReader(r, ws.Offset, size)
+ r, err = seekReader(or, ws.Offset, size)
if err != nil {
return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
}
}
- copied, err := copyWithBuffer(cw, r)
- if err != nil {
- return fmt.Errorf("failed to copy: %w", err)
- }
- if size != 0 && copied < size-ws.Offset {
- // Short writes would return its own error, this indicates a read failure
- return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
- }
-
- if err := cw.Commit(ctx, size, expected, opts...); err != nil {
- if !errdefs.IsAlreadyExists(err) {
- return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
+ for i := 0; i < maxResets; i++ {
+ if i >= 1 {
+ log.G(ctx).WithField("digest", expected).Debugf("retrying copy due to reset")
+ }
+ copied, err := copyWithBuffer(cw, r)
+ if errors.Is(err, ErrReset) {
+ ws, err := cw.Status()
+ if err != nil {
+ return fmt.Errorf("failed to get status: %w", err)
+ }
+ r, err = seekReader(or, ws.Offset, size)
+ if err != nil {
+ return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
+ }
+ continue
+ }
+ if err != nil {
+ return fmt.Errorf("failed to copy: %w", err)
+ }
+ if size != 0 && copied < size-ws.Offset {
+ // Short writes would return its own error, this indicates a read failure
+ return fmt.Errorf("failed to read expected number of bytes: %w", io.ErrUnexpectedEOF)
}
+ if err := cw.Commit(ctx, size, expected, opts...); err != nil {
+ if errors.Is(err, ErrReset) {
+ ws, err := cw.Status()
+ if err != nil {
+ return fmt.Errorf("failed to get status: %w", err)
+ }
+ r, err = seekReader(or, ws.Offset, size)
+ if err != nil {
+ return fmt.Errorf("unable to resume write to %v: %w", ws.Ref, err)
+ }
+ continue
+ }
+ if !errdefs.IsAlreadyExists(err) {
+ return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
+ }
+ }
+ return nil
}
- return nil
+ log.G(ctx).WithField("digest", expected).Errorf("failed to copy after %d retries", maxResets)
+ return fmt.Errorf("failed to copy after %d retries", maxResets)
}
// CopyReaderAt copies to a writer from a given reader at for the given
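
The reworked `Copy` above resumes from the writer's current offset and now retries up to `maxResets` times when the writer reports `ErrReset`. A hedged sketch of the usual helper path, `OpenWriter` plus `Copy`; the store root and ref are assumptions.

```go
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/content/local"
	"github.com/opencontainers/go-digest"
)

func main() {
	ctx := context.Background()

	cs, err := local.NewStore("/tmp/content-store") // hypothetical root
	if err != nil {
		log.Fatal(err)
	}

	blob := []byte("layer payload")
	expected := digest.FromBytes(blob)

	// OpenWriter retries with backoff while another writer holds the same ref.
	cw, err := content.OpenWriter(ctx, cs, content.WithRef("copy-example"))
	if err != nil {
		log.Fatal(err)
	}
	defer cw.Close()

	// Copy streams the data, resuming from the recorded offset and retrying
	// up to maxResets times if the writer is reset mid-transfer.
	if err := content.Copy(ctx, cw, bytes.NewReader(blob), int64(len(blob)), expected); err != nil {
		log.Fatal(err)
	}
}
```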
diff --git a/tools/vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go b/tools/vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go
new file mode 100644
index 000000000..a523f28d9
--- /dev/null
+++ b/tools/vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go
@@ -0,0 +1,76 @@
+//go:build gofuzz
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package local
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ _ "crypto/sha256"
+ "io"
+ "testing"
+
+ "github.com/opencontainers/go-digest"
+
+ "github.com/containerd/containerd/content"
+)
+
+func FuzzContentStoreWriter(data []byte) int {
+ t := &testing.T{}
+ ctx := context.Background()
+ ctx, _, cs, cleanup := contentStoreEnv(t)
+ defer cleanup()
+
+ cw, err := cs.Writer(ctx, content.WithRef("myref"))
+ if err != nil {
+ return 0
+ }
+ if err := cw.Close(); err != nil {
+ return 0
+ }
+
+ // reopen, so we can test things
+ cw, err = cs.Writer(ctx, content.WithRef("myref"))
+ if err != nil {
+ return 0
+ }
+
+ err = checkCopyFuzz(int64(len(data)), cw, bufio.NewReader(io.NopCloser(bytes.NewReader(data))))
+ if err != nil {
+ return 0
+ }
+ expected := digest.FromBytes(data)
+
+ if err = cw.Commit(ctx, int64(len(data)), expected); err != nil {
+ return 0
+ }
+ return 1
+}
+
+func checkCopyFuzz(size int64, dst io.Writer, src io.Reader) error {
+ nn, err := io.Copy(dst, src)
+ if err != nil {
+ return err
+ }
+
+ if nn != size {
+ return err
+ }
+ return nil
+}
diff --git a/tools/vendor/github.com/containerd/containerd/content/local/readerat.go b/tools/vendor/github.com/containerd/containerd/content/local/readerat.go
index a83c171bb..899e85c0b 100644
--- a/tools/vendor/github.com/containerd/containerd/content/local/readerat.go
+++ b/tools/vendor/github.com/containerd/containerd/content/local/readerat.go
@@ -18,6 +18,7 @@ package local
import (
"fmt"
+ "io"
"os"
"github.com/containerd/containerd/content"
@@ -65,3 +66,7 @@ func (ra sizeReaderAt) Size() int64 {
func (ra sizeReaderAt) Close() error {
return ra.fp.Close()
}
+
+func (ra sizeReaderAt) Reader() io.Reader {
+ return io.LimitReader(ra.fp, ra.size)
+}
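
`NewReader` now probes its `ReaderAt` for an optional `Reader()` method, which `sizeReaderAt` satisfies here to skip the `io.SectionReader` indirection. A standalone sketch of that optional-interface pattern, using a toy type rather than the vendored one:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// reader mirrors the unexported interface checked by content.NewReader.
type reader interface {
	Reader() io.Reader
}

// sectionOrNative prefers ra.Reader() when available and falls back to an
// io.SectionReader otherwise, like the vendored NewReader.
func sectionOrNative(ra interface {
	io.ReaderAt
	Size() int64
}) io.Reader {
	if rd, ok := ra.(reader); ok {
		return rd.Reader()
	}
	return io.NewSectionReader(ra, 0, ra.Size())
}

// fileLike is a toy ReaderAt that also exposes a direct Reader.
type fileLike struct {
	data string
}

func (f fileLike) ReadAt(p []byte, off int64) (int, error) {
	return strings.NewReader(f.data).ReadAt(p, off)
}

func (f fileLike) Size() int64 { return int64(len(f.data)) }

func (f fileLike) Reader() io.Reader { return strings.NewReader(f.data) }

func main() {
	r := sectionOrNative(fileLike{data: "direct reader path"})
	b, _ := io.ReadAll(r)
	fmt.Println(string(b))
}
```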
diff --git a/tools/vendor/github.com/containerd/containerd/content/local/store.go b/tools/vendor/github.com/containerd/containerd/content/local/store.go
index 457bbcd0e..baae3565b 100644
--- a/tools/vendor/github.com/containerd/containerd/content/local/store.go
+++ b/tools/vendor/github.com/containerd/containerd/content/local/store.go
@@ -20,7 +20,6 @@ import (
"context"
"fmt"
"io"
- "math/rand"
"os"
"path/filepath"
"strconv"
@@ -32,9 +31,10 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/filters"
"github.com/containerd/containerd/log"
+ "github.com/containerd/containerd/pkg/randutil"
"github.com/sirupsen/logrus"
- digest "github.com/opencontainers/go-digest"
+ "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -262,7 +262,7 @@ func (s *store) Walk(ctx context.Context, fn content.WalkFunc, fs ...string) err
return nil
}
- dgst := digest.NewDigestFromHex(alg.String(), filepath.Base(path))
+ dgst := digest.NewDigestFromEncoded(alg, filepath.Base(path))
if err := dgst.Validate(); err != nil {
// log error but don't report
log.L.WithError(err).WithField("path", path).Error("invalid digest for blob path")
@@ -473,7 +473,7 @@ func (s *store) Writer(ctx context.Context, opts ...content.WriterOpt) (content.
lockErr = nil
break
}
- time.Sleep(time.Millisecond * time.Duration(rand.Intn(1< 0
-		time.Sleep(time.Millisecond * time.Duration(rand.Intn(1<<count)))
+		time.Sleep(time.Millisecond * time.Duration(randutil.Intn(1<<count)))
+ case "topic":
+ return e.Topic, len(e.Topic) > 0
+ case "event":
+ decoded, err := typeurl.UnmarshalAny(e.Event)
+ if err != nil {
+ return "", false
+ }
+
+ adaptor, ok := decoded.(interface {
+ Field([]string) (string, bool)
+ })
+ if !ok {
+ return "", false
+ }
+ return adaptor.Field(fieldpath[1:])
+ }
+ return "", false
+}
+
+// Event is a generic interface for any type of event
+type Event interface{}
+
+// Publisher posts the event.
+type Publisher interface {
+ Publish(ctx context.Context, topic string, event Event) error
+}
+
+// Forwarder forwards an event to the underlying event bus
+type Forwarder interface {
+ Forward(ctx context.Context, envelope *Envelope) error
+}
+
+// Subscriber allows callers to subscribe to events
+type Subscriber interface {
+ Subscribe(ctx context.Context, filters ...string) (ch <-chan *Envelope, errs <-chan error)
+}
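
`Envelope.Field` is the adaptor hook that lets event envelopes participate in filter expressions. A small sketch of calling it directly; only the `Topic` field is set here, since resolving the "event" fieldpath would additionally require a typeurl-encoded payload whose decoded type implements `Field`.

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/events"
)

func main() {
	// Minimal envelope with only the topic populated.
	e := &events.Envelope{Topic: "/tasks/create"}

	if v, ok := e.Field([]string{"topic"}); ok {
		fmt.Println("topic:", v) // "/tasks/create"
	}
	if _, ok := e.Field([]string{"namespace"}); !ok {
		fmt.Println("namespace not set, so it does not match")
	}
}
```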
diff --git a/tools/vendor/github.com/containerd/containerd/filters/filter.go b/tools/vendor/github.com/containerd/containerd/filters/filter.go
index cf09d8d9e..e13f2625c 100644
--- a/tools/vendor/github.com/containerd/containerd/filters/filter.go
+++ b/tools/vendor/github.com/containerd/containerd/filters/filter.go
@@ -65,7 +65,6 @@
// ```
// name==foo,labels.bar
// ```
-//
package filters
import (
diff --git a/tools/vendor/github.com/containerd/containerd/filters/parser.go b/tools/vendor/github.com/containerd/containerd/filters/parser.go
index 49182d7b7..32767909b 100644
--- a/tools/vendor/github.com/containerd/containerd/filters/parser.go
+++ b/tools/vendor/github.com/containerd/containerd/filters/parser.go
@@ -45,7 +45,6 @@ field := quoted | [A-Za-z] [A-Za-z0-9_]+
operator := "==" | "!=" | "~="
value := quoted | [^\s,]+
quoted :=
-
*/
func Parse(s string) (Filter, error) {
// special case empty to match all
diff --git a/tools/vendor/github.com/containerd/containerd/filters/quote.go b/tools/vendor/github.com/containerd/containerd/filters/quote.go
index b76aab9b4..5c800ef84 100644
--- a/tools/vendor/github.com/containerd/containerd/filters/quote.go
+++ b/tools/vendor/github.com/containerd/containerd/filters/quote.go
@@ -31,10 +31,10 @@ var errQuoteSyntax = errors.New("quote syntax error")
// or character literal represented by the string s.
// It returns four values:
//
-// 1) value, the decoded Unicode code point or byte value;
-// 2) multibyte, a boolean indicating whether the decoded character requires a multibyte UTF-8 representation;
-// 3) tail, the remainder of the string after the character; and
-// 4) an error that will be nil if the character is syntactically valid.
+// 1. value, the decoded Unicode code point or byte value;
+// 2. multibyte, a boolean indicating whether the decoded character requires a multibyte UTF-8 representation;
+// 3. tail, the remainder of the string after the character; and
+// 4. an error that will be nil if the character is syntactically valid.
//
// The second argument, quote, specifies the type of literal being parsed
// and therefore which escaped quote character is permitted.
diff --git a/tools/vendor/github.com/containerd/containerd/images/image.go b/tools/vendor/github.com/containerd/containerd/images/image.go
index d45afe482..2d2e36a9a 100644
--- a/tools/vendor/github.com/containerd/containerd/images/image.go
+++ b/tools/vendor/github.com/containerd/containerd/images/image.go
@@ -138,7 +138,7 @@ type platformManifest struct {
// TODO(stevvooe): This violates the current platform agnostic approach to this
// package by returning a specific manifest type. We'll need to refactor this
// to return a manifest descriptor or decide that we want to bring the API in
-// this direction because this abstraction is not needed.`
+// this direction because this abstraction is not needed.
func Manifest(ctx context.Context, provider content.Provider, image ocispec.Descriptor, platform platforms.MatchComparer) (ocispec.Manifest, error) {
var (
limit = 1
@@ -311,7 +311,7 @@ func Check(ctx context.Context, provider content.Provider, image ocispec.Descrip
return false, nil, nil, nil, fmt.Errorf("failed to check image %v: %w", image.Digest, err)
}
- // TODO(stevvooe): It is possible that referenced conponents could have
+ // TODO(stevvooe): It is possible that referenced components could have
// children, but this is rare. For now, we ignore this and only verify
// that manifest components are present.
required = append([]ocispec.Descriptor{mfst.Config}, mfst.Layers...)
diff --git a/tools/vendor/github.com/containerd/containerd/images/mediatypes.go b/tools/vendor/github.com/containerd/containerd/images/mediatypes.go
index 671e160e1..067963bab 100644
--- a/tools/vendor/github.com/containerd/containerd/images/mediatypes.go
+++ b/tools/vendor/github.com/containerd/containerd/images/mediatypes.go
@@ -38,7 +38,9 @@ const (
MediaTypeDockerSchema2Config = "application/vnd.docker.container.image.v1+json"
MediaTypeDockerSchema2Manifest = "application/vnd.docker.distribution.manifest.v2+json"
MediaTypeDockerSchema2ManifestList = "application/vnd.docker.distribution.manifest.list.v2+json"
+
// Checkpoint/Restore Media Types
+
MediaTypeContainerd1Checkpoint = "application/vnd.containerd.container.criu.checkpoint.criu.tar"
MediaTypeContainerd1CheckpointPreDump = "application/vnd.containerd.container.criu.checkpoint.predump.tar"
MediaTypeContainerd1Resource = "application/vnd.containerd.container.resource.tar"
@@ -47,9 +49,12 @@ const (
MediaTypeContainerd1CheckpointOptions = "application/vnd.containerd.container.checkpoint.options.v1+proto"
MediaTypeContainerd1CheckpointRuntimeName = "application/vnd.containerd.container.checkpoint.runtime.name"
MediaTypeContainerd1CheckpointRuntimeOptions = "application/vnd.containerd.container.checkpoint.runtime.options+proto"
- // Legacy Docker schema1 manifest
+
+ // MediaTypeDockerSchema1Manifest is the legacy Docker schema1 manifest
MediaTypeDockerSchema1Manifest = "application/vnd.docker.distribution.manifest.v1+prettyjws"
- // Encypted media types
+
+ // Encrypted media types
+
MediaTypeImageLayerEncrypted = ocispec.MediaTypeImageLayer + "+encrypted"
MediaTypeImageLayerGzipEncrypted = ocispec.MediaTypeImageLayerGzip + "+encrypted"
)
@@ -93,16 +98,23 @@ func DiffCompression(ctx context.Context, mediaType string) (string, error) {
// parseMediaTypes splits the media type into the base type and
// an array of sorted extensions
-func parseMediaTypes(mt string) (string, []string) {
+func parseMediaTypes(mt string) (mediaType string, suffixes []string) {
if mt == "" {
return "", []string{}
}
+ mediaType, ext, ok := strings.Cut(mt, "+")
+ if !ok {
+ return mediaType, []string{}
+ }
- s := strings.Split(mt, "+")
- ext := s[1:]
- sort.Strings(ext)
-
- return s[0], ext
+ // Splitting the extensions following the mediatype "(+)gzip+encrypted".
+ // We expect this to be a limited list, so add an arbitrary limit (50).
+ //
+ // Note that DiffCompression is only using the last element, so perhaps we
+ // should split on the last "+" only.
+ suffixes = strings.SplitN(ext, "+", 50)
+ sort.Strings(suffixes)
+ return mediaType, suffixes
}
// IsNonDistributable returns true if the media type is non-distributable.
@@ -118,8 +130,7 @@ func IsLayerType(mt string) bool {
}
// Parse Docker media types, strip off any + suffixes first
- base, _ := parseMediaTypes(mt)
- switch base {
+ switch base, _ := parseMediaTypes(mt); base {
case MediaTypeDockerSchema2Layer, MediaTypeDockerSchema2LayerGzip,
MediaTypeDockerSchema2LayerForeign, MediaTypeDockerSchema2LayerForeignGzip:
return true
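
The rewritten `parseMediaTypes` splits the base type from its "+" suffixes using `strings.Cut` and `strings.SplitN`. Since that helper is unexported, the sketch below reproduces its logic locally and then checks a layer media type via the exported `IsLayerType`; the `splitMediaType` name is invented for illustration.

```go
package main

import (
	"fmt"
	"sort"
	"strings"

	"github.com/containerd/containerd/images"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// splitMediaType mirrors the parseMediaTypes logic above: base type first,
// then the sorted "+" suffixes (capped at an arbitrary 50).
func splitMediaType(mt string) (string, []string) {
	base, ext, ok := strings.Cut(mt, "+")
	if !ok {
		return base, []string{}
	}
	suffixes := strings.SplitN(ext, "+", 50)
	sort.Strings(suffixes)
	return base, suffixes
}

func main() {
	base, suffixes := splitMediaType(ocispec.MediaTypeImageLayerGzip + "+encrypted")
	fmt.Println(base, suffixes) // application/vnd.oci.image.layer.v1.tar [encrypted gzip]

	fmt.Println(images.IsLayerType(ocispec.MediaTypeImageLayerGzip)) // true
}
```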
diff --git a/tools/vendor/github.com/containerd/containerd/labels/labels.go b/tools/vendor/github.com/containerd/containerd/labels/labels.go
index d76ff2cf9..0f9bab5c5 100644
--- a/tools/vendor/github.com/containerd/containerd/labels/labels.go
+++ b/tools/vendor/github.com/containerd/containerd/labels/labels.go
@@ -19,3 +19,11 @@ package labels
// LabelUncompressed is added to compressed layer contents.
// The value is digest of the uncompressed content.
const LabelUncompressed = "containerd.io/uncompressed"
+
+// LabelSharedNamespace is added to a namespace to allow that namespaces
+// contents to be shared.
+const LabelSharedNamespace = "containerd.io/namespace.shareable"
+
+// LabelDistributionSource is added to content to indicate its origin.
+// e.g., "containerd.io/distribution.source.docker.io=library/redis"
+const LabelDistributionSource = "containerd.io/distribution.source"
diff --git a/tools/vendor/github.com/containerd/containerd/labels/validate.go b/tools/vendor/github.com/containerd/containerd/labels/validate.go
index 1fd527adb..f83b5dde2 100644
--- a/tools/vendor/github.com/containerd/containerd/labels/validate.go
+++ b/tools/vendor/github.com/containerd/containerd/labels/validate.go
@@ -24,15 +24,18 @@ import (
const (
maxSize = 4096
+ // maximum length of key portion of error message if len of key + len of value > maxSize
+ keyMaxLen = 64
)
// Validate a label's key and value are under 4096 bytes
func Validate(k, v string) error {
- if (len(k) + len(v)) > maxSize {
- if len(k) > 10 {
- k = k[:10]
+ total := len(k) + len(v)
+ if total > maxSize {
+ if len(k) > keyMaxLen {
+ k = k[:keyMaxLen]
}
- return fmt.Errorf("label key and value greater than maximum size (%d bytes), key: %s: %w", maxSize, k, errdefs.ErrInvalidArgument)
+ return fmt.Errorf("label key and value length (%d bytes) greater than maximum size (%d bytes), key: %s: %w", total, maxSize, k, errdefs.ErrInvalidArgument)
}
return nil
}
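
A brief sketch of the validation behaviour described above: the combined key and value must stay under 4096 bytes, and oversized keys are truncated to 64 bytes in the error message. The label values here are arbitrary examples.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/containerd/containerd/labels"
)

func main() {
	// Within the 4096-byte combined limit: accepted (prints <nil>).
	fmt.Println(labels.Validate("containerd.io/uncompressed", "sha256:abcd"))

	// Key + value exceed 4096 bytes: rejected with ErrInvalidArgument, and
	// only the first 64 bytes of the key are echoed in the error message.
	longKey := strings.Repeat("k", 100)
	longVal := strings.Repeat("v", 4097)
	fmt.Println(labels.Validate(longKey, longVal))
}
```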
diff --git a/tools/vendor/github.com/containerd/containerd/leases/id.go b/tools/vendor/github.com/containerd/containerd/leases/id.go
index 8781a1d72..8f5dc93f3 100644
--- a/tools/vendor/github.com/containerd/containerd/leases/id.go
+++ b/tools/vendor/github.com/containerd/containerd/leases/id.go
@@ -17,9 +17,9 @@
package leases
import (
+ "crypto/rand"
"encoding/base64"
"fmt"
- "math/rand"
"time"
)
diff --git a/tools/vendor/github.com/containerd/containerd/leases/lease.go b/tools/vendor/github.com/containerd/containerd/leases/lease.go
index 058d06559..fc0ca3491 100644
--- a/tools/vendor/github.com/containerd/containerd/leases/lease.go
+++ b/tools/vendor/github.com/containerd/containerd/leases/lease.go
@@ -65,10 +65,15 @@ func SynchronousDelete(ctx context.Context, o *DeleteOptions) error {
return nil
}
-// WithLabels sets labels on a lease
+// WithLabels merges labels on a lease
func WithLabels(labels map[string]string) Opt {
return func(l *Lease) error {
- l.Labels = labels
+ if l.Labels == nil {
+ l.Labels = map[string]string{}
+ }
+ for k, v := range labels {
+ l.Labels[k] = v
+ }
return nil
}
}
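
With the change above, repeated `WithLabels` options now merge into the lease's label map instead of replacing it. A minimal sketch applying two options to a `leases.Lease`; the label keys and values are arbitrary.

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/leases"
)

func main() {
	var l leases.Lease

	// Successive WithLabels calls merge into the existing map, so both
	// labels survive.
	opts := []leases.Opt{
		leases.WithLabels(map[string]string{"example.com/owner": "demo"}),
		leases.WithLabels(map[string]string{"example.com/purpose": "testing"}),
	}
	for _, o := range opts {
		if err := o(&l); err != nil {
			fmt.Println("apply failed:", err)
			return
		}
	}
	fmt.Println(l.Labels) // map[example.com/owner:demo example.com/purpose:testing]
}
```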
diff --git a/tools/vendor/github.com/containerd/containerd/log/context.go b/tools/vendor/github.com/containerd/containerd/log/context.go
index 0db9562b8..20153066f 100644
--- a/tools/vendor/github.com/containerd/containerd/log/context.go
+++ b/tools/vendor/github.com/containerd/containerd/log/context.go
@@ -14,56 +14,169 @@
limitations under the License.
*/
+// Package log provides types and functions related to logging, passing
+// loggers through a context, and attaching context to the logger.
+//
+// # Transitional types
+//
+// This package contains various types that are aliases for types in [logrus].
+// These aliases are intended for transitioning away from hard-coding logrus
+// as logging implementation. Consumers of this package are encouraged to use
+// the type-aliases from this package instead of directly using their logrus
+// equivalent.
+//
+// The intent is to replace these aliases with locally defined types and
+// interfaces once all consumers are no longer directly importing logrus
+// types.
+//
+// IMPORTANT: due to the transitional purpose of this package, it is not
+// guaranteed for the full logrus API to be provided in the future. As
+// outlined, these aliases are provided as a step to transition away from
+// a specific implementation which, as a result, exposes the full logrus API.
+// While no decisions have been made on the ultimate design and interface
+// provided by this package, we do not expect carrying "less common" features.
package log
import (
"context"
+ "fmt"
"github.com/sirupsen/logrus"
)
-var (
- // G is an alias for GetLogger.
- //
- // We may want to define this locally to a package to get package tagged log
- // messages.
- G = GetLogger
+// G is a shorthand for [GetLogger].
+//
+// We may want to define this locally to a package to get package tagged log
+// messages.
+var G = GetLogger
+
+// L is an alias for the standard logger.
+var L = &Entry{
+ Logger: logrus.StandardLogger(),
+ // Default is three fields plus a little extra room.
+ Data: make(Fields, 6),
+}
- // L is an alias for the standard logger.
- L = logrus.NewEntry(logrus.StandardLogger())
-)
+type loggerKey struct{}
-type (
- loggerKey struct{}
-)
+// Fields type to pass to "WithFields".
+type Fields = map[string]any
+
+// Entry is a logging entry. It contains all the fields passed with
+// [Entry.WithFields]. It's finally logged when Trace, Debug, Info, Warn,
+// Error, Fatal or Panic is called on it. These objects can be reused and
+// passed around as much as you wish to avoid field duplication.
+//
+// Entry is a transitional type, and currently an alias for [logrus.Entry].
+type Entry = logrus.Entry
+// RFC3339NanoFixed is [time.RFC3339Nano] with nanoseconds padded using
+// zeros to ensure the formatted time is always the same number of
+// characters.
+const RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
+
+// Level is a logging level.
+type Level = logrus.Level
+
+// Supported log levels.
const (
- // RFC3339NanoFixed is time.RFC3339Nano with nanoseconds padded using zeros to
- // ensure the formatted time is always the same number of characters.
- RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
+ // TraceLevel level. Designates finer-grained informational events
+ // than [DebugLevel].
+ TraceLevel Level = logrus.TraceLevel
+
+ // DebugLevel level. Usually only enabled when debugging. Very verbose
+ // logging.
+ DebugLevel Level = logrus.DebugLevel
+
+ // InfoLevel level. General operational entries about what's going on
+ // inside the application.
+ InfoLevel Level = logrus.InfoLevel
+
+ // WarnLevel level. Non-critical entries that deserve eyes.
+ WarnLevel Level = logrus.WarnLevel
+
+ // ErrorLevel level. Logs errors that should definitely be noted.
+ // Commonly used for hooks to send errors to an error tracking service.
+ ErrorLevel Level = logrus.ErrorLevel
+
+ // FatalLevel level. Logs and then calls "logger.Exit(1)". It exits
+ // even if the logging level is set to Panic.
+ FatalLevel Level = logrus.FatalLevel
+
+ // PanicLevel level. This is the highest level of severity. Logs and
+ // then calls panic with the message passed to Debug, Info, ...
+ PanicLevel Level = logrus.PanicLevel
+)
- // TextFormat represents the text logging format
- TextFormat = "text"
+// SetLevel sets log level globally. It returns an error if the given
+// level is not supported.
+//
+// level can be one of:
+//
+// - "trace" ([TraceLevel])
+// - "debug" ([DebugLevel])
+// - "info" ([InfoLevel])
+// - "warn" ([WarnLevel])
+// - "error" ([ErrorLevel])
+// - "fatal" ([FatalLevel])
+// - "panic" ([PanicLevel])
+func SetLevel(level string) error {
+ lvl, err := logrus.ParseLevel(level)
+ if err != nil {
+ return err
+ }
+
+ L.Logger.SetLevel(lvl)
+ return nil
+}
- // JSONFormat represents the JSON logging format
- JSONFormat = "json"
+// GetLevel returns the current log level.
+func GetLevel() Level {
+ return L.Logger.GetLevel()
+}
+
+// OutputFormat specifies a log output format.
+type OutputFormat string
+
+// Supported log output formats.
+const (
+ // TextFormat represents the text logging format.
+ TextFormat OutputFormat = "text"
+
+ // JSONFormat represents the JSON logging format.
+ JSONFormat OutputFormat = "json"
)
+// SetFormat sets the log output format ([TextFormat] or [JSONFormat]).
+func SetFormat(format OutputFormat) error {
+ switch format {
+ case TextFormat:
+ L.Logger.SetFormatter(&logrus.TextFormatter{
+ TimestampFormat: RFC3339NanoFixed,
+ FullTimestamp: true,
+ })
+ return nil
+ case JSONFormat:
+ L.Logger.SetFormatter(&logrus.JSONFormatter{
+ TimestampFormat: RFC3339NanoFixed,
+ })
+ return nil
+ default:
+ return fmt.Errorf("unknown log format: %s", format)
+ }
+}
+
// WithLogger returns a new context with the provided logger. Use in
// combination with logger.WithField(s) for great effect.
-func WithLogger(ctx context.Context, logger *logrus.Entry) context.Context {
- e := logger.WithContext(ctx)
- return context.WithValue(ctx, loggerKey{}, e)
+func WithLogger(ctx context.Context, logger *Entry) context.Context {
+ return context.WithValue(ctx, loggerKey{}, logger.WithContext(ctx))
}
// GetLogger retrieves the current logger from the context. If no logger is
// available, the default logger is returned.
-func GetLogger(ctx context.Context) *logrus.Entry {
- logger := ctx.Value(loggerKey{})
-
- if logger == nil {
- return L.WithContext(ctx)
+func GetLogger(ctx context.Context) *Entry {
+ if logger := ctx.Value(loggerKey{}); logger != nil {
+ return logger.(*Entry)
}
-
- return logger.(*logrus.Entry)
+ return L.WithContext(ctx)
}
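
The expanded log package now exposes level and format configuration alongside the existing context helpers. A short sketch of the typical flow, using only the helpers shown above:

```go
package main

import (
	"context"

	"github.com/containerd/containerd/log"
)

func main() {
	ctx := context.Background()

	// Configure the standard logger via the package-level helpers.
	if err := log.SetFormat(log.TextFormat); err != nil {
		panic(err)
	}
	if err := log.SetLevel("debug"); err != nil {
		panic(err)
	}

	// Attach a field-scoped logger to the context and retrieve it downstream.
	ctx = log.WithLogger(ctx, log.L.WithField("module", "example"))
	log.G(ctx).WithField("step", 1).Debug("doing work")
	log.G(ctx).Info("done")
}
```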
diff --git a/tools/vendor/github.com/containerd/containerd/metadata/adaptors.go b/tools/vendor/github.com/containerd/containerd/metadata/adaptors.go
index c5d576f84..dbff7bacd 100644
--- a/tools/vendor/github.com/containerd/containerd/metadata/adaptors.go
+++ b/tools/vendor/github.com/containerd/containerd/metadata/adaptors.go
@@ -24,6 +24,7 @@ import (
"github.com/containerd/containerd/filters"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/leases"
+ "github.com/containerd/containerd/sandbox"
"github.com/containerd/containerd/snapshots"
)
@@ -149,6 +150,23 @@ func adaptSnapshot(info snapshots.Info) filters.Adaptor {
})
}
+func adaptSandbox(instance *sandbox.Sandbox) filters.Adaptor {
+ return filters.AdapterFunc(func(fieldpath []string) (string, bool) {
+ if len(fieldpath) == 0 {
+ return "", false
+ }
+
+ switch fieldpath[0] {
+ case "id":
+ return instance.ID, true
+ case "labels":
+ return checkMap(fieldpath[1:], instance.Labels)
+ default:
+ return "", false
+ }
+ })
+}
+
func checkMap(fieldpath []string, m map[string]string) (string, bool) {
if len(m) == 0 {
return "", false
diff --git a/tools/vendor/github.com/containerd/containerd/metadata/boltutil/helpers.go b/tools/vendor/github.com/containerd/containerd/metadata/boltutil/helpers.go
index 4722a5226..8f8df3361 100644
--- a/tools/vendor/github.com/containerd/containerd/metadata/boltutil/helpers.go
+++ b/tools/vendor/github.com/containerd/containerd/metadata/boltutil/helpers.go
@@ -20,8 +20,10 @@ import (
"fmt"
"time"
- "github.com/gogo/protobuf/proto"
- "github.com/gogo/protobuf/types"
+ "github.com/containerd/containerd/protobuf"
+ "github.com/containerd/containerd/protobuf/proto"
+ "github.com/containerd/containerd/protobuf/types"
+ "github.com/containerd/typeurl/v2"
bolt "go.etcd.io/bbolt"
)
@@ -151,7 +153,7 @@ func WriteTimestamps(bkt *bolt.Bucket, created, updated time.Time) error {
// WriteExtensions will write a KV map to the given bucket,
// where `K` is a string key and `V` is a protobuf's Any type that represents a generic extension.
-func WriteExtensions(bkt *bolt.Bucket, extensions map[string]types.Any) error {
+func WriteExtensions(bkt *bolt.Bucket, extensions map[string]typeurl.Any) error {
if len(extensions) == 0 {
return nil
}
@@ -162,7 +164,8 @@ func WriteExtensions(bkt *bolt.Bucket, extensions map[string]types.Any) error {
}
for name, ext := range extensions {
- p, err := proto.Marshal(&ext)
+ ext := protobuf.FromAny(ext)
+ p, err := proto.Marshal(ext)
if err != nil {
return err
}
@@ -176,9 +179,9 @@ func WriteExtensions(bkt *bolt.Bucket, extensions map[string]types.Any) error {
}
// ReadExtensions will read back a map of extensions from the given bucket, previously written by WriteExtensions
-func ReadExtensions(bkt *bolt.Bucket) (map[string]types.Any, error) {
+func ReadExtensions(bkt *bolt.Bucket) (map[string]typeurl.Any, error) {
var (
- extensions = make(map[string]types.Any)
+ extensions = make(map[string]typeurl.Any)
ebkt = bkt.Bucket(bucketKeyExtensions)
)
@@ -192,7 +195,7 @@ func ReadExtensions(bkt *bolt.Bucket) (map[string]types.Any, error) {
return err
}
- extensions[string(k)] = t
+ extensions[string(k)] = &t
return nil
}); err != nil {
return nil, err
@@ -202,18 +205,19 @@ func ReadExtensions(bkt *bolt.Bucket) (map[string]types.Any, error) {
}
// WriteAny write a protobuf's Any type to the bucket
-func WriteAny(bkt *bolt.Bucket, name []byte, any *types.Any) error {
- if any == nil {
+func WriteAny(bkt *bolt.Bucket, name []byte, any typeurl.Any) error {
+ pbany := protobuf.FromAny(any)
+ if pbany == nil {
return nil
}
- data, err := proto.Marshal(any)
+ data, err := proto.Marshal(pbany)
if err != nil {
- return err
+ return fmt.Errorf("failed to marshal: %w", err)
}
if err := bkt.Put(name, data); err != nil {
- return err
+ return fmt.Errorf("put failed: %w", err)
}
return nil
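
`WriteAny` now accepts a `typeurl.Any` and converts it before marshalling. A hedged sketch of storing a client-defined extension this way; the `Extension` type, registration URL, database path, and bucket name are all assumptions made for the example.

```go
package main

import (
	"log"

	"github.com/containerd/containerd/metadata/boltutil"
	"github.com/containerd/typeurl/v2"
	bolt "go.etcd.io/bbolt"
)

// Extension is a hypothetical client-defined extension payload.
type Extension struct {
	Name string `json:"name"`
}

func init() {
	// Non-proto types must be registered with typeurl before they can be
	// marshalled into an Any.
	typeurl.Register(&Extension{}, "example.com/Extension")
}

func main() {
	db, err := bolt.Open("/tmp/meta.db", 0o600, nil) // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	any, err := typeurl.MarshalAny(&Extension{Name: "demo"})
	if err != nil {
		log.Fatal(err)
	}

	err = db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists([]byte("example"))
		if err != nil {
			return err
		}
		// WriteAny stores the typeurl.Any as marshalled protobuf bytes.
		return boltutil.WriteAny(bkt, []byte("ext"), any)
	})
	if err != nil {
		log.Fatal(err)
	}
}
```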
diff --git a/tools/vendor/github.com/containerd/containerd/metadata/buckets.go b/tools/vendor/github.com/containerd/containerd/metadata/buckets.go
index d23be84fe..8dcf10f47 100644
--- a/tools/vendor/github.com/containerd/containerd/metadata/buckets.go
+++ b/tools/vendor/github.com/containerd/containerd/metadata/buckets.go
@@ -26,7 +26,7 @@
//
// Generically, we try to do the following:
//
-// //