Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST
What happened:
At the moment, the cluster controller checks connectivity between the manager and the target cluster at a regular interval (configurable from the config map) and updates the cluster status accordingly.
This gives us the opportunity to add more monitoring on top of that, driven by configuration: for example, an administrator may want to monitor a specific set of pods in a known namespace of a target cluster.
Maybe something like:
Monitor:
  - name: kube-system-ns-monitor
    type: Pod
    min: 3
    maxAllowedFailureIntervals: 3
    result: warning/error
  - name: some-system-deployment-monitor
    type: Deployment
    name: manager-server
    maxAllowedFailureIntervals: 3
    result: warning/error
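For illustration, the configuration above could deserialize into Go types roughly like the following; all field and type names here (including TargetName for the workload referenced by the Deployment entry, and an explicit namespace field) are assumptions made for this sketch, not a settled API.

// Hypothetical Go types for the proposed monitor configuration.
// All names are placeholders mirroring the sketch above.
package config

// MonitorResult is the cluster condition a failing monitor maps to.
type MonitorResult string

const (
	ResultWarning MonitorResult = "warning"
	ResultError   MonitorResult = "error"
)

// Monitor describes one health check evaluated on every controller interval.
type Monitor struct {
	Name string `yaml:"name"`
	// Type selects what is monitored, e.g. "Pod" or "Deployment".
	Type string `yaml:"type"`
	// Namespace in the target cluster to inspect (assumed field).
	Namespace string `yaml:"namespace,omitempty"`
	// Min is the minimum number of ready pods (used when Type is "Pod").
	Min int `yaml:"min,omitempty"`
	// TargetName is the workload to watch (used when Type is "Deployment");
	// the sketch above reuses "name" for this.
	TargetName string `yaml:"targetName,omitempty"`
	// MaxAllowedFailureIntervals is how many consecutive failed checks
	// are tolerated before Result is applied to the cluster status.
	MaxAllowedFailureIntervals int `yaml:"maxAllowedFailureIntervals"`
	// Result is the cluster state to report once the budget is exhausted.
	Result MonitorResult `yaml:"result"`
}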
What you expected to happen:
Based on the monitoring results, the cluster state should transition between Ready, Warning, and Error.
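A minimal sketch of how the existing interval check might fold these monitors into the cluster state. Counting consecutive failed intervals against maxAllowedFailureIntervals and taking the most severe result across monitors is an assumption about the intended semantics, not settled behaviour.

// Hypothetical evaluation of configured monitors on each controller tick.
// The ClusterState ordering (Ready < Warning < Error) is an assumption.
package status

// ClusterState is the overall condition reported for a target cluster.
type ClusterState int

const (
	Ready ClusterState = iota
	Warning
	Error
)

// monitorState tracks consecutive failed intervals for one configured monitor.
type monitorState struct {
	maxAllowedFailureIntervals int
	result                     ClusterState // Warning or Error, from the config
	consecutiveFailures        int
}

// observe records one check outcome and returns the state this monitor
// currently contributes to the cluster.
func (m *monitorState) observe(healthy bool) ClusterState {
	if healthy {
		m.consecutiveFailures = 0
		return Ready
	}
	m.consecutiveFailures++
	if m.consecutiveFailures > m.maxAllowedFailureIntervals {
		return m.result
	}
	return Ready
}

// aggregate returns the most severe state any monitor reported, so a single
// monitor past its failure budget degrades the whole cluster status.
func aggregate(states []ClusterState) ClusterState {
	worst := Ready
	for _, s := range states {
		if s > worst {
			worst = s
		}
	}
	return worst
}

With this shape, a monitor only degrades the cluster state after its failure budget is exhausted, and a single healthy interval resets its failure count.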
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Still need to think about it and see whether it's feasible and worth it.
Environment:
manager version:
Kubernetes version:
$ kubectl version -o yaml
Other debugging information (if applicable):
- controller logs:
$ kubectl logs