diff --git a/.github/workflows/github-actions-stale.yml b/.github/workflows/github-actions-stale.yml
new file mode 100644
index 000000000..20f8ed183
--- /dev/null
+++ b/.github/workflows/github-actions-stale.yml
@@ -0,0 +1,25 @@
+name: Mark stale issues and pull requests
+on:
+  schedule:
+    - cron: '0 23 * * *' # once a day at 23:00 UTC
+jobs:
+  stale:
+    permissions:
+      issues: write # for commenting on an issue and editing labels
+      pull-requests: write # for commenting on a PR and editing labels
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/stale@v9
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          # timing
+          days-before-stale: 60 # 60 days of inactivity
+          days-before-close: 30 # 30 more days of inactivity
+          # labels to watch for, add, and remove
+          only-labels: 'pending info' # only mark issues/PRs as stale if they have this label
+          labels-to-remove-when-unstale: 'pending info' # remove the label when no longer stale -- add it back manually if the information is still insufficient
+          # automated messages to issue/PR authors
+          stale-issue-message: 'This issue has been marked as stale because it has been open for 60 days with no activity. This issue will be automatically closed in 30 days if no further activity occurs.'
+          stale-pr-message: 'This pull request has been marked as stale because it has been open for 60 days with no activity. This pull request will be automatically closed in 30 days if no further activity occurs.'
+          close-issue-message: 'This issue was closed because it has been inactive for 30 days since being marked as stale.'
+          close-pr-message: 'This pull request was closed because it has been inactive for 30 days since being marked as stale.'
\ No newline at end of file
diff --git a/charts/addons/Chart.yaml b/charts/addons/Chart.yaml
index 38764c2e0..2d1a299f9 100644
--- a/charts/addons/Chart.yaml
+++ b/charts/addons/Chart.yaml
@@ -3,4 +3,4 @@ apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: addons
-version: "3.20.0"
+version: "3.23.0"
diff --git a/charts/backingservices/Chart.yaml b/charts/backingservices/Chart.yaml
index d4d326e77..4d08f63a6 100644
--- a/charts/backingservices/Chart.yaml
+++ b/charts/backingservices/Chart.yaml
@@ -17,4 +17,4 @@ description: Helm Chart to provision the latest Search and Reporting Service (SR
# The chart version: Pega provides this as a useful way to track changes you make to this chart.
# As a best practice, you should increment the version number each time you make changes to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: "3.20.0"
+version: "3.23.0"
diff --git a/charts/backingservices/charts/srs/README.md b/charts/backingservices/charts/srs/README.md
index 0aa2bee2c..6562cf24e 100644
--- a/charts/backingservices/charts/srs/README.md
+++ b/charts/backingservices/charts/srs/README.md
@@ -37,7 +37,7 @@ The service deployment provisions runtime service pods along with a dependency o
 [HTML compatibility table; table markup was not preserved in this copy of the diff. Surviving cell values: ">= 8.6", "< 1.25", "Not enabled", "7.10.2, 7.16.3 & 7.17.9". The SRS image version in this row changes as shown below.]
-                1.29.1
+                1.31.2
@@ -66,7 +66,7 @@ The service deployment provisions runtime service pods along with a dependency o
### If your deployment uses the internally-provisioned Elasticsearch: ###
To migrate from Elasticsearch version 7.10.2 or 7.16.3 to version 7.17.9 or 8.10.3, perform the following steps:
-1. Update the SRS Docker image version to use v1.29.1. This version has backward compatibility with Elasticsearch versions 7.10.x and 7.16.x, so your SRS will continue to work even before you update your Elasticsearch service.
+1. Update the SRS Docker image version to use v1.31.2. This version has backward compatibility with Elasticsearch versions 7.10.x and 7.16.x, so your SRS will continue to work even before you update your Elasticsearch service.
2. To update the Elasticsearch version to 7.17.9, perform the following actions:
    * Update the Elasticsearch `dependencies.version` parameter in the [requirements.yaml](../../requirements.yaml) to 7.17.3 (see the example below).
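A minimal sketch of the `requirements.yaml` entry that step 2 modifies, assuming the standard Elastic Helm repository is used; only `dependencies.version` changes, and the surrounding fields should match whatever is already in your copy of the file:

```yaml
dependencies:
  - name: elasticsearch
    # value from step 2 above
    version: "7.17.3"
    # assumed repository; keep whatever repository your requirements.yaml already points to
    repository: https://helm.elastic.co
```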
@@ -81,7 +81,7 @@ To migrate to Elasticsearch version 7.17.9 or 8.10.3 from the Elasticsearch vers
### If your deployment connects to an externally-managed Elasticsearch service: ###
To migrate from Elasticsearch version 7.10.2 or 7.16.3 to version 7.17.9 or 8.10.3, perform the following steps:
-1. Update the SRS Docker image version to use v1.29.1. This version has backward compatibility with Elasticsearch versions 7.10.x and 7.16.x, so your SRS will continue to work even before you update your Elasticsearch service.
+1. Update the SRS Docker image version to use v1.31.2. This version has backward compatibility with Elasticsearch versions 7.10.x and 7.16.x, so your SRS will continue to work even before you update your Elasticsearch service.
2. To use Elasticsearch version 7.17.9, upgrade your external Elasticsearch cluster to 7.17.9 according to your organization’s best practices. For more information, see the official Elasticsearch version 7.17 documentation.
3. To use Elasticsearch version 8.10.3, upgrade your external Elasticsearch cluster to 8.10.3 according to your organization’s best practices. For more information, see the official Elasticsearch version 8.10 documentation.
4. Restart the SRS pods.
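Step 1 in both migration paths above is a change to the SRS Docker image tag in your backingservices values. A minimal sketch, assuming the image is set through an `srs.srsRuntime.srsImage` value (the parameter path and image name below are assumptions; use the image reference already in your values file and change only the tag):

```yaml
srs:
  srsRuntime:
    # assumed parameter path and image name; only the :1.31.2 tag is the point here
    srsImage: "YOUR_SRS_IMAGE_REGISTRY/platform-services/search-n-reporting-service:1.31.2"
```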
diff --git a/charts/pega/Chart.yaml b/charts/pega/Chart.yaml
index aba5f427b..aa3a93961 100644
--- a/charts/pega/Chart.yaml
+++ b/charts/pega/Chart.yaml
@@ -1,7 +1,7 @@
---
apiVersion: v1
name: pega
-version: "3.20.0"
+version: "3.23.0"
description: Pega installation on kubernetes
keywords:
- pega
diff --git a/charts/pega/README.md b/charts/pega/README.md
index 2997e298e..ed6b6a684 100644
--- a/charts/pega/README.md
+++ b/charts/pega/README.md
@@ -467,17 +467,31 @@ ingress:
You can optionally configure the resource allocation and limits for a tier using the following parameters. The default value is used if you do not specify an alternative value. See [Managing Kubernetes Resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for more information about how Kubernetes manages resources.
+Example:
+```yaml
+resources:
+  requests:
+    memory: "12Gi"
+    cpu: 3
+    ephemeral-storage:
+  limits:
+    memory: "12Gi"
+    cpu: 4
+    ephemeral-storage:
+```
+
+
Parameter | Description | Default value
--- | --- | ---
`replicas` | Specify the number of Pods to deploy in the tier. | `1`
-`cpuRequest` | Initial CPU request for pods in the current tier. | `3`
-`cpuLimit` | CPU limit for pods in the current tier. | `4`
-`memRequest` | Initial memory request for pods in the current tier. | `12Gi`
-`memLimit` | Memory limit for pods in the current tier. | `12Gi`
+`cpuRequest` | Deprecated; use `resources.requests.cpu` instead. Initial CPU request for pods in the current tier. | `3`
+`cpuLimit` | Deprecated; use `resources.limits.cpu` instead. CPU limit for pods in the current tier. | `4`
+`memRequest` | Deprecated; use `resources.requests.memory` instead. Initial memory request for pods in the current tier. | `12Gi`
+`memLimit` | Deprecated; use `resources.limits.memory` instead. Memory limit for pods in the current tier. | `12Gi`
`initialHeap` | Specify the initial heap size of the JVM. | `8192m`
`maxHeap` | Specify the maximum heap size of the JVM. | `8192m`
-`ephemeralStorageRequest`| Ephemeral storage request for the tomcat container. | -
-`ephemeralStorageLimit` | Ephemeral storage limit for the tomcat container. | -
+`ephemeralStorageRequest`| Deprecated; use `resources.requests.ephemeral-storage` instead. Ephemeral storage request for the tomcat container. | -
+`ephemeralStorageLimit` | Deprecated; use `resources.limits.ephemeral-storage` instead. Ephemeral storage limit for the tomcat container. | -
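Read together with the example above, the deprecated per-value parameters fold into the single `resources` block. A hypothetical tier entry migrated to the new form (tier name and storage sizes are illustrative):

```yaml
tier:
- name: "web"
  resources:
    # replaces cpuRequest, memRequest, and ephemeralStorageRequest
    requests:
      memory: "12Gi"
      cpu: 3
      ephemeral-storage: "10Gi"
    # replaces cpuLimit, memLimit, and ephemeralStorageLimit
    limits:
      memory: "12Gi"
      cpu: 4
      ephemeral-storage: "10Gi"
```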
### JVM Arguments
You can optionally pass in JVM arguments to Tomcat. Depending on the parameter/attribute used, the arguments will be placed into the `JAVA_OPTS` or `CATALINA_OPTS` environment variables.
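As a sketch only: assuming the tier exposes `javaOpts` and `catalinaOpts` attributes for this purpose (the attribute names and flags below are illustrative assumptions, not confirmed parameter names), the arguments could be supplied per tier like so:

```yaml
tier:
- name: "web"
  # assumed attribute; its contents would end up in JAVA_OPTS
  javaOpts: "-XX:+HeapDumpOnOutOfMemoryError"
  # assumed attribute; its contents would end up in CATALINA_OPTS
  catalinaOpts: "-Dexample.flag=true"
```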
@@ -506,6 +520,25 @@ tier:
disktype: ssd
```
+### Tolerations
+
+Pega supports configuring tolerations for workloads. Taints are applied to nodes and tolerations are applied to pods. For more information about taints and tolerations, see the official Kubernetes [documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
+
+Example:
+
+```yaml
+tier:
+- name: "my-tier"
+  nodeType: "WebUser"
+
+  tolerations:
+  - key: "key1"
+    operator: "Equal"
+    value: "value1"
+    effect: "NoSchedule"
+
+```
+
### Liveness, readiness, and startup probes
Pega uses liveness, readiness, and startup probes to determine application health in your deployments. For an overview of these probes, see [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). Configure a probe for *liveness* to determine if a Pod has entered a broken state; configure it for *readiness* to determine if the application is available to be exposed; configure it for *startup* to determine if a pod is ready to be checked for liveness. You can configure probes independently for each tier. If not explicitly configured, default probes are used during the deployment. Set the following parameters as part of a `livenessProbe`, `readinessProbe`, or `startupProbe` configuration.
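For illustration, a tier could tune its probes with the standard Kubernetes probe timing fields; the exact set of supported fields should be checked against the parameter table that follows, so treat this as a sketch:

```yaml
tier:
- name: "web"
  livenessProbe:
    initialDelaySeconds: 200   # wait before the first liveness check
    periodSeconds: 30          # interval between checks
    timeoutSeconds: 20         # per-check timeout
    failureThreshold: 3        # failures tolerated before the pod is restarted
  readinessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
```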
diff --git a/charts/pega/charts/installer/config/prlog4j2.xml b/charts/pega/charts/installer/config/prlog4j2.xml
index b54b95d1a..c22a83cb1 100644
--- a/charts/pega/charts/installer/config/prlog4j2.xml
+++ b/charts/pega/charts/installer/config/prlog4j2.xml
@@ -130,9 +130,6 @@
- [three removed XML lines; their markup was not preserved in this copy of the diff]
diff --git a/charts/pega/config/deploy/context.xml.tmpl b/charts/pega/config/deploy/context.xml.tmpl
index 1ef3717b3..814c61fe8 100644
--- a/charts/pega/config/deploy/context.xml.tmpl
+++ b/charts/pega/config/deploy/context.xml.tmpl
@@ -21,8 +21,7 @@
minEvictableIdleTimeMillis="60000"
/>
- {{ if or .Env.SET_RW .Env.JDBC_RW_URL }}
- [removed <Resource> element; its XML markup was not preserved in this copy of the diff]
- {{ end }}
{{ if and .Env.JDBC_RO_URL .Env.DB_RO_USERNAME .Env.DB_RO_PASSWORD }}
# labelSelector: