Note:
- Starting from September 2, 2020, all release notes of TiDB Operator will be maintained in pingcap/docs-tidb-operator.
- You can read the release notes of all versions of TiDB Operator at PingCAP Docs.
- `TableFilter` is added to `BackupSpec` and `RestoreSpec`. `TableFilter` supports backing up specific databases or tables with Dumpling or BR and supports restoring specific databases or tables with BR. `BackupSpec.Dumpling.TableFilter` is deprecated since v1.1.4; please configure `BackupSpec.TableFilter` instead. Since TiDB v4.0.3, you can configure `BackupSpec.TableFilter` to replace the `BackupSpec.BR.DB` and `BackupSpec.BR.Table` fields and configure `RestoreSpec.TableFilter` to replace the `RestoreSpec.BR.DB` and `RestoreSpec.BR.Table` fields (#3134, @sstubbs)
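    A minimal sketch of how `tableFilter` might look in a `Backup` CR, assuming the `pingcap.com/v1alpha1` API group; the name and filter rules are illustrative:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo-backup            # hypothetical name
    spec:
      tableFilter:
      - "db1.*"                    # back up all tables in db1
      - "db2.table1"               # and a single table in db2
      # ... storage and BR/Dumpling sections omitted
    ```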
- Update the version of TiDB and tools to v4.0.4 (#3135, @lichunzhu)
- Support customizing environment variables for the initializer container in the TidbMonitor CR (#3109, @kolbe)
- Support patching PVCs when the storage request is increased (#3096, @cofyc)
- Support TLS for Backup & Restore with Dumpling & TiDB Lightning (#3100, @lichunzhu)
- Support `cert-allowed-cn` for TiFlash (#3101, @DanielZhangQD)
- Add support for the `max-index-length` TiDB config option to the TidbCluster CRD (#3076, @kolbe)
- Fix goroutine leak when TLS is enabled (#3081, @DanielZhangQD)
- Fix a memory leak issue caused by etcd client when TLS is enabled (#3064, @DanielZhangQD)
- Support TLS for TiFlash (#3049, @DanielZhangQD)
- Configure the `TZ` environment variable for the admission webhook and Advanced StatefulSet controller deployed in the tidb-operator chart (#3034, @cofyc)
- Add the `cleanPolicy` field in `BackupSpec` to denote the clean policy for backup data when the Backup CR is deleted from the cluster (defaults to `Retain`). Note that before v1.1.3, TiDB Operator cleans the backup data in the remote storage when the Backup CR is deleted, so if you want to clean backup data as before, set `spec.cleanPolicy` in the `Backup` CR or `spec.backupTemplate.cleanPolicy` in the `BackupSchedule` CR to `Delete` (#3002, @lichunzhu)
- Replace `mydumper` with `dumpling` for backup. If `spec.mydumper` is configured in the `Backup` CR or `spec.backupTemplate.mydumper` is configured in the `BackupSchedule` CR, migrate it to `spec.dumpling` or `spec.backupTemplate.dumpling`; note that `spec.mydumper` or `spec.backupTemplate.mydumper` will be lost after you upgrade TiDB Operator to v1.1.3 (#2870, @lichunzhu)
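    A minimal `Backup` CR sketch combining this item and the `cleanPolicy` item above, assuming the `pingcap.com/v1alpha1` API group; all values are illustrative:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo-backup            # hypothetical name
    spec:
      cleanPolicy: Delete          # clean backup data when this CR is deleted (defaults to Retain)
      dumpling: {}                 # replaces the former spec.mydumper section
      # ... storage configuration omitted
    ```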
- Update tools in backup manager to v4.0.3 (#3019, @lichunzhu)
- Support `cleanPolicy` for the `Backup` CR to define the clean behavior of the backup data in the remote storage when the `Backup` CR is deleted (#3002, @lichunzhu)
- Add TLS support for TiCDC (#3011, @weekface)
- Add TLS support between Drainer and the downstream database server (#2993, @lichunzhu)
- Support specifying `mysqlNodePort` and `statusNodePort` for the TiDB Service spec (#2941, @lichunzhu)
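    A sketch of how these fields might be set in a `TidbCluster` CR, assuming they live under `spec.tidb.service`; the port values are illustrative:

    ```yaml
    spec:
      tidb:
        service:
          type: NodePort
          mysqlNodePort: 30020     # NodePort for the MySQL protocol port
          statusNodePort: 30040    # NodePort for the status port
    ```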
- Fix the `initialCommitTs` bug in Drainer's `values.yaml` (#2857, @weekface)
- Add the `backup` config for the TiKV server, add `enable-telemetry`, and deprecate the `disable-telemetry` config for the PD server (#2964, @lichunzhu)
- Add the commitTS info column in the `get restore` command (#2926, @lichunzhu)
- Update the Grafana version used from v6.0.1 to v6.1.6 (#2923, @lichunzhu)
- Support showing commitTS in restore status (#2899, @lichunzhu)
- Exit without error if the backup data the user tries to clean does not exist (#2916, @lichunzhu)
- Support auto-scaling by storage for TiKV in `TidbClusterAutoScaler` (#2884, @Yisaer)
- Clean temporary files in the `Backup` job with `Dumpling` to save space (#2897, @lichunzhu)
- Fail the backup job if the existing PVC's size is smaller than the storage request in the backup job (#2894, @lichunzhu)
- Support scaling and auto-failover even if a TiKV store fails during upgrade (#2886, @cofyc)
- Fix a bug that the `TidbMonitor` resource could not be set (#2878, @weekface)
- Fix an error for the monitor creation in the tidb-cluster chart (#2869, @8398a7)
- Remove `readyToScaleThresholdSeconds` in `TidbClusterAutoScaler`; TiDB Operator won't support de-noising in `TidbClusterAutoScaler` (#2862, @Yisaer)
- Update the version of TiDB Lightning used in tidb-backup-manager from v3.0.15 to v4.0.2 (#2865, @lichunzhu)
- An incompatible issue with PD 4.0.2 has been fixed. Please upgrade TiDB Operator to v1.1.2 before deploying TiDB 4.0.2 and later versions (#2809, @cofyc)
- Collect metrics for TiCDC, TiDB Lightning and TiKV Importer (#2835, @weekface)
- Update PD/TiDB/TiKV config to v4.0.2 (#2828, @DanielZhangQD)
- Fix the bug that the `PD` member might still exist after scaling in (#2793, @Yisaer)
- Support the Auto-Scaler reference in `TidbCluster` status when a `TidbClusterAutoScaler` exists (#2791, @Yisaer)
- Support configuring container lifecycle hooks and `terminationGracePeriodSeconds` in the TiDB spec (#2810, @weekface)
- Add the `additionalContainers` and `additionalVolumes` fields so that TiDB Operator can support adding sidecars to `TiDB`, `TiKV`, `PD`, etc. (#2229, @yeya24)
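    A sketch of adding a sidecar through these fields, assuming they are set per component (here under `spec.tidb`); the sidecar itself is hypothetical:

    ```yaml
    spec:
      tidb:
        additionalContainers:
        - name: log-tailer                    # hypothetical sidecar
          image: busybox
          command: ["sh", "-c", "tail -f /dev/null"]
        additionalVolumes:
        - name: scratch
          emptyDir: {}
    ```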
- Add a cross check to ensure TiKV is not scaled or upgraded at the same time (#2705, @DanielZhangQD)
- Fix the bug that TidbMonitor scrapes multiple TidbCluster resources with the same name in different namespaces when the namespace in `ClusterRef` is not set (#2746, @Yisaer)
- Update TiDB Operator examples to deploy TiDB Cluster 4.0.0 images (#2600, @kolbe)
- Add the `alertMangerAlertVersion` option to TidbMonitor (#2744, @weekface)
- Fix the loss of alert rules after a rolling upgrade (#2715, @weekface)
- Fix an issue that pods may be stuck in pending for a long time in scale-out after a scale-in (#2709, @cofyc)
- Add `EnableDashboardInternalProxy` in `PDSpec` to let users directly visit the PD Dashboard (#2713, @Yisaer)
- Fix the PV syncing error when `TidbMonitor` and `TidbCluster` have different values in `reclaimPolicy` (#2707, @Yisaer)
- Update Configuration to v4.0.1 (#2702, @Yisaer)
- Change the tidb-discovery strategy type to `Recreate` to fix the bug that more than one discovery pod may exist (#2701, @weekface)
- Expose the `Dashboard` service with the `HTTP` endpoint whether `tlsCluster` is enabled or not (#2684, @Yisaer)
- Add the `.tikv.dataSubDir` field to specify the subdirectory within the data volume to store TiKV data (#2682, @cofyc)
- Add the `imagePullSecrets` attribute to all components (#2679, @weekface)
- Enable the StatefulSet and Pod validation webhooks to work at the same time (#2664, @Yisaer)
- Emit an event if it fails to sync labels to TiKV stores (#2587, @PengJi)
- Hide the `datasource` information in the logs of `Backup` and `Restore` jobs (#2652, @Yisaer)
- Support the `DynamicConfiguration` switch in the TidbCluster spec (#2539, @Yisaer)
- Support `LoadBalancerSourceRanges` in the `ServiceSpec` for `TidbCluster` and `TidbMonitor` (#2610, @shonge)
- Support `Dashboard` metrics ability for `TidbCluster` when `TidbMonitor` is deployed (#2483, @Yisaer)
- Bump the DM version to v2.0.0-beta.1 (#2615, @tennix)
- Support setting discovery resources (#2434, @shonge)
- Support denoising for `TidbCluster` auto-scaling (#2307, @vincent178)
- Support scraping `Pump` and `Drainer` metrics in TidbMonitor (#2750, @Yisaer)
This is the GA release of TiDB Operator 1.1, which focuses on usability, extensibility, and security.
See our official documentation site for new features, guides, and instructions for running in production.
For v1.0.x users, refer to Upgrade TiDB Operator to upgrade TiDB Operator in your cluster. Note that you should read the release notes (especially breaking changes and action-required items) before the upgrade.
- Change the TiDB pod `readiness` probe from `HTTPGet` to `TCPSocket` on port 4000. This will trigger a rolling upgrade for the `tidb-server` component. You can set `spec.paused` to `true` before upgrading TiDB Operator to avoid the rolling upgrade, and set it back to `false` when you are ready to upgrade your TiDB server (#2139, @weekface)
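    For example, a sketch of the relevant part of the `TidbCluster` CR while the Operator upgrade is in progress (set it back to `false` afterwards):

    ```yaml
    spec:
      paused: true    # pause reconciliation so the probe change does not trigger a rolling upgrade yet
    ```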
- `--advertise-address` is configured for `tidb-server`, which would trigger a rolling upgrade for the TiDB server. You can set `spec.paused` to `true` before upgrading TiDB Operator to avoid the rolling upgrade, and set it back to `false` when you are ready to upgrade your TiDB server (#2076, @cofyc)
- The `--default-storage-class-name` and `--default-backup-storage-class-name` flags are abandoned, and the storage class now defaults to the Kubernetes default storage class. If you have set a default storage class different from the Kubernetes default storage class, set it explicitly in your TiDB cluster Helm or YAML files (#1581, @cofyc)
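    For example, a sketch of setting the storage class explicitly in the tidb-cluster chart's `values.yaml`; the `storageClassName` keys and the `local-storage` class name are assumptions:

    ```yaml
    pd:
      storageClassName: local-storage
    tikv:
      storageClassName: local-storage
    tidb:
      storageClassName: local-storage
    ```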
- Add the `timezone` support for all charts (#1122, @weekface).

    For the `tidb-cluster` chart, we already have the `timezone` option (`UTC` by default). If the user does not change it to a different value (for example, `Asia/Shanghai`), none of the Pods will be recreated. If the user changes it to another value (for example, `Asia/Shanghai`), all the related Pods (adding a `TZ` env) will be recreated, namely rolling updated.

    The related Pods include `pump`, `drainer`, `discovery`, `monitor`, `scheduled backup`, `tidb-initializer`, and `tikv-importer`.

    The time zone in all images maintained by TiDB Operator is `UTC`. If you use your own images, you need to make sure that the time zone inside your images is `UTC`.
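    For example, in the tidb-cluster chart's `values.yaml` (the value shown is illustrative):

    ```yaml
    timezone: Asia/Shanghai   # UTC by default; changing it recreates the related Pods
    ```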
- Fix the `TidbCluster` upgrade bug when `PodWebhook` and `Advanced StatefulSet` are both enabled (#2507, @Yisaer)
- Support preemption in `tidb-scheduler` (#2510, @cofyc)
- Update BR to v4.0.0-rc.2 to include the `auto_random` fix (#2508, @DanielZhangQD)
- Support Advanced StatefulSet for TiFlash (#2469, @DanielZhangQD)
- Sync Pump before TiDB (#2515, @DanielZhangQD)
- Improve performance by removing the `TidbControl` lock (#2489, @weekface)
- Support TiCDC in `TidbCluster` (#2362, @weekface)
- Update TiDB/TiKV/PD configuration to the 4.0.0 GA version (#2571, @Yisaer)
- TiDB Operator will not do failover for PD pods which are not desired (#2570, @Yisaer)
This is the fourth release candidate of v1.1.0, which focuses on the usability, extensibility, and security of TiDB Operator. While we encourage usage in non-critical environments, it is NOT recommended to use this version in critical environments.
- Separate TiDB client certificates can be used for each component. Users should migrate the old TLS configs of Backup and Restore to the new configs. Refer to #2403 for more details (#2403, @weekface)
- Fix the bug that the service annotations would be exposed in the `TidbCluster` specification (#2471, @Yisaer)
- Fix a bug when reconciling the TiDB service while the `healthCheckNodePort` is already generated by Kubernetes (#2438, @aylei)
- Support `TidbMonitorRef` in `TidbCluster` status (#2424, @Yisaer)
- Support setting the backup path prefix for remote storage (#2435, @onlymellb)
- Support customizing `mydumper` options in the Backup CR (#2407, @onlymellb)
- Support TiCDC in the TidbCluster CR (#2338, @weekface)
- Update BR to `v3.1.1` in the `tidb-backup-manager` image (#2425, @DanielZhangQD)
- Support creating node pools for TiFlash and CDC on ACK (#2420, @DanielZhangQD)
- Support creating node pools for TiFlash and CDC on EKS (#2413, @DanielZhangQD)
- Expose `PVReclaimPolicy` for `TidbMonitor` when storage is enabled (#2379, @Yisaer)
- Support arbitrary topology-based HA in tidb-scheduler (e.g., node zones) (#2366, @PengJi)
- Skip setting the TLS for PD dashboard when the TiDB version is earlier than 4.0.0 (#2389, @weekface)
- Support backup and restore with GCS using BR (#2267, @shuijing198799)
- Update `TiDBConfig` and `TiKVConfig` to support the `4.0.0-rc` version (#2322, @Yisaer)
- Fix the bug that when the `TidbCluster` service type is `NodePort`, the value of `NodePort` would change frequently (#2284, @Yisaer)
- Add external strategy ability for `TidbClusterAutoScaler` (#2279, @Yisaer)
- PVC will not be deleted when `TidbMonitor` gets deleted (#2374, @Yisaer)
- Support scaling for TiFlash (#2237, @DanielZhangQD)
This is the third release candidate of v1.1.0, which focuses on the usability, extensibility, and security of TiDB Operator. While we encourage usage in non-critical environments, it is NOT recommended to use this version in critical environments.
- Skip auto-failover when pods are not scheduled and perform recovery operation no matter what state failover pods are in (#2263, @cofyc)
- Support `TiFlash` metrics in `TidbMonitor` (#2341, @Yisaer)
- Do not print the `rclone` config in the Pod logs (#2343, @DanielZhangQD)
- Use `Patch` in the `periodicity` controller to avoid updating `StatefulSet` to the wrong state (#2332, @Yisaer)
- Set `enable-placement-rules` to `true` for PD if TiFlash is enabled in the cluster (#2328, @DanielZhangQD)
- Support `rclone` options in the Backup and Restore CRs (#2318, @DanielZhangQD)
- Fix the issue that StatefulSets are updated during each sync even if no changes are made to the config (#2308, @DanielZhangQD)
- Support configuring `Ingress` in `TidbMonitor` (#2314, @Yisaer)
- Fix a bug that auto-created failover pods can't be deleted when they are in the failed state (#2300, @cofyc)
- Add useful events in `TidbCluster` during upgrading and scaling when `admissionWebhook.validation.pods` in the Operator configuration is enabled (#2305, @Yisaer)
- Fix the issue that services are updated during each sync even if no changes are made to the service configuration (#2299, @DanielZhangQD)
- Fix a bug that would cause panic in the statefulset webhook when the update strategy of `StatefulSet` is not `RollingUpdate` (#2291, @Yisaer)
- Fix a panic in syncing `TidbClusterAutoScaler` status when the target `TidbCluster` does not exist (#2289, @Yisaer)
- Fix the `pdapi` cache issue while the cluster TLS is enabled (#2275, @weekface)
- Fix the config error in restore (#2250, @Yisaer)
- Support failover for TiFlash (#2249, @DanielZhangQD)
- Update the default `eks` version in terraform scripts to 1.15 (#2238, @Yisaer)
- Support upgrading for TiFlash (#2246, @DanielZhangQD)
- Add `stderr` logs from BR to the backup-manager logs (#2213, @DanielZhangQD)
- Add the `TiKVEncryptionConfig` field in `TiKVConfig`, which defines how to encrypt the data key and raw data in TiKV, and how to back up and restore the master key. See the description in `tikv_config.go` for details (#2151, @shuijing198799)
This is the second release candidate of v1.1.0, which focuses on the usability, extensibility, and security of TiDB Operator. While we encourage usage in non-critical environments, it is NOT recommended to use this version in critical environments.
- Change the TiDB pod `readiness` probe from `HTTPGet` to `TCPSocket` on port 4000. This will trigger a rolling upgrade for the `tidb-server` component. You can set `spec.paused` to `true` before upgrading TiDB Operator to avoid the rolling upgrade, and set it back to `false` when you are ready to upgrade your TiDB server (#2139, @weekface)
- Add the `status` field for the `TidbAutoScaler` CR (#2182, @Yisaer)
- Add the `spec.pd.maxFailoverCount` field to limit the max failover replicas for PD (#2184, @cofyc)
- Emit more events for `TidbCluster` and `TidbClusterAutoScaler` to help users know the TiDB running status (#2150, @Yisaer)
- Add the `AGE` column to show the creation timestamp for all CRDs (#2168, @cofyc)
- Add a switch to skip the PD Dashboard TLS configuration (#2143, @weekface)
- Support deploying TiFlash with TidbCluster CR (#2157, @DanielZhangQD)
- Add TLS support for TiKV metrics API (#2137, @weekface)
- Set PD DashboardConfig when TLS between the MySQL client and TiDB server is enabled (#2085, @weekface)
- Remove unnecessary informer caches to reduce the memory footprint of tidb-controller-manager (#1504, @aylei)
- Fix the failure that Helm cannot load the kubeconfig file when deleting the tidb-operator release during `terraform destroy` (#2148, @DanielZhangQD)
- Support configuring the Webhook TLS setting by loading a secret (#2135, @Yisaer)
- Support TiFlash in the TidbCluster CR (#2122, @DanielZhangQD)
- Fix the error that alertmanager couldn't be set in `TidbMonitor` (#2108, @Yisaer)
This is a release candidate of v1.1.0, which focuses on the usability, extensibility, and security of TiDB Operator. While we encourage usage in non-critical environments, it is NOT recommended to use this version in critical environments.
- `--advertise-address` will be configured for `tidb-server`, which would trigger a rolling upgrade for the `tidb-server` component. You can set `spec.paused` to `true` before upgrading TiDB Operator to avoid the rolling upgrade, and set it back to `false` when you are ready to upgrade your TiDB server (#2076, @cofyc)
- Add the `tlsClient.tlsSecret` field in the backup and restore spec, which supports specifying a secret name that includes the cert (#2003, @shuijing198799)
- Remove `spec.br.pd`, `spec.br.ca`, `spec.br.cert`, `spec.br.key` and add `spec.br.cluster`, `spec.br.clusterNamespace` for the `Backup`, `Restore`, and `BackupSchedule` custom resources, which makes the BR configuration more reasonable (#1836, @shuijing198799)
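    A sketch of the new BR section in a `Backup` CR after this change; the cluster name and namespace are illustrative:

    ```yaml
    spec:
      br:
        cluster: basic            # name of the TidbCluster to back up
        clusterNamespace: tidb    # namespace of that TidbCluster
    ```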
- Use `tidb-lightning` in `Restore` instead of `loader` (#2068, @Yisaer)
- Add `cert-allowed-cn` support to TiDB components (#2061, @weekface)
- Fix the PD `location-labels` configuration (#1941, @aylei)
- Able to pause and unpause the TiDB cluster deployment via `spec.paused` (#2013, @cofyc)
- Default the `max-backups` in the TiDB server configuration to `3` if the TiDB cluster is deployed by CR (#2045, @Yisaer)
- Able to configure custom environments for components (#2052, @cofyc)
- Fix the error that `kubectl get tc` cannot show correct images (#2031, @Yisaer)
- Support deploying TiDB clusters with TidbCluster and TidbMonitor CRs via Terraform on ACK (#2012, @DanielZhangQD)
- Update PDConfig for TidbCluster to PD v3.1.0 (#1928, @Yisaer)
- Support deploying TiDB clusters with TidbCluster and TidbMonitor CRs via Terraform on AWS (#2004, @DanielZhangQD)
- Update TidbConfig for TidbCluster to TiDB v3.1.0 (#1906, @Yisaer)
- Allow users to define resources for initContainers in TiDB initializer job (#1938, @tfulcrand)
- Add TLS support for Pump and Drainer (#1979, @weekface)
- Add documents and examples for auto-scaler and initializer (#1772, @Yisaer)
- Make tidb-initializer support TLS (#1931, @weekface)
- Fix the drainer installation error when `drainerName` is set (#1961, @DanielZhangQD)
- Fix some TiKV configuration keys in toml (#1887, @aylei)
- Support using a remote directory as data source for tidb-lightning (#1629, @aylei)
- Add the API document and a script that generates documentation (#1945, @Yisaer)
- Add the tikv-importer chart (#1910, @shonge)
- Fix the Prometheus scrape config issue while TLS is enabled (#1919, @weekface)
- Enable TLS between TiDB components (#1870, @weekface)
- Fix the timeout error when `.Values.admission.validation.pods` is `true` during the TiKV upgrade (#1875, @Yisaer)
- Enable TLS for MySQL clients (#1878, @weekface)
- Fix the bug which would cause broken TiDB image property (#1860, @Yisaer)
- TidbMonitor would use its namespace for the targetRef if it is not defined (#1834, @Yisaer)
- Support starting tidb-server with the `--advertise-address` parameter (#1859, @LinuxGit)
- Backup/Restore: support configuring TiKV GC life time (#1835, @LinuxGit)
- Support no secret for S3/Ceph when the OIDC authentication is used (#1817, @tirsen)
- Rename the admission webhook settings in the tidb-operator chart as follows (#1832, @Yisaer):
    - Change the setting from the previous `admission.hookEnabled.pods` to `admission.validation.pods`
    - Change the setting from the previous `admission.hookEnabled.statefulSets` to `admission.validation.statefulSets`
    - Change the setting from the previous `admission.hookEnabled.validating` to `admission.validation.pingcapResources`
    - Change the setting from the previous `admission.hookEnabled.defaulting` to `admission.mutation.pingcapResources`
    - Change the setting from the previous `admission.failurePolicy.defaulting` to `admission.failurePolicy.mutation`
    - Change the setting from the previous `admission.failurePolicy.*` to `admission.failurePolicy.validation`
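    A sketch of the new layout in the tidb-operator chart's `values.yaml`; the key paths come from the list above, while the boolean and policy values are illustrative:

    ```yaml
    admission:
      validation:
        pods: true
        statefulSets: true
        pingcapResources: false
      mutation:
        pingcapResources: true
      failurePolicy:
        validation: Fail
        mutation: Fail
    ```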
- Enable TidbCluster defaulting mutation by default which is recommended when admission webhook is used (#1816, @Yisaer)
- Fix a bug that TiKV fails to start while creating the cluster using CR with cluster TLS enabled (#1808, @weekface)
- Support using prefix in remote storage during backup/restore (#1790, @DanielZhangQD)
This is a pre-release of v1.1.0, which focuses on the usability, extensibility, and security of TiDB Operator. While we encourage usage in non-critical environments, it is NOT recommended to use this version in critical environments.
- `--default-storage-class-name` and `--default-backup-storage-class-name` are abandoned, and the storage class now defaults to the Kubernetes default storage class. If you have set a default storage class different from the Kubernetes default storage class, please set it explicitly in your TiDB cluster Helm or YAML files (#1581, @cofyc)
- Allow users to configure affinity and tolerations for `Backup` and `Restore` (#1737, @Smana)
- Allow AdvancedStatefulSet and Admission Webhook to work together (#1640, @Yisaer)
- Add a basic deployment example of managing TiDB cluster with custom resources only. (#1573, @aylei)
- Support TidbCluster Auto-scaling feature based on CPU average utilization load. (#1731, @Yisaer)
- Support user-defined TiDB server/client certificate (#1714, @weekface)
- Add an option for tidb-backup chart to allow reusing existing PVC or not for restore (#1708, @mightyguava)
- Add the `resources`, `imagePullPolicy` and `nodeSelector` fields for the tidb-backup chart (#1705, @mightyguava)
- Add more SANs (Subject Alternative Name) to the TiDB server certificate (#1702, @weekface)
- Support automatically migrating existing Kubernetes StatefulSets to Advanced StatefulSets when the AdvancedStatefulSet feature is enabled (#1580, @cofyc)
- Fix the bug in admission webhook which causes PD pod deleting error and allow the deleting pod to request for PD and TiKV when PVC is not found. (#1568, @Yisaer)
- Limit the restart rate for PD and TiKV - only one instance would be restarted each time (#1532, @Yisaer)
- Default the ClusterRef namespace for TidbMonitor to the namespace it is deployed in, and fix the bug that TidbMonitor's Pod can't be created when Spec.PrometheusSpec.logLevel is missing (#1500, @Yisaer)
- Refine logs for the `TidbMonitor` and `TidbInitializer` controllers (#1493, @aylei)
- Avoid unnecessary updates to the `Service` and `Deployment` of discovery (#1499, @aylei)
- Remove some update events that are not very useful (#1486, @weekface)
This is a pre-release of v1.1.0, which focuses on the usability, extensibility, and security of TiDB Operator. While we encourage usage in non-critical environments, it is NOT recommended to use this version in critical environments.
- ACTION REQUIRED: Add the `timezone` support for all charts (#1122, @weekface).

    For the `tidb-cluster` chart, we already have the `timezone` option (`UTC` by default). If the user does not change it to a different value (for example, `Asia/Shanghai`), none of the Pods will be recreated. If the user changes it to another value (for example, `Asia/Shanghai`), all the related Pods (adding a `TZ` env) will be recreated (rolling update).

    Regarding other charts, we don't have a `timezone` option in their `values.yaml`. We add the `timezone` option in this PR. No matter whether the user uses the old `values.yaml` or the new `values.yaml`, none of the related Pods (adding a `TZ` env) will be recreated (rolling update).

    The related Pods include `pump`, `drainer`, `discovery`, `monitor`, `scheduled backup`, `tidb-initializer`, and `tikv-importer`.

    The time zone in all images maintained by `tidb-operator` is `UTC`. If you use your own images, you need to make sure that the time zone inside your images is `UTC`.
- Support backup to S3 with Backup & Restore (BR) (#1280, @DanielZhangQD)
- Add basic defaulting and validating for `TidbCluster` (#1429, @aylei)
- Support scaling in/out with the deleted slots feature of Advanced StatefulSets (#1361, @cofyc)
- Support initializing the TiDB cluster with TidbInitializer Custom Resource (#1403, @DanielZhangQD)
- Refine the configuration schema of PD/TiKV/TiDB (#1411, @aylei)
- Set the default name of the instance label key for `tidbcluster`-owned resources to the cluster name (#1419, @aylei)
- Extend the custom resource `TidbCluster` to support managing the Pump cluster (#1269, @aylei)
- Fix the default TiKV-importer configuration (#1415, @aylei)
- Expose ephemeral-storage in resource configuration (#1398, @aylei)
- Add e2e case of operating tidb-cluster without helm (#1396, @aylei)
- Expose terraform Aliyun ACK version and specify the default version to '1.14.8-aliyun.1' (#1284, @shonge)
- Refine error messages for the scheduler (#1373, @weekface)
- Bind the cluster role `system:kube-scheduler` to the service account `tidb-scheduler` (#1355, @shonge)
- Add a new CRD TidbInitializer (#1391, @aylei)
- Upgrade the default backup image to pingcap/tidb-cloud-backup:20191217 and facilitate the `-r` option (#1360, @aylei)
- Fix the Docker ulimit configuration for the latest EKS AMI (#1349, @aylei)
- Support syncing Pump status to tidb-cluster (#1292, @shonge)
- Support automatically creating and reconciling the tidb-discovery-service for `tidb-controller-manager` (#1322, @aylei)
- Make backup and restore more universal and secure (#1276, @onlymellb)
- Manage PD and TiKV configurations in the `TidbCluster` resource (#1330, @aylei)
- Support managing the configuration of tidb-server in the `TidbCluster` resource (#1291, @aylei)
- Add a schema for the configuration of TiKV (#1306, @aylei)
- Wait for the TiDB `host:port` to be open before proceeding to initialize TiDB to speed up TiDB initialization (#1296, @cofyc)
- Remove DinD-related scripts (#1283, @shonge)
- Allow retrieving credentials from metadata on AWS and GCP (#1248, @gregwebs)
- Add the privilege to operate configmap for tidb-controller-manager (#1275, @aylei)
- Manage TiDB service in tidb-controller-manager (#1242, @aylei)
- Support the cluster-level setting for components (#1193, @aylei)
- Get the time string from the current time instead of the Pod name (#1229, @weekface)
- Operator will not resign the ddl owner anymore when upgrading tidb-servers because tidb-server will transfer ddl owner automatically on shutdown (#1239, @aylei)
- Fix the Google terraform module `use_ip_aliases` error (#1206, @tennix)
- Upgrade the default TiDB version to v3.0.5 (#1179, @shonge)
- Upgrade the base system of Docker images to the latest stable (#1178, @AstroProfundis)
- `tkctl get TiKV` can now show the store state for each TiKV Pod (#916, @Yisaer)
- Add an option to monitor across namespaces (#907, @gregwebs)
- Add the `STOREID` column to show the store ID for each TiKV Pod in `tkctl get TiKV` (#842, @Yisaer)
- Users can designate the permitting host in the chart value `values.tidb.permitHost` (#779, @shonge)
- Add the zone label and reserved resources arguments to kubelet (#871, @aylei)
- Fix an issue that kubeconfig may be destroyed in the apply phase (#861, @cofyc)
- Support canary release for the TiKV component (#869, @onlymellb)
- Make the latest charts compatible with the old controller manager (#856, @onlymellb)
- Add the basic support of TLS encrypted connections in the TiDB cluster (#750, @AstroProfundis)
- Support specifying nodeSelector, affinity, and tolerations for tidb-operator (#855, @shonge)
- Support configuring resources requests and limits for all containers of the TiDB cluster (#853, @aylei)
- Support using Kind (Kubernetes IN Docker) to set up a testing environment (#791, @xiaojingchen)
- Support ad-hoc data source to be restored with the tidb-lightning chart (#827, @tennix)
- Add the `tikvGCLifeTime` option (#835, @weekface)
- Update the default backup image to pingcap/tidb-cloud-backup:20190828 (#846, @aylei)
- Fix the Pump/Drainer data directory to avoid potential data loss (#826, @aylei)
- Fix the issue that `tkctl` outputs nothing with the `-oyaml` or `-ojson` flag, support viewing details of a specific Pod or PV, and improve the output of the `tkctl get` command (#822, @onlymellb)
- Add recommended options to mydumper: `-t 16 -F 64 --skip-tz-utc` (#828, @weekface)
- Support zonal and multi-zonal clusters in deploy/gcp (#809, @cofyc)
- Fix ad-hoc backup when the default backup name is used (#836, @DanielZhangQD)
- Add the support for tidb-lightning (#817, @tennix)
- Support restoring the TiDB cluster from a specified scheduled backup directory (#804, @onlymellb)
- Fix an exception in the log of `tkctl` (#797, @onlymellb)
- Add the `hostNetwork` field in the PD/TiKV/TiDB spec to make it possible to run TiDB components in the host network (#774, @cofyc)
- Use mdadm and RAID rather than LVM when available on GKE (#789, @gregwebs)
- Users can now expand cloud storage PV dynamically by increasing the PVC storage size (#772, @tennix)
- Support configuring node image types for PD/TiDB/TiKV node pools (#776, @cofyc)
- Add a script to delete unused disk for GKE (#771, @gregwebs)
- Support the `binlog.pump.config` and `binlog.drainer.config` configurations for Pump and Drainer (#693, @weekface)
- Prevent the Pump process from exiting with 0 if the Pump becomes `offline` (#769, @weekface)
- Introduce a new Helm chart, tidb-drainer, to facilitate the management of multiple Drainers (#744, @aylei)
- Add the backup-manager tool to support backing up, restoring, and cleaning backup data (#694, @onlymellb)
- Add `affinity` to the Pump/Drainer configuration (#741, @weekface)
- Fix the TiKV scaling failure in some cases after TiKV failover (#726, @onlymellb)
- Fix error handling for UpdateService (#718, @DanielZhangQD)
- Reduce e2e run time from 60 m to 20 m (#713, @weekface)
- Add the `AdvancedStatefulset` feature to use Advanced StatefulSet instead of the Kubernetes built-in StatefulSet (#1108, @cofyc)
- Enable automatically generating certificates for the TiDB cluster (#782, @AstroProfundis)
- Support backup to GCS (#1127, @onlymellb)
- Support configuring `net.ipv4.tcp_keepalive_time` and `net.core.somaxconn` for TiDB and configuring `net.core.somaxconn` for TiKV (#1107, @DanielZhangQD)
- Add basic e2e tests for aggregated apiserver (#1109, @aylei)
- Add the `enablePVReclaim` option to reclaim PVs when tidb-operator scales in TiKV or PD (#1037, @onlymellb)
- Unify all S3-compliant storage to support backup and restore (#1088, @onlymellb)
- Set podSecurityContext to nil by default (#1079, @aylei)
- Add tidb-apiserver in the tidb-operator chart (#1083, @aylei)
- Add new component TiDB aggregated apiserver (#1048, @aylei)
- Fix the issue that `tkctl version` does not work when the release name is unwanted (#1065, @aylei)
- Support pause for backup schedule (#1047, @onlymellb)
- Fix the issue that TiDB Loadbalancer is empty in terraform output (#1045, @DanielZhangQD)
- Fix that the `create_tidb_cluster_release` variable in the AWS terraform script does not work (#1062, @aylei)
- Enable `ConfigMapRollout` by default in the stability test (#1036, @aylei)
- Migrate to use apps/v1 and no longer support Kubernetes before 1.9 (#1012, @Yisaer)
- Suspend the ReplaceUnhealthy process for AWS TiKV auto-scaling-group (#1014, @aylei)
- Change the tidb-monitor-reloader image to pingcap/tidb-monitor-reloader:v1.0.1 (#898, @qiffang)
- Add some sysctl kernel parameter settings for tuning (#1016, @tennix)
- Support maximum retention time backups for backup schedule (#979, @onlymellb)
- Upgrade the default TiDB version to v3.0.4 (#837, @shonge)
- Fix values file customization for tidb-operator on Aliyun (#971, @DanielZhangQD)
- Add the `maxFailoverCount` limit to TiKV (#965, @weekface)
- Support setting custom tidb-operator values in the terraform script for AWS (#946, @aylei)
- Convert the TiKV capacity into MiB when it is not a multiple of GiB (#942, @cofyc)
- Fix Drainer misconfiguration (#939, @weekface)
- Support correctly deploying tidb-operator and tidb-cluster with a customized `values.yaml` (#959, @DanielZhangQD)
- Support specifying SecurityContext for PD, TiKV and TiDB Pods and enable TCP keepalive for AWS (#915, @aylei)