Creating the `LightningTensor` device class based on the new API (#671)

Status: Merged

Commits (107 total; the changes below are shown from the first 76 commits):
- `06f1285` empty commit (triggering CI) (PietropaoloFrisoni)
- `6ffe0be` Auto update version (github-actions[bot])
- `95d2bf7` Definition of the two front-end classes (PietropaoloFrisoni)
- `3a35a09` adding the `lightning_tensor` string to the supported backends in `se…` (PietropaoloFrisoni)
- `568df4c` adding `__init__.py` file to directly import the `lightning_tensor` d… (PietropaoloFrisoni)
- `ac3a397` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `2fcf200` re-naming file (PietropaoloFrisoni)
- `2a6cabd` Auto update version (github-actions[bot])
- `f3b8e8f` Creating the first prototype of initial MPS state tensor using `quimb` (PietropaoloFrisoni)
- `98891cb` providing the `backend`, `method` parameters and making `wires` optional (PietropaoloFrisoni)
- `9f3c094` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `a63241e` Changing names and structure (PietropaoloFrisoni)
- `10e5d22` Auto update version (github-actions[bot])
- `68f4296` adding method required by the new device API design (PietropaoloFrisoni)
- `f78c2f8` Auto update version (github-actions[bot])
- `f5ba10f` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `5f64582` using the `kwargs` parameter in `LightningTensor` and `CircuitMPS` in… (PietropaoloFrisoni)
- `621082c` taking some further inputs from the new device API (PietropaoloFrisoni)
- `6395940` Perhaps decided the overall structure of `LIghtningTensor` (PietropaoloFrisoni)
- `8de1a46` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `3e89306` Auto update version (github-actions[bot])
- `9510eb2` adding docs to methods (PietropaoloFrisoni)
- `a16f7de` temporary changes so that `pylint` does not complain at this stage (PietropaoloFrisoni)
- `295aee7` running `isort` (PietropaoloFrisoni)
- `710dffd` re-running formatter after `isort` (PietropaoloFrisoni)
- `870e0e4` re-running formatter after `isort` (PietropaoloFrisoni)
- `34b23a9` Applying suggested formatting change from CI (PietropaoloFrisoni)
- `f5eb35a` adding tmp unit tests (PietropaoloFrisoni)
- `95839b9` Adding `quimb` in `requirements.txt` (PietropaoloFrisoni)
- `72a54d4` runing `isort` on mps test (PietropaoloFrisoni)
- `a235592` removing `quimb` from requirement and deleting unit tests for `lightn…` (PietropaoloFrisoni)
- `6d7d879` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `4139412` Auto update version (github-actions[bot])
- `b0a55b3` re-inserting unit tests with an additional `yml` file (PietropaoloFrisoni)
- `ed1ba39` running isort on quimb test (PietropaoloFrisoni)
- `6da1c94` changing name of yml file (PietropaoloFrisoni)
- `54d430d` preventing error in import (PietropaoloFrisoni)
- `943af7b` updating yml file (PietropaoloFrisoni)
- `9c6d5e6` inserting `quimb` package in requirements-dev (PietropaoloFrisoni)
- `a7e7327` strange error with `quimb` (PietropaoloFrisoni)
- `73428a6` strange error with `quimb` (PietropaoloFrisoni)
- `9729440` specifying scipy version (PietropaoloFrisoni)
- `1d0bce7` removing installation of scipy from yml file (PietropaoloFrisoni)
- `2a4b1cd` removing the new `yml` file (PietropaoloFrisoni)
- `706dc93` testing if tests are tested (PietropaoloFrisoni)
- `b5c0a63` Covering all lines in tests (PietropaoloFrisoni)
- `50928ad` forgot final line for formatter (PietropaoloFrisoni)
- `1ed59a3` Python formatter on CI complaints (PietropaoloFrisoni)
- `8108cfd` covering missing lines (PietropaoloFrisoni)
- `c9c3cb2` formatter on CI complaints (PietropaoloFrisoni)
- `69f9ce0` Trying not to skip test if Cpp is enabled (PietropaoloFrisoni)
- `159418b` skipping tests if Cpp is enabled (PietropaoloFrisoni)
- `8789d5c` removing the only line not covered by tests so far (PietropaoloFrisoni)
- `a033eac` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `2df5486` Auto update version (github-actions[bot])
- `b470af1` Applying suggestions from code review and making the `state` attribut… (PietropaoloFrisoni)
- `754dae1` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `eb348b3` Python formatter (PietropaoloFrisoni)
- `cdc585b` Merge branch 'lightning-tensor-device' of https://github.com/PennyLan… (PietropaoloFrisoni)
- `f9dc84e` removing params from `QuimbMPS` (PietropaoloFrisoni)
- `651730d` Auto update version (github-actions[bot])
- `eb84ded` removing `**kwargs` from `QuimbMPS` (PietropaoloFrisoni)
- `36effb4` Merge branch 'lightning-tensor-device' of https://github.com/PennyLan… (PietropaoloFrisoni)
- `3925d37` removing unnecessary param at this stage (PietropaoloFrisoni)
- `da0518b` covering test line (PietropaoloFrisoni)
- `df2350d` formatter... (PietropaoloFrisoni)
- `017a924` removing param description (PietropaoloFrisoni)
- `ba89c13` Making `pylint` happy (PietropaoloFrisoni)
- `505e54a` forgot new arg in test (PietropaoloFrisoni)
- `c0a9df9` Updating base class and `preprocess` function (PietropaoloFrisoni)
- `364ff80` Updating `LightningTensor` class with new names from more advanced PR (PietropaoloFrisoni)
- `1629675` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `aebfd13` Auto update version (github-actions[bot])
- `f3c5f40` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `6333163` Auto update version (github-actions[bot])
- `dd60aa9` Triggering CI (PietropaoloFrisoni)
- `f62400d` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `b419e86` Auto update version (github-actions[bot])
- `74c6562` Trying to remove pin from `quimb` in `requirements.dev` (PietropaoloFrisoni)
- `cad8e60` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `d13d373` Auto update version (github-actions[bot])
- `5286592` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `5487b57` Auto update version (github-actions[bot])
- `5e47874` Removing infos on derivatives and using config options to pass parame… (PietropaoloFrisoni)
- `6945ae7` Merge branch 'lightning-tensor-device' of https://github.com/PennyLan… (PietropaoloFrisoni)
- `804df34` Usual `pylint` failures (PietropaoloFrisoni)
- `b1fbe3e` Trying to solve formatting errors (PietropaoloFrisoni)
- `9dcff51` typo in docstring (PietropaoloFrisoni)
- `50fa0e1` Sunday update: improved docstrings and structure (PietropaoloFrisoni)
- `0d4c870` Removing method that was supposed to be in next PR (PietropaoloFrisoni)
- `e08afb0` removing old TODO comment (PietropaoloFrisoni)
- `eaa6e8d` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `e3ed59f` Removing changes from the `setup.py` file (PietropaoloFrisoni)
- `11fae6d` restoring previous format to `setup.py` (PietropaoloFrisoni)
- `9c3d630` Auto update version from '0.36.0-dev34' to '0.36.0-dev41' (ringo-but-quantum)
- `2bff970` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `2e91195` Auto update version from '0.36.0-dev40' to '0.36.0-dev41' (ringo-but-quantum)
- `149d126` Removing kwargs as suggested from code review (PietropaoloFrisoni)
- `9ef5625` Addressing comments from CR (PietropaoloFrisoni)
- `6a894ca` Skipping tests if CPP binary is available (PietropaoloFrisoni)
- `bb5f7f0` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `5981e98` Auto update version from '0.36.0-dev42' to '0.36.0-dev43' (ringo-but-quantum)
- `458afbe` Merge branch 'master' into lightning-tensor-device (PietropaoloFrisoni)
- `aa1f658` Auto update version from '0.36.0-dev43' to '0.36.0-dev44' (ringo-but-quantum)
- `54ebe91` Restoring name in changelog (?) (PietropaoloFrisoni)
- `fd82258` Increasing time limit for Python tests (PietropaoloFrisoni)
- `13d1c11` Applying suggestions from code review (PietropaoloFrisoni)
```diff
@@ -16,4 +16,4 @@
 Version number (major.minor.patch[-label])
 """

-__version__ = "0.36.0-dev30"
+__version__ = "0.36.0-dev31"
```
`pennylane_lightning/lightning_tensor/__init__.py` (new file, 18 lines):

```python
# Copyright 2018-2024 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PennyLane lightning_tensor package."""

from pennylane_lightning.core import __version__

from .lightning_tensor import LightningTensor
```
`pennylane_lightning/lightning_tensor/lightning_tensor.py` (new file, 337 additions, 0 deletions):
```python
# Copyright 2018-2024 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This module contains the LightningTensor class that inherits from the new device interface.
"""
from dataclasses import replace
from numbers import Number
from typing import Callable, Optional, Sequence, Tuple, Union

import numpy as np
import pennylane as qml
from pennylane.devices import DefaultExecutionConfig, Device, ExecutionConfig
from pennylane.devices.modifiers import simulator_tracking, single_tape_support
from pennylane.tape import QuantumTape
from pennylane.transforms.core import TransformProgram
from pennylane.typing import Result, ResultBatch

from .quimb._mps import QuimbMPS


Result_or_ResultBatch = Union[Result, ResultBatch]
QuantumTapeBatch = Sequence[QuantumTape]
QuantumTape_or_Batch = Union[QuantumTape, QuantumTapeBatch]
PostprocessingFn = Callable[[ResultBatch], Result_or_ResultBatch]


_backends = frozenset({"quimb"})
# The set of supported backends.

_methods = frozenset({"mps"})
# The set of supported methods.


def accepted_backends(backend: str) -> bool:
    """A function that determines whether or not a backend is supported by ``lightning.tensor``."""
    return backend in _backends


def accepted_methods(method: str) -> bool:
    """A function that determines whether or not a method is supported by ``lightning.tensor``."""
    return method in _methods


@simulator_tracking
@single_tape_support
class LightningTensor(Device):
    """PennyLane Lightning Tensor device.

    A device to perform tensor network operations on a quantum circuit.

    Args:
        wires (int): The number of wires to initialize the device with.
            Defaults to ``None`` if not specified.
        backend (str): Supported backend. Must be one of ``quimb`` or ``cutensornet``.
        method (str): Supported method. Must be one of ``mps`` or ``tn``.
        shots (int): How many times the circuit should be evaluated (or sampled) to estimate
            the expectation values. Currently, it can only be ``None``, so that computation of
            statistics like expectation values and variances is performed analytically.
        c_dtype: Datatypes for statevector representation. Must be one of
            ``np.complex64`` or ``np.complex128``.
        **kwargs: keyword arguments.
    """

    # TODO: decide whether to move some of the attributes in interfaces classes
    # pylint: disable=too-many-instance-attributes

    # So far we just insert the options for MPS simulator
    _device_options = (
        "apply_reverse_lightcone",
        "backend",
        "c_dtype",
        "cutoff",
        "method",
        "max_bond_dim",
        "measure_algorithm",
        "return_tn",
        "rehearse",
    )

    _new_API = True

    # TODO: decide if `backend` and `method` should be keyword args as well
    # pylint: disable=too-many-arguments
    def __init__(
        self,
        *,
        wires=None,
        backend="quimb",
        method="mps",
        shots=None,
        c_dtype=np.complex128,
        **kwargs,
    ):
        if not accepted_backends(backend):
            raise ValueError(f"Unsupported backend: {backend}")

        if not accepted_methods(method):
            raise ValueError(f"Unsupported method: {method}")

        if shots is not None:
            raise ValueError("LightningTensor does not support the `shots` parameter.")

        super().__init__(wires=wires, shots=shots)

        self._num_wires = len(self.wires) if self.wires else 0
        self._backend = backend
        self._method = method
        self._c_dtype = c_dtype

        # options for MPS
        self._max_bond_dim = kwargs.get("max_bond_dim", None)
        self._cutoff = kwargs.get("cutoff", 1e-16)
        self._measure_algorithm = kwargs.get("measure_algorithm", None)

        # common options (MPS and TN)
        self._apply_reverse_lightcone = kwargs.get("apply_reverse_lightcone", None)
        self._return_tn = kwargs.get("return_tn", None)
        self._rehearse = kwargs.get("rehearse", None)

        self._interface = None

        # TODO: implement the remaining combs of `backend` and `interface`
        if self.backend == "quimb" and self.method == "mps":
            self._interface = QuimbMPS(self._num_wires, self._c_dtype)

    @property
    def name(self):
        """The name of the device."""
        return "lightning.tensor"

    @property
    def num_wires(self):
        """Number of wires addressed on this device."""
        return self._num_wires

    @property
    def backend(self):
        """Supported backend."""
        return self._backend

    @property
    def method(self):
        """Supported method."""
        return self._method

    @property
    def c_dtype(self):
        """State vector complex data type."""
        return self._c_dtype

    dtype = c_dtype

    def _setup_execution_config(self, config):
        """
        Update the execution config with choices for how the device should be used and the device options.
        """
        updated_values = {}
        if config.use_device_gradient is None:
            updated_values["use_device_gradient"] = config.gradient_method in (
                "best",
                "adjoint",
            )
        if config.grad_on_execution is None:
            updated_values["grad_on_execution"] = True

        new_device_options = dict(config.device_options)
        for option in self._device_options:
            if option not in new_device_options:
                new_device_options[option] = getattr(self, f"_{option}", None)

        return replace(config, **updated_values, device_options=new_device_options)

    def preprocess(
        self,
        execution_config: ExecutionConfig = DefaultExecutionConfig,
    ):
        """This function defines the device transform program to be applied and an updated device configuration.

        Args:
            execution_config (Union[ExecutionConfig, Sequence[ExecutionConfig]]): A data structure describing the
                parameters needed to fully describe the execution.

        Returns:
            TransformProgram, ExecutionConfig: A transform program that when called returns :class:`~.QuantumTape`'s that the
            device can natively execute as well as a postprocessing function to be called after execution, and a configuration
            with unset specifications filled in.

        This device:

        * Supports any qubit operations that provide a matrix.
        * Currently does not support finite shots.
        """
        config = self._setup_execution_config(execution_config)

        program = TransformProgram()

        # TODO: remove comments in next PR
        # program.add_transform(validate_measurements, name=self.name)
        # program.add_transform(
        #     validate_observables, accepted_observables, name=self.name
        # )
        # program.add_transform(validate_device_wires, self.wires, name=self.name)

        return program, config

    def execute(
        self,
        circuits: QuantumTape_or_Batch,
        execution_config: ExecutionConfig = DefaultExecutionConfig,
    ) -> Result_or_ResultBatch:
        """Execute a circuit or a batch of circuits and turn it into results.

        Args:
            circuits (Union[QuantumTape, Sequence[QuantumTape]]): the quantum circuits to be executed.
            execution_config (ExecutionConfig): a datastructure with additional information required for execution.

        Returns:
            TensorLike, tuple[TensorLike], tuple[tuple[TensorLike]]: A numeric result of the computation.
        """
        # TODO: remove comments in next PR
        # return self._interface.execute(circuits, execution_config)

    def supports_derivatives(
        self,
        execution_config: Optional[ExecutionConfig] = None,
        circuit: Optional[qml.tape.QuantumTape] = None,
    ) -> bool:
        """Check whether or not derivatives are available for a given configuration and circuit.

        Args:
            execution_config (ExecutionConfig): The configuration of the desired derivative calculation.
            circuit (QuantumTape): An optional circuit to check derivatives support for.

        Returns:
            Bool: Whether or not a derivative can be calculated provided the given information.
        """
        # TODO: call the function implemented in the appropriate interface

    def compute_derivatives(
        self,
        circuits: QuantumTape_or_Batch,
        execution_config: ExecutionConfig = DefaultExecutionConfig,
    ):
        """Calculate the jacobian of either a single or a batch of circuits on the device.

        Args:
            circuits (Union[QuantumTape, Sequence[QuantumTape]]): the circuits to calculate derivatives for.
            execution_config (ExecutionConfig): a datastructure with all additional information required for execution.

        Returns:
            Tuple: The jacobian for each trainable parameter.
        """
        # TODO: call the function implemented in the appropriate interface

    def execute_and_compute_derivatives(
        self,
        circuits: QuantumTape_or_Batch,
        execution_config: ExecutionConfig = DefaultExecutionConfig,
    ):
        """Compute the results and jacobians of circuits at the same time.

        Args:
            circuits (Union[QuantumTape, Sequence[QuantumTape]]): the circuits or batch of circuits.
            execution_config (ExecutionConfig): a datastructure with all additional information required for execution.

        Returns:
            tuple: A numeric result of the computation and the gradient.
        """
        # TODO: call the function implemented in the appropriate interface

    def supports_vjp(
        self,
        execution_config: Optional[ExecutionConfig] = None,
        circuit: Optional[QuantumTape] = None,
    ) -> bool:
        """Whether or not this device defines a custom vector jacobian product.

        Args:
            execution_config (ExecutionConfig): The configuration of the desired derivative calculation.
            circuit (QuantumTape): An optional circuit to check derivatives support for.

        Returns:
            Bool: Whether or not a derivative can be calculated provided the given information.
        """
        # TODO: call the function implemented in the appropriate interface

    def compute_vjp(
        self,
        circuits: QuantumTape_or_Batch,
        cotangents: Tuple[Number],
        execution_config: ExecutionConfig = DefaultExecutionConfig,
    ):
        r"""The vector jacobian product used in reverse-mode differentiation.

        Args:
            circuits (Union[QuantumTape, Sequence[QuantumTape]]): the circuit or batch of circuits.
            cotangents (Tuple[Number, Tuple[Number]]): Gradient-output vector. Must have shape matching the output shape of the
                corresponding circuit. If the circuit has a single output, ``cotangents`` may be a single number, not an iterable
                of numbers.
            execution_config (ExecutionConfig): a datastructure with all additional information required for execution.

        Returns:
            tensor-like: A numeric result of computing the vector jacobian product.
        """
        # TODO: call the function implemented in the appropriate interface

    def execute_and_compute_vjp(
        self,
        circuits: QuantumTape_or_Batch,
        cotangents: Tuple[Number],
        execution_config: ExecutionConfig = DefaultExecutionConfig,
    ):
        """Calculate both the results and the vector jacobian product used in reverse-mode differentiation.

        Args:
            circuits (Union[QuantumTape, Sequence[QuantumTape]]): the circuit or batch of circuits to be executed.
            cotangents (Tuple[Number, Tuple[Number]]): Gradient-output vector. Must have shape matching the output shape of the
                corresponding circuit.
            execution_config (ExecutionConfig): a datastructure with all additional information required for execution.

        Returns:
            Tuple, Tuple: the result of executing the scripts and the numeric result of computing the vector jacobian product
        """
        # TODO: call the function implemented in the appropriate interface
```
Review comment:

> This changelog would be better placed in Release 0.37.0-dev. Let's come back later and fix that.
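The `_setup_execution_config` method in the diff fills in unset execution choices and merges device options without mutating the incoming config, using `dataclasses.replace`. The following is a self-contained sketch of that pattern under stated assumptions: the `Config` dataclass here is a hypothetical stand-in, not PennyLane's `ExecutionConfig`, and `setup_execution_config` is a free-function version of the method.

```python
from dataclasses import dataclass, field, replace
from typing import Optional


@dataclass(frozen=True)
class Config:
    """Hypothetical stand-in for ``pennylane.devices.ExecutionConfig``."""

    gradient_method: str = "best"
    use_device_gradient: Optional[bool] = None
    device_options: dict = field(default_factory=dict)


def setup_execution_config(config: Config, device_defaults: dict) -> Config:
    """Fill unset choices and merge device options, returning a new config."""
    updated_values = {}
    if config.use_device_gradient is None:
        # As in the diff, only "best" and "adjoint" route gradients to the device.
        updated_values["use_device_gradient"] = config.gradient_method in ("best", "adjoint")

    # User-supplied options take precedence over the device's defaults.
    new_device_options = dict(config.device_options)
    for option, value in device_defaults.items():
        new_device_options.setdefault(option, value)

    # `replace` builds a new (frozen) config; the input is left untouched.
    return replace(config, **updated_values, device_options=new_device_options)
```

Returning a fresh config instead of mutating the argument keeps preprocessing side-effect free, which matters because the same `ExecutionConfig` may be shared across a batch of tapes.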