Enhancements:
- tensor network fitting: add `method="tree"` for when the ansatz is a tree - `tensor_network_fit_tree` (sketch below)
- tensor network fitting: fix `method="als"` for complex dtype networks
- tensor network fitting: allow `method="als"` to use an iterative solver suited to much larger tensors, by default a custom conjugate gradient implementation.
- `tensor_network_distance` and fitting: support hyper indices explicitly via the `output_inds` kwarg (sketch below)
- add `tn.make_overlap` and `tn.overlap` for computing the overlap between two tensor networks, $\langle O | T \rangle$, with explicit handling of outer indices to address hyper networks. Add `output_inds` to `tn.norm` and `tn.make_norm` also, as well as the `squared` kwarg. (sketch below)
- replace all `numba` based parallelism (`prange` and parallel vectorize) with explicit thread pool based parallelism. This should be more reliable, with no need to set `NUMBA_NUM_THREADS` anymore. Remove the env var `QUIMB_NUMBA_PAR`.
- `Circuit`: add `dtype` and `convert_eager` options. `dtype` specifies which dtype the computation should be performed in. `convert_eager` specifies whether to apply this (and any `to_backend` calls) as soon as gates are applied (the default for MPS circuit simulation) or just prior to contraction (the default for exact contraction simulation). (sketch below)
- `tn.full_simplify`: add a `check_zero` option (by default set to `"auto"`) which explicitly checks for zero tensor norms when equalizing norms, to avoid `log10(norm)` resulting in `-inf` or `nan`. Since it creates a data dependency that breaks e.g. `jax` tracing, it is optional. (sketch below)
- `schematic.Drawing`: add `shorten` kwarg to line and curve drawing, with examples in the docs. (sketch below)
- `TensorNetwork`: add `.backend` and `.dtype_name` properties. (sketch below)
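A minimal sketch of the new fitting options, assuming the existing `TensorNetwork.fit` interface; the `solver` kwarg name and value are assumptions, check the docstrings:

```python
import quimb.tensor as qtn

# target network and a lower bond dimension ansatz; an MPS is a
# tree, so method="tree" applies here
target = qtn.MPS_rand_state(8, bond_dim=8)
ansatz = qtn.MPS_rand_state(8, bond_dim=4)

# fit the ansatz to the target using the new tree method
fitted = ansatz.fit(target, method="tree")

# ALS, which now also handles complex dtypes; passing an explicit
# iterative solver here is hypothetical usage, the default is
# already the custom conjugate gradient implementation
fitted_als = ansatz.fit(target, method="als", solver="cg")
```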
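A sketch of passing `output_inds` explicitly to `tensor_network_distance`, shown here with ordinary (non-hyper) networks for brevity:

```python
import quimb.tensor as qtn

tna = qtn.MPS_rand_state(6, bond_dim=4)
tnb = qtn.MPS_rand_state(6, bond_dim=4)

# previously the outer indices were always inferred; for hyper
# networks they can now be supplied explicitly
d = qtn.tensor_network_distance(tna, tnb, output_inds=tna.outer_inds())
```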
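A sketch of the new overlap methods, assuming they are available on `TensorNetwork` instances and conjugate the first network to form $\langle O | T \rangle$:

```python
import quimb.tensor as qtn

psi_o = qtn.MPS_rand_state(6, bond_dim=4, dtype="complex128")
psi_t = qtn.MPS_rand_state(6, bond_dim=4, dtype="complex128")

# directly contract the overlap <O|T>
o = psi_o.overlap(psi_t)

# or build the lazy overlap network first, e.g. to inspect or
# simplify it before contracting
tn_ovl = psi_o.make_overlap(psi_t)
assert abs(tn_ovl.contract() - o) < 1e-10
```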
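A sketch of the new `Circuit` options; the specific values are illustrative:

```python
import quimb.tensor as qtn

# perform the whole simulation in single precision, converting
# gates eagerly as they are applied
circ = qtn.Circuit(2, dtype="complex64", convert_eager=True)
circ.h(0)
circ.cnot(0, 1)
circ.amplitude("00")  # roughly 1/sqrt(2)
```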
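A sketch of the `check_zero` option, here forced on rather than left as `"auto"`; that it interacts with norm equalization as shown is an assumption based on the entry above:

```python
import quimb.tensor as qtn

tn = qtn.TN_rand_reg(12, 3, D=2, seed=42)

# equalize_norms triggers the log10(norm) bookkeeping that
# check_zero guards against; check_zero="auto" is the default
tns = tn.full_simplify(equalize_norms=True, check_zero=True)
```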
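A sketch of the `shorten` kwarg for `schematic.Drawing`, assuming `line` and `curve` take coordinate tuples as in the existing drawing API; the exact semantics of the `shorten` value are covered by the new docs examples:

```python
from quimb.schematic import Drawing

d = Drawing()
# shorten pulls the drawn line in from its endpoints, e.g. so it
# doesn't overlap markers placed at the coordinates
d.line((0, 0), (1, 1), shorten=0.1)
d.curve([(1, 0), (0.5, 0.5), (0, 1)], shorten=0.1)
```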
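And the new `TensorNetwork` properties:

```python
import quimb.tensor as qtn

tn = qtn.MPS_rand_state(4, bond_dim=2, dtype="complex64")
tn.backend     # -> 'numpy'
tn.dtype_name  # -> 'complex64'
```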
PRs:
- Circuit: add default dtype and convert_eager options by @jcmgray in #273
- add fit(method="tree") and fix ALS for complex TNs by @jcmgray in #274
Full Changelog: v1.9.0...v1.10.0