
v1.10.0

Released by @jcmgray on 18 Dec 23:49

Enhancements

  • tensor network fitting: add method="tree" for when the ansatz is a tree - tensor_network_fit_tree (see the fitting sketch after this list)
  • tensor network fitting: fix method="als" for complex dtype networks
  • tensor network fitting: allow method="als" to use an iterative solver suited to much larger tensors, by default a custom conjugate gradient implementation
  • tensor_network_distance and fitting: support hyper indices explicitly via the output_inds kwarg
  • add tn.make_overlap and tn.overlap for computing the overlap between two tensor networks, $\langle O | T \rangle$, with explicit handling of outer indices to support hyper networks. Also add output_inds and the squared kwarg to tn.norm and tn.make_norm (overlap example below).
  • replace all numba based parallelism (prange and parallel vectorize) with explicit thread pool based parallelism. This should be more reliable, and there is no longer any need to set NUMBA_NUM_THREADS. The env var QUIMB_NUMBA_PAR has been removed.
  • Circuit: add dtype and convert_eager options. dtype specifies the data type the computation should be performed in. convert_eager specifies whether to apply this conversion (and any to_backend calls) as soon as gates are applied (the default for MPS circuit simulation) or only just prior to contraction (the default for exact contraction simulation). See the Circuit sketch after this list.
  • tn.full_simplify: add a check_zero option (by default "auto") which explicitly checks for zero tensor norms when equalizing norms, to avoid log10(norm) producing -inf or nan. Since this check creates a data dependency that breaks e.g. jax tracing, it is optional (example below).
  • schematic.Drawing: add a shorten kwarg to line and curve drawing, with examples in the docs (sketch below).
  • TensorNetwork: add .backend and .dtype_name properties (example below).
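
A minimal sketch of the fitting changes, assuming the standard quimb.tensor interface where TensorNetwork.fit returns a fitted copy by default; the sizes and bond dimensions here are illustrative only:

```python
import quimb.tensor as qtn

# an illustrative target, and a smaller ansatz to fit to it
target = qtn.MPS_rand_state(10, bond_dim=8)
ansatz = qtn.MPS_rand_state(10, bond_dim=4)

# an MPS is a tree, so the new direct tree fitting method applies
fitted = ansatz.fit(target, method="tree")

# ALS now handles complex dtypes, and for much larger tensors
# can use an iterative (conjugate gradient) solver internally
fitted_als = ansatz.fit(target, method="als")

# how close did we get?
print(qtn.tensor_network_distance(fitted, target))
```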
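
A sketch of the new overlap methods, assuming make_overlap and overlap are TensorNetwork methods taking the other network as their first argument:

```python
import quimb.tensor as qtn

psi = qtn.MPS_rand_state(8, bond_dim=4)
phi = qtn.MPS_rand_state(8, bond_dim=4)

# lazy overlap network <phi|psi>, useful for inspecting or
# customizing the contraction
tn_ov = psi.make_overlap(phi)

# the contracted overlap value itself
x = psi.overlap(phi)
```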
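
The new Circuit options, sketched with the generic apply_gate interface; dtype and convert_eager are from the note above, while the gates themselves are illustrative:

```python
import quimb.tensor as qtn

# perform the whole computation in single precision; convert_eager=True
# casts gate tensors as soon as they are applied, rather than just
# prior to contraction
circ = qtn.Circuit(4, dtype="complex64", convert_eager=True)
circ.apply_gate("H", 0)
circ.apply_gate("CNOT", 0, 1)

print(circ.amplitude("0000"))
```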
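
A sketch of check_zero, assuming it is passed directly to full_simplify; the norm network here is just an example input:

```python
import quimb.tensor as qtn

tn = qtn.MPS_rand_state(8, bond_dim=4).make_norm()

# explicitly guard against zero tensor norms while equalizing norms
tn_checked = tn.full_simplify(check_zero=True)

# ...but turn the check off inside e.g. a jax traced function,
# since it introduces a data dependency
tn_traced = tn.full_simplify(check_zero=False)
```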
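
A sketch of the shorten kwarg, assuming schematic line and curve calls take coordinate tuples as in the existing docs; the coordinates and shorten value are illustrative:

```python
from quimb.schematic import Drawing

d = Drawing()

# shorten trims a little off each end of the line, e.g. to
# leave a visible gap around node markers
d.line((0, 0), (1, 1), shorten=0.1)

# curves through intermediate control points can be shortened too
d.curve([(1, 0), (0.5, 0.5), (0, 1)], shorten=0.1)
```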
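
And the new convenience properties, shown on a small random MPS:

```python
import quimb.tensor as qtn

psi = qtn.MPS_rand_state(4, bond_dim=2)
print(psi.backend)     # e.g. 'numpy'
print(psi.dtype_name)  # e.g. 'float64'
```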

PRs:

  • Circuit: add default dtype and convert_eager options by @jcmgray in #273
  • add fit(method="tree") and fix ALS for complex TNs by @jcmgray in #274

Full Changelog: v1.9.0...v1.10.0