From 306cc91f781487b879d55f1c1a95465a8be5e704 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 13:34:47 -0700
Subject: [PATCH 01/20] Create 2022-10-18-promotion-semantics.md
---
rfcs/2022-10-18-promotion-semantics.md | 8747 ++++++++++++++++++++++++
1 file changed, 8747 insertions(+)
create mode 100644 rfcs/2022-10-18-promotion-semantics.md
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/2022-10-18-promotion-semantics.md
new file mode 100644
index 000000000..c9f010b20
--- /dev/null
+++ b/rfcs/2022-10-18-promotion-semantics.md
@@ -0,0 +1,8747 @@
+
+
+
+
+## RFC: Making dtype promotion semantics in Tensorflow more consistent
+
+
+| Username   | Role     | Status   | Last Change |
+|------------|----------|----------|-------------|
+| rohanj     | Approver | PENDING  | 2022-09-12  |
+| wangpeng   | Approver | APPROVED | 2022-09-20  |
+| yanhuasun  | Approver | PENDING  | 2022-09-12  |
+| fchollet   | Reviewer | PENDING  | 2022-09-12  |
+| vanderplas | Reviewer | PENDING  | 2022-09-12  |
+
+
+**Status**: Draft | In Review | Approved | Final | Obsolete
+
+**Authors**: , LDAP
+
+**Contributors**:
+
+**Sponsors**: , , LDAP
+
+**Last Updated**:
+
+go/tf-np-style-promotion
+
+
+## **Note: the main design was updated on 2022-09-05 with revised proposed modes. The deprecated modes can be found in the “[Deprecated]: old propositions before 2022/09/05” section at the bottom of this doc.**
+
+
+## **Acknowledgement**
+
+This doc was written by referencing investigations by an ex-Googler, as well as the TF-numpy and JAX projects:
+
+[Implicit promotion, inference and conversion in tensorflow](https://docs.google.com/document/d/1jOXJ1YAQAtseyYDDGm4gsr7xJ4F2FtvfwZJgNsW8cVA/edit#heading=h.lloqtrv9juf1)
+
+[tf.experimental.numpy](https://www.tensorflow.org/api_docs/python/tf/experimental/numpy)
+
+[Design of Type Promotion Semantics for JAX](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#)
+
+Special thanks to the TF-numpy project, which has already implemented the majority of the proposed dtype promotion behaviors.
+
+
+## **Objective**
+
+Currently TF has no consistent, well-defined dtype promotion rules. This document proposes a clear and consistent set of dtype promotion rules for TF. The introduced changes make TF APIs more similar to NumPy, with some differences that reflect TF’s focus on machine learning. This should make dtype promotions in TF much more consistent and predictable.
+
+[**What the doc is**] This doc discusses the preferred dtype promotion semantics/behaviors of Tensorflow (TF) and Tensorflow-numpy (TF-numpy) in the binary ops including add, sub, mul, div, pow and mod.
+
+[**What the doc is not**] This doc does not discuss the implementation plans.
+
+
+## **Motivation**
+
+In Tensorflow’s APIs, dtype promotion is very broken compared to JAX or NumPy. Many users have complained about these surprising behaviors (examples: go/broccoli-tf-add, b/158346631, b/154456219). See [this doc](https://docs.google.com/document/d/1jOXJ1YAQAtseyYDDGm4gsr7xJ4F2FtvfwZJgNsW8cVA/edit#) for a more detailed description.
+
+[TF-numpy](https://www.tensorflow.org/guide/tf_numpy#type_promotion) is a strategic project in Tensorflow. Compared to Tensorflow, TF-numpy’s dtype promotion behavior is more consistent because it [mainly relies on NumPy’s dtype promotion semantics](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/numpy_ops/np_dtypes.py;rcl=399530502;l=112). In the [long-term vision doc](https://goto.google.com/tf-numpy-path-forward), unifying the dtype promotion semantics of TF and TF-numpy is a necessary first step.
+
+The unification of TF and TF-numpy’s dtype promotion semantics is a great chance to think about what semantics should be the long-term, user-friendly solution for TF.
+
+
+## **User Benefit**
+
+There have been ongoing complaints about the inconsistent dtype promotion behaviors of TF. To give an example, switching the inputs’ positions directly affects the outcome:
+
+
+```
+c = tf.constant(1.0)
+tf.add(c, 3) # tf.Tensor(4.0, shape=(), dtype=float32)
+tf.add(3, c) # InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a int32 tensor but is a float tensor [Op:AddV2]
+```
+
+
+For brevity we will sometimes use shortened terms to represent the dtypes as shown in the following examples:
+
+
+
+* `b` means tf.bool
+* `u8` means tf.uint8
+* `i16` means tf.int16
+* `bf16` means [tf.bfloat16](https://cloud.google.com/tpu/docs/bfloat16)
+* `f64` means tf.float64
+* `c64` means tf.complex64
+* `i*` means python int
+* `f*` means python float
+* `c*` means python complex
+
+The table below summarizes the dtype promotion behavior between two TF tensors/python scalars with the dunder method `__add__` and the API `tf.add` respectively. Each cell shows the result dtype for the row dtype (first input) combined with the column dtype (second input).
+
+Table: TF dtype promotion result of dunder method `__add__` and `tf.add`
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u8   | - | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - | u8 | - | - |
+| u16  | - | - | u16 | - | - | - | - | - | - | - | - | - | - | - | - | u16 | - | - |
+| u32  | - | - | - | u32 | - | - | - | - | - | - | - | - | - | - | - | u32 | - | - |
+| u64  | - | - | - | - | u64 | - | - | - | - | - | - | - | - | - | - | u64 | - | - |
+| i8   | - | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - | i8 | - | - |
+| i16  | - | - | - | - | - | - | i16 | - | - | - | - | - | - | - | - | i16 | - | - |
+| i32  | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - | i32 | - | - |
+| i64  | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - | i64 | - | - |
+| bf16 | - | - | - | - | - | - | - | - | - | bf16 | - | - | - | - | - | bf16 | bf16 | - |
+| f16  | - | - | - | - | - | - | - | - | - | - | f16 | - | - | - | - | f16 | f16 | - |
+| f32  | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - | f32 | f32 | - |
+| f64  | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - | f64 | f64 | - |
+| c64  | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | - | c64 | c64 | c64 |
+| c128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | c128 | c128 | c128 |
+| i*   | - | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i32 |  |  |
+| f*   | - | - | - | - | - | - | - | - | - | bf16 | f16 | f32 | f64 | c64 | c128 | f32 | f32 |  |
+| c*   | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | c128 | c128 | c128 | c128 |
+
+
+
+
+In the table, the dash symbol `-` means unsupported: an error gets raised if the two dtypes are used in the call to that op. The cells highlighted in red are only supported in `__add__`, and the cells in yellow are only supported in `tf.add`. In summary, the existing TF APIs have these significant issues:
+
+
+
+1. Boolean dtypes are not supported;
+2. Implicit conversion between two TF dtypes is strictly disallowed, resulting in verbose code at times;
+3. Implicit conversion from some python dtypes to some TF dtypes is disallowed;
+4. `tf.add` is not even commutative: switching the positions of the two arguments sometimes raises an error and sometimes returns a correct result.
+
+When the inputs involve NumPy arrays, the behaviors can be even more broken. For brevity we have included NumPy-input behaviors in the appendix at the end of the doc.
+
+We aim to make the system commutative, predictable and correct.
+
+
+## **Design Proposal**
+
+We propose introducing three dtype promotion modes in TF:
+
+
+
+* `tf.ImplicitPromotion.ALL`
+* `tf.ImplicitPromotion.SAFE`
+* `tf.ImplicitPromotion.NONE`
+
+The three modes determine how often implicit dtype promotions happen in TF APIs. In the following examples we will use `tf.add` to demonstrate the modes. The dunder method `__add__` is expected to have the same behavior as `tf.add`. For brevity, NumPy (np) array inputs are not discussed in this section. However, the proposed modes in this RFC will treat np array inputs in the same way as tensor inputs (see more in the appendix).
+
+Table: TF dtype promotion result of `tf.add` after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow. Rows/columns highlighted in blue only show the default result without overflow of inputs.
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| u8   | u8 | u8 | u16 | u32 | u64 | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u8 | f64 | c128 |
+| u16  | u16 | u16 | u16 | u32 | u64 | i32 | i32 | i32 | i64 | f32 | f32 | f32 | f64 | c64 | c128 | u16 | f64 | c128 |
+| u32  | u32 | u32 | u32 | u32 | u64 | i64 | i64 | i64 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | u32 | f64 | c128 |
+| u64  | u64 | u64 | u64 | u64 | u64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | u64 | f64 | c128 |
+| i8   | i8 | i16 | i32 | i64 | f64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i8 | f64 | c128 |
+| i16  | i16 | i16 | i32 | i64 | f64 | i16 | i16 | i32 | i64 | f32 | f32 | f32 | f64 | c64 | c128 | i16 | f64 | c128 |
+| i32  | i32 | i32 | i32 | i64 | f64 | i32 | i32 | i32 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | i32 | f64 | c128 |
+| i64  | i64 | i64 | i64 | i64 | f64 | i64 | i64 | i64 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | i64 | f64 | c128 |
+| bf16 | bf16 | bf16 | f32 | f64 | f64 | bf16 | f32 | f64 | f64 | bf16 | f32 | f32 | f64 | c64 | c128 | bf16 | bf16 | c64 |
+| f16  | f16 | f16 | f32 | f64 | f64 | f16 | f32 | f64 | f64 | f32 | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
+| f32  | f32 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| c64  | c64 | c64 | c64 | c128 | c128 | c64 | c64 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
+| i*   | i64 | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| f*   | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
+| c*   | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c128 | c128 | c128 |
+
+
+
+
+
+### **Mode 1: tf.ImplicitPromotion.ALL**
+
+In this mode, we allow all implicit promotions to happen in the op, even if both inputs have TF dtypes. To summarize the rules of this mode:
+
+
+
+* When the two inputs have non-matching TF dtypes, they will be promoted to the lowest width dtype that guarantees the precisions and ranges of the inputs. However, the promoted integer, float and complex dtype widths are capped at 64, 64 and 128 bits respectively. For example:
+ * The summation of `tf.bfloat16` and `tf.uint8` will be type `tf.bfloat16`.
+ * The summation of `tf.int64` and `tf.float64` will be capped at type `tf.float64` despite precision loss and overflow risks.
+* When one of the inputs is a python integer/float/complex, and the other is a TF dtype:
+ * If the python scalar value falls within the range of the TF dtype, the result dtype is loosely described with the following promotion direction: TF bool -> python integer -> TF signed/unsigned integers -> python float -> TF float -> python complex/TF complex.
+ * The dtype promotion is allowed only if the python scalar is within the range of the determined dtype. Otherwise, an exception is raised. For example, `tf.constant([1], tf.int8) + 1` produces a `tf.int8` Tensor, while `tf.constant([1], tf.int8) + 1000` raises an error.
+
+This mode is intended to provide a user experience similar to NumPy behaviors.
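+
+As a rough analogue, the NumPy snippet below (a sketch for illustration only; NumPy has no bfloat16 and its scalar handling differs in some details) shows the kind of results this mode aims for:
+
+```
+import numpy as np
+
+# NumPy analogues of the ALL-mode rules above (illustrative only).
+print(np.promote_types(np.uint8, np.int8))     # int16: widened to hold both ranges
+print(np.promote_types(np.int64, np.float64))  # float64: capped despite precision loss
+print((np.array([1], np.int8) + 1).dtype)      # int8: the scalar fits the array dtype
+```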
+
+
+### **Mode 2: tf.ImplicitPromotion.SAFE**
+
+Mode `SAFE` follows the same rules as mode `ALL`, but disallows “unsafe” dtype promotions. In general, we think the following two types of conversions are unsafe:
+
+
+
+* If the inputs are both Tensors and the result dtype would be “wider” than all the inputs. For example, the summation of `tf.uint8` and `tf.int8` would be type `tf.int16`. These dtype promotions risk increasing the model’s memory consumption and slowing down computation.
+* If the inputs are both Tensors and the result dtype cannot preserve all the precisions of the inputs. For example, the summation of `tf.uint64` and `tf.float32` would be type `tf.float64`.
+
+With the above principle, if one of the inputs is a Tensor while the other is a python scalar, we only allow implicit promotion if the scalar can fit into the Tensor’s dtype. For example:
+
+
+
+* The summation between a `tf.int32` and a python float is always disallowed;
+* The summation between a `tf.uint8` and a python int is allowed only if the python int is between 0 and 255.
+
+For these disallowed dtype promotions, we require the users to explicitly cast them.
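+
+For example, a user who actually wants the widened result writes the cast explicitly. The sketch below is runnable against today's TF and only uses `tf.cast`:
+
+```
+import tensorflow as tf
+
+# Under the proposed SAFE mode, uint8 + int8 is not promoted implicitly
+# (the result would be the wider int16), so the user casts both inputs.
+a = tf.constant([1], dtype=tf.uint8)
+b = tf.constant([2], dtype=tf.int8)
+c = tf.cast(a, tf.int16) + tf.cast(b, tf.int16)  # tf.int16 tensor
+```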
+
+Another advantage of mode `SAFE` is that it avoids the silent nonassociativity that mode `ALL` exhibits in scenarios like the following, because it does not allow any dtype widening:
+
+
+
+* `(tf.zeros(tf.uint8) + tf.zeros(tf.int8)) + tf.zeros(tf.float16)` evaluates to dtype `tf.float32`;
+* `tf.zeros(tf.uint8) + (tf.zeros(tf.int8) + tf.zeros(tf.float16))` evaluates to dtype `tf.float16`.
+
+
+### **Mode 3: tf.ImplicitPromotion.NONE**
+
+The dtype behavior of mode `NONE` is the most conservative: no dtype promotion is allowed between Tensors. When the inputs include python scalars, it follows the same rule as mode `SAFE`.
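+
+A minimal sketch of the practical difference: under `NONE`, even a Tensor-Tensor promotion that `SAFE` and `ALL` would allow (e.g. `tf.int32` with `tf.int64`) requires an explicit cast, which works in today's TF:
+
+```
+import tensorflow as tf
+
+# Under the proposed NONE mode, int32 + int64 between Tensors is rejected;
+# the user chooses the result dtype explicitly.
+a = tf.constant([1], dtype=tf.int32)
+b = tf.constant([2], dtype=tf.int64)
+c = tf.cast(a, tf.int64) + b  # tf.int64 tensor
+```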
+
+
+### **Alternatives Considered: Capping the promotion system at float32**
+
+The TF-NumPy project contains another flag [allow_float64](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/numpy_ops/np_dtypes.py;rcl=399530502;l=88). When disabled, it caps the converted floating-point numbers to 32 bits, and complex numbers to 64 bits. This is useful when users want to enjoy implicit conversions in all cases and avoid any performance regressions related to double-precision floating point numbers. This feature is now used by [Trax](https://b.corp.google.com/issues/178862061).
+
+Our current plan does not involve adding this flag in TF, as it’s orthogonal to the proposed modes. If there are more user needs we can consider exposing it in TF as well.
+
+
+### **Alternatives Considered: A dtype promotion “lattice”**
+
+Unlike the approaches above, JAX used a dtype promotion lattice to define its semantics. More details can be seen in their [design doc](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#properties-of-a-type-promotion-lattice). Their lattice system has these advantages:
+
+
+
+* A lattice can describe the promotion semantics much more concisely;
+* The lattice can ensure associativity as well as commutativity. In contrast, the promotion behaviors in this proposal are not associative:
+ * `(tf.zeros(tf.uint8) + tf.zeros(tf.int8)) + tf.zeros(tf.float16)` evaluates to dtype `tf.float32`;
+ * `tf.zeros(tf.uint8) + (tf.zeros(tf.int8) + tf.zeros(tf.float16))` evaluates to dtype `tf.float16`.
+
+~~However the lattice system also introduced extra complexity in the design. For example:~~
+
+
+
+* ~~JAX ended up introducing the concepts of “weak types” in place of python types;~~
+* ~~JAX adopted very aggressive dtype promotion semantics - an int64 dtype can be promoted to a float16, resulting in higher risks of overflow.~~
+
+~~Since our ultimate goal is to help the users maintain a clear and concise mental model, adopting a dtype promotion table is an easier approach.~~
+
+Compared to a lattice system, the proposed table-based system does not guarantee associativity. However, we can avoid such usage scenarios in the proposed SAFE mode (no implicit promotion to wider dtypes). As a result, it’s not clearly justifiable to switch to a lattice system from the table-based approach, which already exists in project TF-NumPy.
+
+
+### **Alternatives Considered: Value-dependent promotion (a.k.a. “safe casting” in NumPy)**
+
+NumPy has a feature “safe casting”: the dtype promotion API, [numpy.result_type](https://numpy.org/doc/stable/reference/generated/numpy.result_type.html), can return a different dtype depending on the input python scalar value:
+
+
+```
+np.result_type(np.int8, 200) # dtype('int16')
+np.result_type(np.int8, 100) # dtype('int8')
+```
+
+
+This slightly reduces the risk of overflow. However, the result is unpredictable. NumPy is considering removing this feature in a future release (see proposal: [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html)). JAX [did not adopt this feature](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#enter-python-scalars) either. We omit it in this design.
+
+
+### **Performance Implications**
+
+For an existing piece of TF code in python, we do not expect the performance to change after switching to any of the proposed modes. This is because all the currently allowed implicit dtype promotions in TF will stay unchanged, and for the currently disallowed promotions users must already be using explicit casts, which are not affected by the change.
+
+When users develop new code, however, the new modes carry a higher chance of computation slowdown because of the implicit promotions allowed in mode `ALL`.
+
+We can carry out the following steps to mitigate user surprises:
+
+
+
+* Add benchmark tests to verify that the new modes won’t affect the performance of existing TF code.
+* Publish tutorials that help users select the most appropriate dtype promotion modes.
+* [optional] Add an API that instruments the number of implicit, unsafe dtype conversions in the program (e.g. promotions from 32 bits to 64 bits). This helps users understand whether the promotions contribute to computation slowdown in the graphs.
+
+
+### **Dependencies**
+
+This proposal is not expected to affect dependencies on Tensorflow.
+
+
+#### **Engineering Impact**
+
+
+We do not expect changes in binary size, startup time, build time or test time.
+
+The majority of the dtype promotion logic already exists in Tensorflow under the namespace `tensorflow.experimental.numpy`. However, refactoring/updates will be needed under the directory tensorflow/python/ops.
+
+
+#### **Platforms and Environments**
+
+This proposal is expected to take effect on all platforms in the python environment.
+
+
+#### **Best Practices/Tutorials and Examples**
+
+As mentioned in section “Performance Implications” we will publish tutorials that help users select the most appropriate dtype promotion modes.
+
+
+#### **Compatibility/User Impact**
+
+This is a breaking change. As mentioned above, existing code is expected to continue working. However, some tests will fail because more implicit dtype conversions are allowed. We can mitigate the user impact by adding a flag that sticks to the existing dtype promotion behaviors.
+
+The proposed changes will be rolled out in various stages:
+
+
+
+1. Default behaviors stay unchanged, with the flags introduced under the `tf.experimental` namespace to collect user feedback;
+2. Default behaviors switched to one of the modes above, while the old behaviors can be re-enabled via a flag;
+3. [optional] Completely disable the old, broken dtype promotion behaviors.
+
+
+## **Questions and Discussion Topics**
+
+
+#### **Are there changes expected in tf.Variable?**
+
+In APIs such as `tf.add` and dunder methods `__add__`, the same changes are expected to take place because these methods first read the Variable into a Tensor, then carry out the Tensor-to-Tensor computations.
+
+Currently, inplace ops such as `assign_add` are also broken. The two tables below show the result of `assign_add` when the input is a Tensor, a python scalar or a NumPy array:
+
+
+
+* When the input is a Tensor, no dtype conversion is allowed.
+* When the input is a python scalar, it defers to the variable’s dtype. However, python float cannot be converted to int, and complex cannot be converted to either int or float dtypes.
+* When the input is a NumPy array, it always gets converted to the variable’s dtype regardless of any precision loss (for example `tf.Variable(1) + np.array(1.5)` returns 2).
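+
+The snippet below illustrates the current behaviors listed above; the commented-out lines restate the failures described in the bullets:
+
+```
+import tensorflow as tf
+import numpy as np
+
+v = tf.Variable(1)             # dtype tf.int32
+v.assign_add(2)                # python int defers to the variable's dtype -> 3
+# v.assign_add(tf.constant(1, tf.int64))  # error: no dtype conversion between Tensors
+# v.assign_add(1.5)                       # error: python float cannot become int32
+print(tf.Variable(1) + np.array(1.5))     # 2: the np array is silently cast to int32
+```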
+
+Table: Result of tf.Variable (rows) inplace op `assign_add` with a Tensor or python scalar as the argument (columns).
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u8   | - | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - | u8 | - | - |
+| u16  | - | - | u16 | - | - | - | - | - | - | - | - | - | - | - | - | u16 | - | - |
+| u32  | - | - | - | u32 | - | - | - | - | - | - | - | - | - | - | - | u32 | - | - |
+| u64  | - | - | - | - | u64 | - | - | - | - | - | - | - | - | - | - | u64 | - | - |
+| i8   | - | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - | i8 | - | - |
+| i16  | - | - | - | - | - | - | i16 | - | - | - | - | - | - | - | - | i16 | - | - |
+| i32  | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - | i32 | - | - |
+| i64  | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - | i64 | - | - |
+| bf16 | - | - | - | - | - | - | - | - | - | bf16 | - | - | - | - | - | bf16 | bf16 | - |
+| f16  | - | - | - | - | - | - | - | - | - | - | f16 | - | - | - | - | f16 | f16 | - |
+| f32  | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - | f32 | f32 | - |
+| f64  | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - | f64 | f64 | - |
+| c64  | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | - | c64 | c64 | c64 |
+| c128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | c128 | c128 | c128 |
+
+
+
+
+Table: Result of tf.Variable (rows) inplace op `assign_add` with a NumPy array as the argument (columns).
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|
+| b    | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u8   | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | - | u8 | u8 | u8 | u8 | u8 |
+| u16  | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | - | u16 | u16 | u16 | u16 | u16 |
+| u32  | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | - | u32 | u32 | u32 | u32 | u32 |
+| u64  | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | - | u64 | u64 | u64 | u64 | u64 |
+| i8   | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | - | i8 | i8 | i8 | i8 | i8 |
+| i16  | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | - | i16 | i16 | i16 | i16 | i16 |
+| i32  | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | - | i32 | i32 | i32 | i32 | i32 |
+| i64  | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | - | i64 | i64 | i64 | i64 | i64 |
+| bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | - | bf16 | bf16 | bf16 | bf16 | bf16 |
+| f16  | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | - | f16 | f16 | f16 | f16 | f16 |
+| f32  | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | - | f32 | f32 | f32 | f32 | f32 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | f64 | f64 | f64 | f64 | f64 |
+| c64  | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | - | c64 | c64 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | - | c128 | c128 | c128 | c128 | c128 |
+
+
+
+
+We can make these ops more consistent as well, following the rules in the three proposed modes. However, we also have another constraint: the Variable’s dtype cannot change. As a result, any dtype promotion that results in a dtype different from the Variable’s dtype is disabled.
+
+Table: Result of tf.Variable (rows) inplace op `assign_add` with a Tensor or python scalar as the argument (columns) after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow.
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | b | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u8   | u8 | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - | u8 | - | - |
+| u16  | u16 | u16 | u16 | - | - | - | - | - | - | - | - | - | - | - | - | u16 | - | - |
+| u32  | u32 | u32 | u32 | u32 | - | - | - | - | - | - | - | - | - | - | - | u32 | - | - |
+| u64  | u64 | u64 | u64 | u64 | u64 | - | - | - | - | - | - | - | - | - | - | u64 | - | - |
+| i8   | i8 | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - | i8 | - | - |
+| i16  | i16 | i16 | - | - | - | i16 | i16 | - | - | - | - | - | - | - | - | i16 | - | - |
+| i32  | i32 | i32 | i32 | - | - | i32 | i32 | i32 | - | - | - | - | - | - | - | i32 | - | - |
+| i64  | i64 | i64 | i64 | i64 | - | i64 | i64 | i64 | i64 | - | - | - | - | - | - | i64 | - | - |
+| bf16 | bf16 | bf16 | - | - | - | bf16 | - | - | - | bf16 | - | - | - | - | - | bf16 | bf16 | - |
+| f16  | f16 | f16 | - | - | - | f16 | - | - | - | - | f16 | - | - | - | - | f16 | f16 | - |
+| f32  | f32 | f32 | f32 | - | - | f32 | f32 | - | - | f32 | f32 | f32 | - | - | - | f32 | f32 | - |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | - | f64 | f64 | - |
+| c64  | c64 | c64 | c64 | - | - | c64 | c64 | - | - | c64 | c64 | c64 | - | c64 | - | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
+
+
+
+
+
+#### **Are there also issues in conversions between Tensor and numpy arrays?**
+
+There are arguably more serious issues when Tensors and numpy arrays are mixed - wrong results can appear silently:
+
+
+```
+a = np.array(3.1)
+c = tf.constant(1)
+print(tf.add(a, c)) # InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a double tensor but is a int32 tensor [Op:AddV2]
+print(tf.add(c, a)) # tf.Tensor(4, shape=(), dtype=int32) - Even worse than exceptions
+```
+
+
+To give a comprehensive overview, the two tables below show the results of `tf.add(tensor, numpy array)` and `tf.add(numpy array, tensor)`:
+
+Table: TF dtype promotion result of `tf.add` with a Tensor as the first argument (rows) and np array in the second argument (columns).
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|
+| b    | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u8   | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | - | u8 | u8 | u8 | u8 | u8 |
+| u16  | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | - | u16 | u16 | u16 | u16 | u16 |
+| u32  | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | - | u32 | u32 | u32 | u32 | u32 |
+| u64  | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | - | u64 | u64 | u64 | u64 | u64 |
+| i8   | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | - | i8 | i8 | i8 | i8 | i8 |
+| i16  | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | - | i16 | i16 | i16 | i16 | i16 |
+| i32  | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | - | i32 | i32 | i32 | i32 | i32 |
+| i64  | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | - | i64 | i64 | i64 | i64 | i64 |
+| bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | - | bf16 | bf16 | bf16 | bf16 | bf16 |
+| f16  | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | - | f16 | f16 | f16 | f16 | f16 |
+| f32  | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | - | f32 | f32 | f32 | f32 | f32 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | f64 | f64 | f64 | f64 | f64 |
+| c64  | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | - | c64 | c64 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | - | c128 | c128 | c128 | c128 | c128 |
+
+
+
+
+Table: TF dtype promotion result of `tf.add` with a np array as the first argument (rows) and tensor in the second argument (columns).
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|
+| b    | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u8   | - | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| u16  | - | - | u16 | - | - | - | - | - | - | - | - | - | - | - | - |
+| u32  | - | - | - | u32 | - | - | - | - | - | - | - | - | - | - | - |
+| u64  | - | - | - | - | u64 | - | - | - | - | - | - | - | - | - | - |
+| i8   | - | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - |
+| i16  | - | - | - | - | - | - | i16 | - | - | - | - | - | - | - | - |
+| i32  | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - |
+| i64  | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - |
+| bf16 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| f16  | - | - | - | - | - | - | - | - | - | - | f16 | - | - | - | - |
+| f32  | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - |
+| f64  | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - |
+| c64  | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | - |
+| c128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 |
+
+
+
+
+To briefly describe the issues:
+
+
+
+1. dtypes `tf.bfloat16` and `tf.bool` are poorly supported;
+2. If the first input is a tensor, the second input (the np array) is always converted to the tensor’s dtype, resulting in significant precision loss in many scenarios;
+3. If the first input is a np array and the second is a tensor, the np array is first converted to a tensor with the corresponding dtype. TF then does not attempt any implicit dtype conversion, causing `InvalidArgumentError` whenever the two inputs have mismatching dtypes.
+
+Though the behaviors with np array inputs are not the same as with python scalars, they share the same root cause: at present, TF does not have a centralized dtype promotion system, and incorrectly uses the [dtype of the first Tensor in its list of arguments](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/math_ops.py;rcl=449753780;l=1371) to promote all other arguments. The proposed modes in this RFC will treat np array inputs in the same way as tensor inputs.
+
+
+#### **What happens to e.g. bool - bool or bool / bool?**
+
+With NumPy:
+
+
+
+* Addition between two Booleans: equivalent to OR
+* Subtraction between two Booleans: forbidden
+* Multiplication between two Booleans: AND
+* Division between two Booleans: converted to float64 then divide
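+
+A small NumPy snippet demonstrating these behaviors:
+
+```
+import numpy as np
+
+t, f = np.bool_(True), np.bool_(False)
+print(np.add(t, f))       # True: addition acts as logical OR
+print(np.multiply(t, f))  # False: multiplication acts as logical AND
+print(np.divide(t, t))    # 1.0: both inputs are converted to float64 first
+# np.subtract(t, f)       # TypeError: boolean subtract is not supported
+```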
+
+
+#### **What happens to truediv operations?**
+
+The op truediv is special because it implicitly promotes integral types. For example, when the inputs have matching dtypes, 16-bit integral types are promoted to `f32` and 32-bit integral types are promoted to `f64`.
+
+Using this rule of thumb, we can easily deduce the promotion behavior between non-matching dtypes once this proposal lands. For example:
+
+
+
+* `int8` and `int16` -> both `int16` -> `f32` (allowed in `SAFE/ALL` modes)
+* `uint8` and `uint32` -> both `uint32` -> `f64` (allowed in `SAFE/ALL` modes)
+* `int8` and `i*` -> both `int8` -> `f32` (allowed in `NONE/SAFE/ALL` modes)
+* `int8` and `uint16` -> both `int32` -> `f64` (allowed in `ALL` mode)
+
+In the above examples, the first arrow represents the dtype promotion in this proposal, and the second arrow represents the promotion in op truediv. Note that the `f64` result for `int8` and `uint16` turns out to be overkill, but it is consistent with the combination of the dtype promotion rules and the truediv promotion rules.
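+
+For reference, the existing matching-dtype truediv behavior can be verified directly against today's TF:
+
+```
+import tensorflow as tf
+
+a16 = tf.constant([1], dtype=tf.int16)
+print(tf.math.truediv(a16, a16).dtype)  # float32: 16-bit ints promote to f32
+
+a32 = tf.constant([1], dtype=tf.int32)
+print(tf.math.truediv(a32, a32).dtype)  # float64: 32-bit ints promote to f64
+```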
+
+~~Table: TF dtype promotion result of `tf.math.truediv` after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow. Rows/columns highlighted in blue only show the default result without overflow of inputs.~~
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| u8   | u8 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f32 | f64 | c128 |
+| u16  | u16 | f32 | f32 | f64 | f64 | f64 | f64 | f64 | f64 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f64 | c128 |
+| u32  | u32 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| u64  | u64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| i8   | i8 | f32 | f64 | f64 | f64 | f32 | f32 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f32 | f64 | c128 |
+| i16  | i16 | f32 | f64 | f64 | f64 | f32 | f32 | f64 | f64 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f64 | c128 |
+| i32  | i32 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| i64  | i64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| bf16 | bf16 | bf16 | f32 | f64 | f64 | bf16 | f32 | f64 | f64 | bf16 | f32 | f32 | f64 | c64 | c128 | bf16 | bf16 | c64 |
+| f16  | f16 | f16 | f32 | f64 | f64 | f16 | f32 | f64 | f64 | f32 | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
+| f32  | f32 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| c64  | c64 | c64 | c64 | c128 | c128 | c64 | c64 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
+| i*   | i64 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
+| f*   | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
+| c*   | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c128 | c128 | c128 |
+
+
+
+
+
+#### **~~Is it possible to avoid silent overflow in e.g. tf.constant(100, dtype=tf.uint8) + tf.constant(200, dtype=tf.uint8)?~~**
+
+~~In both JAX and TF, when overflow happens in float types, the result is an inf value:~~
+
+
+```
+jnp.array(4e4, 'float16') + jnp.array(4e4, 'float16') # inf
+tf.constant(4e4, 'float16') + tf.constant(4e4, 'float16') # inf
+```
+
+
+~~However if the dtype is integral, overflow is silent:~~
+
+
+```
+jnp.array(100, 'int8') + jnp.array(100, 'int8') # -56
+tf.constant(100, 'int8') + tf.constant(100, 'int8') # -56
+```
+
+
+
+
+
+
+### [Deprecated]: old propositions before 2022/09/05
+
+
+## **Design Proposal**
+
+We propose introducing two sets of flags that control the dtype promotion behaviors in TF:
+
+
+
+* `enable_implicit_promotion/disable_implicit_promotion`
+* `enable_promotion_to_float64/disable_promotion_to_float64`
+
+The two sets of flags influence the dtype promotion behaviors orthogonally. Combined together, the user can achieve four dtype promotion modes. In the following examples we will use `tf.add` to demonstrate the modes. The dunder method `__add__` is expected to have the same behavior as `tf.add`.
+
+
+### **Mode 1: enable\_implicit\_promotion + enable\_promotion\_to\_float64**
+
+In this mode, we allow all implicit promotions to happen in the op, even if both inputs have TF dtypes. To summarize the rules of this mode:
+
+
+
+* When the two inputs have non-matching TF dtypes, they will be promoted to the lowest width dtype that guarantees the precisions and ranges of the inputs. However, the promoted integer, float and complex dtype widths are capped at 64, 64 and 128 bits respectively. For example:
+ * The summation of `tf.bfloat16` and `tf.uint8` will be type `tf.bfloat16`.
+ * The summation of `tf.int64` and `tf.float64` will be capped at type `tf.float64` despite precision loss and overflow risks.
+* When one of the inputs is a python integer/float/complex, and the other is a TF dtype:
+ * If no overflow happens, the result dtype is loosely described with the following promotion direction: TF bool -> python integer -> TF signed/unsigned integers -> python float -> TF float -> python complex/TF complex.
+ * The dtype promotion is value-dependent. If the python scalar is over the range of the determined dtype, it is widened to prevent overflow. For example, `tf.constant([1], tf.int8) + 1` produces `tf.int8` while `tf.constant([1], tf.int8) + 1000` produces `tf.int16`.
+
+This mode is intended to provide a user experience similar to NumPy behaviors.
+
+Table: TF dtype promotion result of `tf.add` with enable\_implicit\_promotion + enable\_promotion\_to\_float64
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | bool | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| u8   | u8 | u8 | u16 | u32 | u64 | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u8 | f64 | c128 |
+| u16  | u16 | u16 | u16 | u32 | u64 | i32 | i32 | i32 | i64 | f32 | f32 | f32 | f64 | c64 | c128 | u16 | f64 | c128 |
+| u32  | u32 | u32 | u32 | u32 | u64 | i64 | i64 | i64 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | u32 | f64 | c128 |
+| u64  | u64 | u64 | u64 | u64 | u64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | u64 | f64 | c128 |
+| i8   | i8 | i16 | i32 | i64 | f64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i8 | f64 | c128 |
+| i16  | i16 | i16 | i32 | i64 | f64 | i16 | i16 | i32 | i64 | f32 | f32 | f32 | f64 | c64 | c128 | i16 | f64 | c128 |
+| i32  | i32 | i32 | i32 | i64 | f64 | i32 | i32 | i32 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | i32 | f64 | c128 |
+| i64  | i64 | i64 | i64 | i64 | f64 | i64 | i64 | i64 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | i64 | f64 | c128 |
+| bf16 | bf16 | bf16 | f32 | f64 | f64 | bf16 | f32 | f64 | f64 | bf16 | f32 | f32 | f64 | c64 | c128 | bf16 | bf16 | c64 |
+| f16  | f16 | f16 | f32 | f64 | f64 | f16 | f32 | f64 | f64 | f32 | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
+| f32  | f32 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| c64  | c64 | c64 | c64 | c128 | c128 | c64 | c64 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
+| i*   | i64 | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| f*   | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
+| c*   | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c128 | c128 | c128 |
+
+
+
+
+Highlighted: When one of the inputs is a python scalar, the dtype in the table is the default result without value-dependent promotion.
+
+
+### **Mode 2: enable\_implicit\_promotion + disable\_promotion\_to\_float64**
+
+Mode 2 follows the same rules as mode 1 except for one thing: no `tf.float64` or `tf.complex128` will appear in the program - the promoted integer, float and complex dtype widths are capped at 64, 32 and 64 bits respectively. The purpose of this mode is to avoid training slowdown due to the introduction of double precision floating point numbers in a machine learning environment.
+
+Table: TF dtype promotion result of `tf.add` with enable\_implicit\_promotion + disable\_promotion\_to\_float64
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | bool | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f32 | c64 | c64 | i64 | f32 | c64 |
+| u8   | u8 | u8 | u16 | u32 | u64 | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f32 | c64 | c64 | u8 | f32 | c64 |
+| u16  | u16 | u16 | u16 | u32 | u64 | i32 | i32 | i32 | i64 | f32 | f32 | f32 | f32 | c64 | c64 | u16 | f32 | c64 |
+| u32  | u32 | u32 | u32 | u32 | u64 | i64 | i64 | i64 | i64 | f32 | f32 | f32 | f32 | c64 | c64 | u32 | f32 | c64 |
+| u64  | u64 | u64 | u64 | u64 | u64 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | c64 | c64 | u64 | f32 | c64 |
+| i8   | i8 | i16 | i32 | i64 | f32 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f32 | c64 | c64 | i8 | f32 | c64 |
+| i16  | i16 | i16 | i32 | i64 | f32 | i16 | i16 | i32 | i64 | f32 | f32 | f32 | f32 | c64 | c64 | i16 | f32 | c64 |
+| i32  | i32 | i32 | i32 | i64 | f32 | i32 | i32 | i32 | i64 | f32 | f32 | f32 | f32 | c64 | c64 | i32 | f32 | c64 |
+| i64  | i64 | i64 | i64 | i64 | f32 | i64 | i64 | i64 | i64 | f32 | f32 | f32 | f32 | c64 | c64 | i64 | f32 | c64 |
+| bf16 | bf16 | bf16 | f32 | f32 | f32 | bf16 | f32 | f32 | f32 | bf16 | f32 | f32 | f32 | c64 | c64 | bf16 | bf16 | c64 |
+| f16  | f16 | f16 | f32 | f32 | f32 | f16 | f32 | f32 | f32 | f32 | f16 | f32 | f32 | c64 | c64 | f16 | f16 | c64 |
+| f32  | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | c64 | c64 | f32 | f32 | c64 |
+| f64  | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | c64 | c64 | f32 | f32 | c64 |
+| c64  | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 |
+| c128 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 |
+| i*   | i64 | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f32 | c64 | c64 | i64 | f32 | c64 |
+| f*   | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | bf16 | f16 | f32 | f32 | c64 | c64 | f32 | f32 | c64 |
+| c*   | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 |
+
+
+
+
+Highlighted: When one of the inputs is a python scalar, the dtype in the table is the default result without value-dependent promotion.
+
+
+### **Mode 3: disable\_implicit\_promotion + enable\_promotion\_to\_float64**
+
+The dtype behavior of mode 3 is the same as mode 1, except that implicit conversions between TF dtypes are disallowed.
+
+Table: TF dtype promotion result of `tf.add` with disable\_implicit\_promotion + enable\_promotion\_to\_float64
+
+
+
+
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | b | - | - | - | - | - | - | - | - | - | - | - | - | - | - | i64 | f64 | c128 |
+| u8   | - | u8 | - | - | - | - | - | - | - | - | - | - | - | - | - | u8 | f64 | c128 |
+| u16  | - | - | u16 | - | - | - | - | - | - | - | - | - | - | - | - | u16 | f64 | c128 |
+| u32  | - | - | - | u32 | - | - | - | - | - | - | - | - | - | - | - | u32 | f64 | c128 |
+| u64  | - | - | - | - | u64 | - | - | - | - | - | - | - | - | - | - | u64 | f64 | c128 |
+| i8   | - | - | - | - | - | i8 | - | - | - | - | - | - | - | - | - | i8 | f64 | c128 |
+| i16  | - | - | - | - | - | - | i16 | - | - | - | - | - | - | - | - | i16 | f64 | c128 |
+| i32  | - | - | - | - | - | - | - | i32 | - | - | - | - | - | - | - | i32 | f64 | c128 |
+| i64  | - | - | - | - | - | - | - | - | i64 | - | - | - | - | - | - | i64 | f64 | c128 |
+| bf16 | - | - | - | - | - | - | - | - | - | bf16 | - | - | - | - | - | bf16 | bf16 | c64 |
+| f16  | - | - | - | - | - | - | - | - | - | - | f16 | - | - | - | - | f16 | f16 | c64 |
+| f32  | - | - | - | - | - | - | - | - | - | - | - | f32 | - | - | - | f32 | f32 | c64 |
+| f64  | - | - | - | - | - | - | - | - | - | - | - | - | f64 | - | - | f64 | f64 | c128 |
+| c64  | - | - | - | - | - | - | - | - | - | - | - | - | - | c64 | - | c64 | c64 | c64 |
+| c128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | c128 | c128 | c128 | c128 |
+| i*   | i64 | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| f*   | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
+| c*   | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c128 | c128 | c128 |
+
+
+
+
+Note: for the python scalar rows/columns (`i*`, `f*`, `c*`), the dtype in the table is the default result without value-dependent promotion.
+
+
+### **Mode 4: disable\_implicit\_promotion + disable\_promotion\_to\_float64**
+
+The dtype behavior of mode 4 is the same as mode 2, except that implicit conversions between TF dtypes are disallowed.
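+
+For contrast with mode 3, a small sketch of the expected results under mode 4. The dtypes in the comments are read off the table that follows; no concrete API for setting the flag pair is assumed here.
+
+```
+import tensorflow as tf
+
+x = tf.constant([1], dtype=tf.int32)
+# Mode 3:  tf.add(x, 1.0)  -> float64
+# Mode 4:  tf.add(x, 1.0)  -> float32   (promotion to float64 is disabled)
+# Both:    tf.add(x, tf.constant([1], tf.int64)) raises an error,
+#          because implicit Tensor-Tensor promotion is disabled.
+```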
+
+Table: TF dtype promotion result of `tf.add` with disable\_implicit\_promotion + disable\_promotion\_to\_float64
+
+
+
+
+
+|      | b    | u8   | u16  | u32  | u64  | i8   | i16  | i32  | i64  | bf16 | f16  | f32  | f64  | c64  | c128 | i*   | f*   | c*   |
+| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
+| b    | b    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | i64  | f32  | c64  |
+| u8   | -    | u8   | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | u8   | f32  | c64  |
+| u16  | -    | -    | u16  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | u16  | f32  | c64  |
+| u32  | -    | -    | -    | u32  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | u32  | f32  | c64  |
+| u64  | -    | -    | -    | -    | u64  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | u64  | f32  | c64  |
+| i8   | -    | -    | -    | -    | -    | i8   | -    | -    | -    | -    | -    | -    | -    | -    | -    | i8   | f32  | c64  |
+| i16  | -    | -    | -    | -    | -    | -    | i16  | -    | -    | -    | -    | -    | -    | -    | -    | i16  | f32  | c64  |
+| i32  | -    | -    | -    | -    | -    | -    | -    | i32  | -    | -    | -    | -    | -    | -    | -    | i32  | f32  | c64  |
+| i64  | -    | -    | -    | -    | -    | -    | -    | -    | i64  | -    | -    | -    | -    | -    | -    | i64  | f32  | c64  |
+| bf16 | -    | -    | -    | -    | -    | -    | -    | -    | -    | bf16 | -    | -    | -    | -    | -    | bf16 | bf16 | c64  |
+| f16  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | f16  | -    | -    | -    | -    | f16  | f16  | c64  |
+| f32  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | f32  | -    | -    | -    | f32  | f32  | c64  |
+| f64  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | f64  | -    | -    | f32  | f32  | c64  |
+| c64  | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | c64  | -    | c64  | c64  | c64  |
+| c128 | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | -    | c128 | c64  | c64  | c64  |
+| i*   | i64  | u8   | u16  | u32  | u64  | i8   | i16  | i32  | i64  | bf16 | f16  | f32  | f32  | c64  | c64  | i64  | f32  | c64  |
+| f*   | f32  | f32  | f32  | f32  | f32  | f32  | f32  | f32  | f32  | bf16 | f16  | f32  | f32  | c64  | c64  | f32  | f32  | c64  |
+| c*   | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  | c64  |
+
+
+
+
+Note: for the python scalar rows/columns (`i*`, `f*`, `c*`), the dtype in the table is the default result without value-dependent promotion.
From 0a583a2856d79d66b90e50d2fa1944db7457cac1 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 13:38:37 -0700
Subject: [PATCH 02/20] Update 2022-10-18-promotion-semantics.md
---
rfcs/2022-10-18-promotion-semantics.md | 31 --------------------------
1 file changed, 31 deletions(-)
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/2022-10-18-promotion-semantics.md
index c9f010b20..8e1619081 100644
--- a/rfcs/2022-10-18-promotion-semantics.md
+++ b/rfcs/2022-10-18-promotion-semantics.md
@@ -5,37 +5,6 @@
## RFC: Making dtype promotion semantics in Tensorflow more consistent
-```
-
-Username
-Role
-Status
-Last Change
-rohanj
-Approver
-PENDING
-2022-09-12
-wangpeng
-Approver
-APPROVED
-2022-09-20
-yanhuasun
-Approver
-PENDING
-2022-09-12
-fchollet
-Reviewer
-PENDING
-2022-09-12
-vanderplas
-Reviewer
-PENDING
-2022-09-12
-#begin-approvals-addon-section
-Using go/DocWrap. Table auto-updated hourly. Don't edit this section and don't add comments in it.
-```
-
-
**Status**: Draft | In Review | Approved | Final | Obsolete
**Authors**: , LDAP
From cda6d73572255599ed3780e899b083e9902f348b Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 13:41:15 -0700
Subject: [PATCH 03/20] Update 2022-10-18-promotion-semantics.md
---
rfcs/2022-10-18-promotion-semantics.md | 8712 ------------------------
1 file changed, 8712 deletions(-)
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/2022-10-18-promotion-semantics.md
index 8e1619081..6173b245b 100644
--- a/rfcs/2022-10-18-promotion-semantics.md
+++ b/rfcs/2022-10-18-promotion-semantics.md
@@ -1,8716 +1,4 @@
-
-
-
-
-## RFC: Making dtype promotion semantics in Tensorflow more consistent
-
-
-**Status**: Draft | In Review | Approved | Final | Obsolete
-
-**Authors**: , LDAP
-
-**Contributors**:
-
-**Sponsors**: , , LDAP
-
-**Last Updated**:
-
-go/tf-np-style-promotion
-
-
-## **Note: the main design has been updated on 2020-09-05. We updated the proposed modes. The deprecated modes can be seen in [the bottom of this doc](#bookmark=id.kp5bjdbldb2g). **
-
-
-## **Acknowledgement**
-
-This is written by referencing investigations by ex-Googler , as well as projects TF-numpy and JAX:
-
-[Implicit promotion, inference and conversion in tensorflow](https://docs.google.com/document/d/1jOXJ1YAQAtseyYDDGm4gsr7xJ4F2FtvfwZJgNsW8cVA/edit#heading=h.lloqtrv9juf1)
-
-[tf.experimental.numpy](https://www.tensorflow.org/api_docs/python/tf/experimental/numpy)
-
-[Design of Type Promotion Semantics for JAX](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#)
-
-Specifically, thanks to TF-numpy already implemented the majority of the proposed dtype promotion behaviors.
-
## **Objective**
Currently TF has no consistent, well-defined type promotion rules. This document proposes a well-defined, consistent and clear dtype promotion rule for TF. The introduced changes make TF APIs more similar to NumPy, with some differences that emphasize TF’s applications in machine learning. This should make dtype promotions in TF much more consistent and predictable.
-
-[**What the doc is**] This doc discusses the preferred dtype promotion semantics/behaviors of Tensorflow (TF) and Tensorflow-numpy (TF-numpy) in the binary ops including add, sub, mul, div, pow and mod.
-
-[**What the doc is not**] This doc does not discuss the implementation plans.
-
-
-## **Motivation**
-
-In Tensorflow’s APIs, dtype promotion is very broken compared to JAX or NumPy. Many users have complained about these surprising behaviors (example: go/broccoli-tf-add, b/158346631, b/154456219). See [this doc ](https://docs.google.com/document/d/1jOXJ1YAQAtseyYDDGm4gsr7xJ4F2FtvfwZJgNsW8cVA/edit#)for a more detailed description.
-
-[TF-numpy](https://www.tensorflow.org/guide/tf_numpy#type_promotion) is a strategic project in Tensorflow. Compared to Tensorflow, TF-numpy’s dtype promotion behavior is more consistent because it [mainly relies on NumPy’s dtype promotion semantics](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/numpy_ops/np_dtypes.py;rcl=399530502;l=112). In the [long-term vision doc](https://goto.google.com/tf-numpy-path-forward), unifying the dtype promotion semantics of TF and TF-numpy is a necessary first step.
-
-The unification of TF and TF-numpy’s dtype promotion semantics is a great chance to think about what semantics should be the long-term, user-friendly solution for TF.
-
-
-## **User Benefit**
-
-There have been ongoing complaints about the inconsistent dtype promotion behaviors of TF. To give an example, switching the inputs’ positions directly affects the outcome:
-
-
-```
-c = tf.constant(1.0)
-tf.add(c, 3) # tf.Tensor(4.0, shape=(), dtype=float32)
-tf.add(3, c) # InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a int32 tensor but is a float tensor [Op:AddV2]
-```
-
-
-For brevity we will sometimes use shortened terms to represent the dtypes as shown in the following examples:
-
-
-
-* `b` means tf.bool
-* `u8` means tf.uint8
-* `i16` means tf.int16
-* `bf16` means [tf.bfloat16](https://cloud.google.com/tpu/docs/bfloat16)
-* `f64` means tf.float64
-* `c64` means tf.complex64
-* `i*` means python int
-* `f*` means python float
-* `c*` means python complex
-
-The table below summarizes the dtype promotion behavior between two TF tensors/python scalars with method `__add__` and API `tf.add` respectively. Each row and column represents a combination of input types.
-
-Table: TF dtype promotion result of dunder method `__add__` and `tf.add`
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u8
- |
- -
- |
- u8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u8
- |
- -
- |
- -
- |
-
-
- u16
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
-
-
- u32
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
-
-
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
-
-
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
-
-
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
-
-
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
-
-
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
-
-
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- bf16
- |
- -
- |
-
-
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- f16
- |
- -
- |
-
-
- f32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- -
- |
-
-
- f64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f64
- |
- -
- |
- -
- |
- f64
- |
- f64
- |
- -
- |
-
-
- c64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
- i*
- |
- -
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i32
- |
-
- |
-
- |
-
-
- f*
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f32
- |
-
- |
-
-
- c*
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c64
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-In the table, the dash symbol `-` means unsupported: an error gets raised if the two dtypes are used in the call to that op. The cells highlighted in red are only supported in `__add__`, and the cells in yellow are only supported in `tf.add`. In summary, the existing TF APIs have these significant issues:
-
-
-
-1. Boolean dtypes are not supported;
-2. Implicit conversions between two TF dtypes are strictly disallowed, resulting in verbose code at times;
-3. Implicit conversions from some python dtypes to some TF dtypes are disallowed;
-4. `tf.add` is not even commutative: for some dtype pairs, one argument order raises an error while the reversed order returns a correct value.
-
-When the inputs involve NumPy arrays, the behaviors can be even more broken. For brevity we have included NumPy-input behaviors in the appendix at the end of the doc.
-
-We aim to make the system commutative, predictable and correct.
-
-
-## **Design Proposal**
-
-We propose introducing three dtype promotion modes in TF:
-
-
-
-* `tf.ImplicitPromotion.ALL`
-* `tf.ImplicitPromotion.SAFE`
-* `tf.ImplicitPromotion.NONE`
-
-The three modes determine how often implicit dtype promotions happen in TF APIs. In the following examples we will use `tf.add` to demonstrate the modes. The dunder method `__add__` is expected to have the same behavior as `tf.add`. For brevity, NumPy (np) array inputs are not discussed in this section. However the proposed modes in this RFC will treat the np array inputs in the same way as the tensor inputs (see more in the appendix).
-
-Table: TF dtype promotion result of `tf.add` after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow. Rows/columns highlighted in blue only show the default result without overflow of inputs.
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- u8
- |
- u8
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- u8
- |
- f64
- |
- c128
- |
-
-
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u32
- |
- u64
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- u16
- |
- f64
- |
- c128
- |
-
-
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- u32
- |
- f64
- |
- c128
- |
-
-
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- u64
- |
- f64
- |
- c128
- |
-
-
- i8
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- f64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i8
- |
- f64
- |
- c128
- |
-
-
- i16
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- f64
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i16
- |
- f64
- |
- c128
- |
-
-
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f64
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- i32
- |
- f64
- |
- c128
- |
-
-
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- bf16
- |
- bf16
- |
- c64
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- f32
- |
- f64
- |
- f64
- |
- f16
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f16
- |
- f16
- |
- c64
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
- i*
- |
- i64
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- f*
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c*
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-
-### **Mode 1: tf.ImplicitPromotion.ALL**
-
-In this mode, we allow all implicit promotions to happen in the op, even if both inputs have TF dtypes. To summarize the rules of this mode:
-
-
-
-* When the two inputs have non-matching TF dtypes, they will be promoted to the lowest width dtype that guarantees the precisions and ranges of the inputs. However, the promoted integer, float and complex dtype widths are capped at 64, 64 and 128 bits respectively. For example:
- * The summation of `tf.bfloat16` and `tf.uint8` will be type `tf.bfloat16`.
- * The summation of `tf.int64` and `tf.float64` will be capped at type `tf.float64` despite precision loss and overflow risks.
-* When one of the inputs is a python integer/float/complex, and the other is a TF dtype:
- * If the python scalar value falls within the range of the TF dtype, the result dtype is loosely described with the following promotion direction: TF bool -> python integer -> TF signed/unsigned integers -> python float -> TF float -> python complex/TF complex.
- * The dtype promotion is allowed only if the python scalar is within the range of the determined dtype. Otherwise, an exception is raised. For example, `tf.constant([1], tf.int8) + 1` produces a `tf.int8` Tensor, while `tf.constant([1], tf.int8) + 1000` raises an error.
-
-This mode is intended to provide a user experience similar to NumPy behaviors.
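-
-A small, self-contained sketch of the range check described in the last bullet above (the helper name is illustrative and not a TF API; it only relies on the `min`/`max` attributes of `tf.DType`):
-
-```
-import tensorflow as tf
-
-def scalar_fits(dtype, value):
-  """True if a python int lies inside dtype's representable range."""
-  return dtype.min <= value <= dtype.max
-
-print(scalar_fits(tf.int8, 1))      # True  -> tf.constant([1], tf.int8) + 1 stays int8
-print(scalar_fits(tf.int8, 1000))   # False -> the same op with 1000 raises an error
-```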
-
-
-### **Mode 2: tf.ImplicitPromotion.SAFE**
-
-Mode `SAFE` follows the same rules as mode `ALL`, but disallows “unsafe” dtype promotions. In general, we think the following two types of conversions are unsafe:
-
-
-
-* If the inputs are both Tensors and the result dtype would be “wider” than all the inputs. For example, the summation of `tf.uint8` and `tf.int8` would be type `tf.int16`. These dtype promotions risk increasing the model’s memory consumption and slowing down computation speed.
-* If the inputs are both Tensors and the result dtype cannot preserve all the precisions of the inputs. For example, the summation of `tf.uint64` and `tf.float32` would be type `tf.float64`.
-
-With the above principle, if one of the inputs is a Tensor while the other is a python scalar, we only allow implicit promotion if the scalar can fit into the Tensor’s dtype. For example:
-
-
-
-* The summation between a `tf.int32` and a python float is always disallowed;
-* The summation between a `tf.uint8` and a python int is allowed only if the python int is between 0 and 255.
-
-For these disallowed dtype promotions, we require the users to explicitly cast them.
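-
-For example, under `SAFE` a `tf.uint8`/`tf.int8` mix has to be written with an explicit cast (plain, existing TF APIs; the choice of `tf.int16` as the common dtype is the user's):
-
-```
-import tensorflow as tf
-
-a = tf.constant([1], dtype=tf.uint8)
-b = tf.constant([2], dtype=tf.int8)
-c = tf.add(tf.cast(a, tf.int16), tf.cast(b, tf.int16))   # explicitly chosen int16 result
-```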
-
-Mode `SAFE` has another advantage: mode `ALL` can be silently nonassociative in the following scenario, while mode `SAFE` avoids it by not allowing any dtype widening:
-
-
-
-* `(tf.zeros(tf.uint8) + tf.zeros(tf.int8)) + tf.zeros(tf.float16)` evaluates to dtype `tf.float32`;
-* `tf.zeros(tf.uint8) + (tf.zeros(tf.int8) + tf.zeros(tf.float16))` evaluates to dtype `tf.float16`.
-
-
-### **Mode 3: tf.ImplicitPromotion.NONE**
-
-The dtype behavior of mode `NONE` is the most conservative: no dtype promotion is allowed between Tensors. When the inputs include python scalars, it follows the same rule as mode `SAFE`.
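-
-A short sketch of what `NONE` accepts and rejects (the accepted line also runs with today's TF; the rejected lines describe the proposed behavior):
-
-```
-import tensorflow as tf
-
-x = tf.constant([1], dtype=tf.uint8)
-y = x + 7                            # allowed: 7 fits into uint8, result stays uint8
-# x + 300                            # rejected: 300 does not fit into uint8
-# x + tf.constant([1], tf.int8)      # rejected: Tensor-Tensor promotion is disabled
-```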
-
-
-### **Alternatives Considered: Capping the promotion system at float32**
-
-The TF-NumPy project contains another flag [allow_float64](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/numpy_ops/np_dtypes.py;rcl=399530502;l=88). When disabled, it caps the converted floating numbers to 32 bits, and complex numbers to 64 bits. This is useful when users want to enjoy implicit conversions in all cases and avoid any performance regressions related to double precision floating point numbers. This feature is now used by [Trax](https://b.corp.google.com/issues/178862061).
-
-Our current plan does not involve adding this flag in TF, as it’s orthogonal to the proposed modes. If there are more user needs we can consider exposing it in TF as well.
-
-
-### **Alternatives Considered: A dtype promotion “lattice”**
-
-Unlike the approaches above, JAX used a dtype promotion lattice to define its semantics. More details can be seen in their [design doc](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#properties-of-a-type-promotion-lattice). Their lattice system has these advantages:
-
-
-
-* A lattice can be much more concise to describe the promotion semantics;
-* The lattice can ensure associativity as well as commutativity. In contrast, the promotion behaviors in this proposal are not associative:
- * `(tf.zeros(tf.uint8) + tf.zeros(tf.int8)) + tf.zeros(tf.float16)` evaluates to dtype `tf.float32`;
- * `tf.zeros(tf.uint8) + (tf.zeros(tf.int8) + tf.zeros(tf.float16))` evaluates to dtype `tf.float16`.
-
-~~However the lattice system also introduced extra complexity in the design. For example:~~
-
-
-
-* ~~JAX ended up introducing the concepts of “weak types” in place of python types;~~
-* ~~JAX adopted very aggressive dtype promotion semantics - an int64 dtype can be promoted to a float16, resulting in higher risks of overflow.~~
-
-~~Since our ultimate goal is to help the users maintain a clear and concise mental model, adopting a dtype promotion table is an easier approach.~~
-
-Compared to a lattice system, the proposed table-based system does not guarantee associativity. However, we can avoid such usage scenarios in the proposed SAFE mode (no implicit promotion to wider dtypes). As a result, it’s not clearly justifiable to switch to a lattice system from the table-based approach, which already exists in project TF-NumPy.
-
-
-### **Alternatives Considered: Value-dependent promotion (a.k.a. “safe casting” in NumPy)**
-
-NumPy has a feature “safe casting”: the dtype promotion API, [numpy.result_type](https://numpy.org/doc/stable/reference/generated/numpy.result_type.html), can return a different dtype depending on the input python scalar value:
-
-
-```
-np.result_type(np.int8, 200) # dtype('int16')
-np.result_type(np.int8, 100) # dtype('int8')
-```
-
-
-This slightly reduces the risk of overflow. However, the result is unpredictable. NumPy is considering removing this feature in a future release (see proposal: [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html)). JAX [did not adopt this feature](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#enter-python-scalars) either. We omit it in this design.
-
-
-### **Performance Implications**
-
-For an existing piece of TF code in python, we do not expect the performance to change after switching to any of the proposed modes. This is because all the allowed implicit dtype promotions in TF will stay unchanged. As to the currently disallowed dtype promotions, users must have used explicit dtype promotions which are not affected by the change.
-
-When users develop a new piece of code, however, the new modes have higher chances of computation slowdown due to implicit promotions in mode `ALL`.
-
-We can carry out the following steps to mitigate user surprises:
-
-
-
-* Add benchmark tests to verify that the new modes won’t affect the performance of existing TF code.
-* Publish tutorials that help users select the most appropriate dtype promotion modes.
-* [optional] Add an API that instruments the number of implicit, unsafe dtype conversions in the program (e.g. promotions from 32 bits to 64 bits). This helps users understand whether the promotions contribute to computation slowdown in the graphs.
-
-
-### **Dependencies**
-
-This proposal is not expected to affect dependencies to Tensorflow.
-
-
-#### **Engineering Impact**
-
-
-We do not expect changes in binary size/startup time/build time/test time.
-
-The majority of the dtype promotion logic already exists in Tensorflow under namescope `tensorflow.experimental.numpy`. However, refactoring/updates will be needed under directory tensorflow/python/ops.
-
-
-#### **Platforms and Environments**
-
-This proposal is expected to take effect on all platforms in the python environment.
-
-
-#### **Best Practices/Tutorials and Examples**
-
-As mentioned in section “Performance Implications” we will publish tutorials that help users select the most appropriate dtype promotion modes.
-
-
-#### **Compatibility/User Impact**
-
-This is a breaking change. As mentioned above, existing code is expected to continue working. However, some tests will fail because more implicit dtype conversions are allowed. We can mitigate the user impact by adding a flag that sticks to the existing dtype promotion behaviors.
-
-The proposed changes will be rolled out in various stages:
-
-
-
-1. Default behaviors stay unchanged, with the flags introduced under namescope `tf.experimental` to collect user feedback;
-2. Default behaviors switched to one of the modes above, while the old behaviors can be re-enabled via a flag;
-3. [optional] Completely disable the old, broken dtype promotion behaviors.
-
-
-## **Questions and Discussion Topics**
-
-
-#### **Are there changes expected in tf.Variable?**
-
-In APIs such as `tf.add` and dunder methods `__add__`, the same changes are expected to take place because these methods first read the Variable into a Tensor, then carry out the Tensor-to-Tensor computations.
-
-Currently, inplace ops such as `assign_add` are also broken. The two tables below show the result of `assign_add` when the input is a Tensor, a python scalar or a NumPy array:
-
-
-
-* When the input is a Tensor, no dtype conversion is allowed.
-* When the input is a python scalar, it defers to the variable’s dtype. However, python float cannot be converted to int, and complex cannot be converted to either int or float dtypes.
-* When the input is a NumPy array, it always gets converted to the variable’s dtype regardless of any precision loss (for example `tf.Variable(1) + np.array(1.5)` returns 2).
-
-Table: Result of tf.Variable (rows) inplace op `assign_add` with a Tensor or python scalar as the argument (columns).
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u8
- |
- -
- |
- u8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u8
- |
- -
- |
- -
- |
-
-
- u16
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
-
-
- u32
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
-
-
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
-
-
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
-
-
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
-
-
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
-
-
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
-
-
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- bf16
- |
- -
- |
-
-
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- f16
- |
- -
- |
-
-
- f32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- -
- |
-
-
- f64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f64
- |
- -
- |
- -
- |
- f64
- |
- f64
- |
- -
- |
-
-
- c64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-Table: Result of tf.Variable (rows) inplace op `assign_add` with a NumPy array as the argument (columns).
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
-
-
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- -
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
-
-
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- -
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
-
-
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- -
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
-
-
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- -
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
-
-
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- -
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
-
-
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- -
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
-
-
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- -
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
-
-
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- -
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- -
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- -
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- -
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
-
-
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- -
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- -
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-We can make these ops more consistent as well, following the rules in the three proposed modes. However, we also have another constraint: the Variable’s dtype cannot change. As a result, any dtype promotion that results in a dtype different from the Variable’s dtype is disabled.
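-
-A sketch of this constraint (the implicit `assign_add` promotions in the comments are the proposed behavior read off the table below, not today's; the explicit-cast line uses plain existing TF):
-
-```
-import tensorflow as tf
-
-v = tf.Variable(tf.constant([1], dtype=tf.int32))
-# Proposed ALL mode: v.assign_add(tf.constant([1], tf.int8)) is allowed, because
-# int8 + int32 promotes to int32, which matches the Variable's dtype.
-# Proposed (all modes): v.assign_add(tf.constant([1], tf.int64)) stays an error,
-# because the promoted dtype int64 cannot replace an int32 Variable.
-v.assign_add(tf.cast(tf.constant([1], dtype=tf.int8), tf.int32))   # explicit cast works today
-```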
-
-Table: Result of tf.Variable (rows) inplace op `assign_add` with a Tensor or python scalar as the argument (columns) after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow.
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u8
- |
- u8
- |
- u8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u8
- |
- -
- |
- -
- |
-
-
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
-
-
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
-
-
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
-
-
- i8
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
-
-
- i16
- |
- i16
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
-
-
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- -
- |
- -
- |
- i32
- |
- i32
- |
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
-
-
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- -
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- bf16
- |
- -
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- f16
- |
- -
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- f32
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- -
- |
-
-
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- -
- |
- -
- |
- f64
- |
- f64
- |
- -
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- -
- |
- -
- |
- c64
- |
- c64
- |
- -
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
- -
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-
-#### **Are there also issues in conversions between Tensor and numpy arrays?**
-
-There exists arguably more serious issues when it comes to the mixture of Tensor and numpy arrays - wrong results can appear silently:
-
-
-```
-a = np.array(3.1)
-c = tf.constant(1)
-print(tf.add(a, c)) # InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a double tensor but is a int32 tensor [Op:AddV2]
-print(tf.add(c, a)) # tf.Tensor(4, shape=(), dtype=int32) - Even worse than exceptions
-```
-
-
-To give a comprehensive overview, the two tables below show the results of `tf.add(tensor, numpy array)` and `tf.add(numpy array, tensor)` :
-
-Table: TF dtype promotion result of `tf.add` with a Tensor as the first argument (rows) and np array in the second argument (columns).
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
-
-
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- -
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
- u8
- |
-
-
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- -
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u16
- |
-
-
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- -
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
-
-
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- -
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
-
-
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- -
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
- i8
- |
-
-
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- -
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
- i16
- |
-
-
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- -
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i32
- |
-
-
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- -
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- -
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
- bf16
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- -
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
- f16
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- -
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
-
-
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- -
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- -
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-Table: TF dtype promotion result of `tf.add` with a np array as the first argument (rows) and tensor in the second argument (columns).
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
-
-
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u8
- |
- -
- |
- u8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u16
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u32
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
-
-
- f32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- -
- |
- -
- |
- -
- |
-
-
- f64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f64
- |
- -
- |
- -
- |
-
-
- c64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c64
- |
- -
- |
-
-
- c128
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c128
- |
-
-
-
-
-To briefly describe the issues:
-
-
-
-1. dtypes `tf.bfloat16` and `tf.bool` are poorly supported;
-2. If the first input is a tensor, the second input (the np array) is always converted to the tensor’s dtype, resulting in significant precision loss in many scenarios;
-3. If the first input is an np array and the second is a tensor, the np array is first converted to a tensor with the corresponding dtype. TF then does not attempt any implicit dtype conversion, causing `InvalidArgumentError` whenever the two inputs have mismatching dtypes.
-
-Though the behaviors with np array inputs are not the same as with python scalars, they share the same root cause: at present, TF does not have a centralized dtype promotion system, and incorrectly uses the [dtype of the first Tensor in its list of arguments](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/math_ops.py;rcl=449753780;l=1371) to promote all arguments to. The proposed modes in this RFC will treat the np array inputs in the same way as the tensor inputs.
-
-
-#### **What happens to e.g. `bool - bool` or `bool / bool`?**
-
-With NumPy:
-
-
-
-* Addition between two Booleans: equivalent to OR
-* Subtraction between two Booleans: forbidden
-* Multiplication between two Booleans: AND
-* Division between two Booleans: converted to float64 then divide
-
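-These NumPy behaviors can be checked directly (plain NumPy, runnable as-is):
-
-```
-import numpy as np
-
-t, f = np.bool_(True), np.bool_(False)
-print(t + f)    # True   (addition acts as logical OR)
-print(t * f)    # False  (multiplication acts as logical AND)
-print(t / t)    # 1.0    (division promotes both operands to float64)
-# t - f         # raises TypeError (boolean subtract is forbidden)
-```
-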
-
-#### **What happens to truediv operations?**
-
-The op truediv is special because it implicitly promotes integral types. For example, when the inputs have matching dtypes, 16-bit integral types are promoted to `f32`, and 32-bit integral types are promoted to `f64`.
-
-Using this rule of thumb, we can easily deduce the promotion behavior between non-matching dtypes when this proposal is landed. For example:
-
-
-
-* `int8` and `int16` -> both `int16` -> `f32` (allowed in `SAFE/ALL` modes)
-* `uint8` and `uint32` -> both `uint32` -> `f64` (allowed in `SAFE/ALL` modes)
-* `int8` and `i*` -> both `int8` -> `f32` (allowed in `NONE/SAFE/ALL` modes)
-* `int8` and `uint16` -> both `int32` -> `f64` (allowed in `ALL` mode)
-
-In the above examples, the first arrow represents the dtype promotion in this proposal, and the second arrow represents the promotion in op truediv. Note that the result for `int8` and `uint16`, `f64`, turns out to be overkill, but it is consistent with the combination of the dtype promotion rules and the truediv promotion rules.
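-
-The matching-dtype rule of thumb can be verified with today's TF, using the standard `tf.math.truediv`:
-
-```
-import tensorflow as tf
-
-a16 = tf.constant([1], dtype=tf.int16)
-b16 = tf.constant([2], dtype=tf.int16)
-print(tf.math.truediv(a16, b16).dtype)   # <dtype: 'float32'>
-
-a32 = tf.constant([1], dtype=tf.int32)
-b32 = tf.constant([2], dtype=tf.int32)
-print(tf.math.truediv(a32, b32).dtype)   # <dtype: 'float64'>
-```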
-
-~~Table: TF dtype promotion result of `tf.math.truediv` after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow. Rows/columns highlighted in blue only show the default result without overflow of inputs.~~
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- u8
- |
- u8
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f64
- |
- c128
- |
-
-
- u16
- |
- u16
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f64
- |
- c128
- |
-
-
- u32
- |
- u32
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- u64
- |
- u64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- i8
- |
- i8
- |
- f32
- |
- f64
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f64
- |
- c128
- |
-
-
- i16
- |
- i16
- |
- f32
- |
- f64
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f64
- |
- c128
- |
-
-
- i32
- |
- i32
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- i64
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- bf16
- |
- bf16
- |
- c64
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- f32
- |
- f64
- |
- f64
- |
- f16
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f16
- |
- f16
- |
- c64
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
- i*
- |
- i64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- f*
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c*
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-
-#### **~~Is it possible to avoid silent overflow in e.g. `tf.constant(100, dtype=tf.uint8) + tf.constant(200, dtype=tf.uint8)`?~~**
-
-~~In both JAX and TF, when overflow happens in float types, the result is an inf value:~~
-
-
-```
-jnp.array(4e4, 'float16') + jnp.array(4e4, 'float16') # inf
-tf.constant(4e4, 'float16') + tf.constant(4e4, 'float16') # inf
-```
-
-
-~~However if the dtype is integral, overflow is silent:~~
-
-
-```
-jnp.array(100, 'int8') + jnp.array(100, 'int8') # -56
-tf.constant(100, 'int8') + tf.constant(100, 'int8') # -56
-```
-
-
-~~ ~~
-
-
-
-
-### [Deprecated]: old propositions before 2022/09/05
-
-
-## **Design Proposal**
-
-We propose introducing two sets of flags that control the dtype promotion behaviors in TF:
-
-
-
-* `enable_implicit_promotion/disable_implicit_promotion`
-* `enable_promotion_to_float64/disable_promotion_to_float64`
-
-The two sets of flags influence the dtype promotion behaviors orthogonally. Combined together, the user can achieve four dtype promotion modes. In the following examples we will use `tf.add` to demonstrate the modes. The dunder method `__add__` is expected to have the same behavior as `tf.add`.
-
-
-### **Mode 1: enable\_implicit\_promotion + enable\_promotion\_to\_float64**
-
-In this mode, we allow all implicit promotions to happen in the op, even if both inputs have TF dtypes. To summarize the rules of this mode:
-
-
-
-* When the two inputs have non-matching TF dtypes, they will be promoted to the lowest width dtype that guarantees the precisions and ranges of the inputs. However, the promoted integer, float and complex dtype widths are capped at 64, 64 and 128 bits respectively. For example:
- * The summation of `tf.bfloat16` and `tf.uint8` will be type `tf.bfloat16`.
- * The summation of `tf.int64` and `tf.float64` will be capped at type `tf.float64` despite precision loss and overflow risks.
-* When one of the inputs is a python integer/float/complex, and the other is a TF dtype:
- * If no overflow happens, the result dtype is loosely described with the following promotion direction: TF bool -> python integer -> TF signed/unsigned integers -> python float -> TF float -> python complex/TF complex.
- * The dtype promotion is value-dependent. If the python scalar is over the range of the determined dtype, it is widened to prevent overflow. For example, `tf.constant([1], tf.int8) + 1` produces `tf.int8` while `tf.constant([1], tf.int8) + 1000` produces `tf.int16`.
-
-This mode is intended to provide a user experience similar to NumPy behaviors.
-
-Table: TF dtype promotion result of `tf.add` with enable\_implicit\_promotion + enable\_promotion\_to\_float64
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- bool
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- u8
- |
- u8
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- u8
- |
- f64
- |
- c128
- |
-
-
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u32
- |
- u64
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- u16
- |
- f64
- |
- c128
- |
-
-
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- u32
- |
- f64
- |
- c128
- |
-
-
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- u64
- |
- f64
- |
- c128
- |
-
-
- i8
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- f64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i8
- |
- f64
- |
- c128
- |
-
-
- i16
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- f64
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i16
- |
- f64
- |
- c128
- |
-
-
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f64
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- i32
- |
- f64
- |
- c128
- |
-
-
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f32
- |
- f64
- |
- f64
- |
- bf16
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- bf16
- |
- bf16
- |
- c64
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- f32
- |
- f64
- |
- f64
- |
- f16
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f16
- |
- f16
- |
- c64
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f64
- |
- f64
- |
- f32
- |
- f32
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- c128
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
- i*
- |
- i64
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- f*
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c*
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-Highlighted: When one of the inputs is a python scalar, the dtype in the table is the default result without value-dependent promotion.
-
-
-### **Mode 2: enable\_implicit\_promotion + disable\_promotion\_to\_float64**
-
-Mode 2 follows the same rules as mode 1 except for one thing: no `tf.float64` or `tf.complex128` will appear in the program - the promoted integer, float and complex dtype widths are capped at 64, 32 and 64 bits respectively. The purpose of this mode is to avoid training slowdown due to the introduction of double precision floating point numbers in a machine learning environment.
-
-Table: TF dtype promotion result of `tf.add` with enable\_implicit\_promotion + disable\_promotion\_to\_float64
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- bool
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i64
- |
- f32
- |
- c64
- |
-
-
- u8
- |
- u8
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- u8
- |
- f32
- |
- c64
- |
-
-
- u16
- |
- u16
- |
- u16
- |
- u16
- |
- u32
- |
- u64
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- u16
- |
- f32
- |
- c64
- |
-
-
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u32
- |
- u64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- u32
- |
- f32
- |
- c64
- |
-
-
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- u64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- u64
- |
- f32
- |
- c64
- |
-
-
- i8
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- f32
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i8
- |
- f32
- |
- c64
- |
-
-
- i16
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- f32
- |
- i16
- |
- i16
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i16
- |
- f32
- |
- c64
- |
-
-
- i32
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f32
- |
- i32
- |
- i32
- |
- i32
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i32
- |
- f32
- |
- c64
- |
-
-
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f32
- |
- i64
- |
- i64
- |
- i64
- |
- i64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i64
- |
- f32
- |
- c64
- |
-
-
- bf16
- |
- bf16
- |
- bf16
- |
- f32
- |
- f32
- |
- f32
- |
- bf16
- |
- f32
- |
- f32
- |
- f32
- |
- bf16
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- bf16
- |
- bf16
- |
- c64
- |
-
-
- f16
- |
- f16
- |
- f16
- |
- f32
- |
- f32
- |
- f32
- |
- f16
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- f16
- |
- f16
- |
- c64
- |
-
-
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- f64
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- i*
- |
- i64
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i64
- |
- f32
- |
- c64
- |
-
-
- f*
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- c*
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
-
-
-
-
-Highlighted: When one of the inputs is a python scalar, the dtype in the table is the default result without value-dependent promotion.
-
-
-### **Mode 3: disable\_implicit\_promotion + enable\_promotion\_to\_float64**
-
-The dtype behavior of mode 3 is the same as mode 1, except that implicit conversions between TF dtypes are disallowed.
-
-Table: TF dtype promotion result of `tf.add` with disable\_implicit\_promotion + enable\_promotion\_to\_float64
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- u8
- |
- -
- |
- u8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u8
- |
- f64
- |
- c128
- |
-
-
- u16
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u16
- |
- f64
- |
- c128
- |
-
-
- u32
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- f64
- |
- c128
- |
-
-
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- f64
- |
- c128
- |
-
-
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- f64
- |
- c128
- |
-
-
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- f64
- |
- c128
- |
-
-
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- f64
- |
- c128
- |
-
-
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- bf16
- |
- c64
- |
-
-
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- f16
- |
- c64
- |
-
-
- f32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- f64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f64
- |
- -
- |
- -
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
- i*
- |
- i64
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i64
- |
- f64
- |
- c128
- |
-
-
- f*
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- f64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- f64
- |
- f64
- |
- c128
- |
-
-
- c*
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
- c128
- |
- c64
- |
- c128
- |
- c128
- |
- c128
- |
- c128
- |
-
-
-
-
-Highlighted: When one of the inputs is a python scalar, the dtype in the table is the default result without value-dependent promotion.
-
-
-### **Mode 4: disable\_implicit\_promotion + disable\_promotion\_to\_float64**
-
-The dtype behavior of mode 4 is the same as mode 2, except that implicit conversions between TF dtypes are disallowed.
-
-Table: TF dtype promotion result of `tf.add` with disable\_implicit\_promotion + disable\_promotion\_to\_float64
-
-
-
-
-
- |
- b
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f64
- |
- c64
- |
- c128
- |
- i*
- |
- f*
- |
- c*
- |
-
-
- b
- |
- b
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- f32
- |
- c64
- |
-
-
- u8
- |
- -
- |
- u8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u8
- |
- f32
- |
- c64
- |
-
-
- u16
- |
- -
- |
- -
- |
- u16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u16
- |
- f32
- |
- c64
- |
-
-
- u32
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u32
- |
- f32
- |
- c64
- |
-
-
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- u64
- |
- f32
- |
- c64
- |
-
-
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i8
- |
- f32
- |
- c64
- |
-
-
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i16
- |
- f32
- |
- c64
- |
-
-
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i32
- |
- f32
- |
- c64
- |
-
-
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- i64
- |
- f32
- |
- c64
- |
-
-
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- bf16
- |
- bf16
- |
- c64
- |
-
-
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f16
- |
- f16
- |
- c64
- |
-
-
- f32
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- -
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- f64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- f64
- |
- -
- |
- -
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- c64
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c64
- |
- -
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- c128
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- -
- |
- c128
- |
- c64
- |
- c64
- |
- c64
- |
-
-
- i*
- |
- i64
- |
- u8
- |
- u16
- |
- u32
- |
- u64
- |
- i8
- |
- i16
- |
- i32
- |
- i64
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- i64
- |
- f32
- |
- c64
- |
-
-
- f*
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- f32
- |
- bf16
- |
- f16
- |
- f32
- |
- f32
- |
- c64
- |
- c64
- |
- f32
- |
- f32
- |
- c64
- |
-
-
- c*
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
- c64
- |
-
-
-
-
-Highlighted: When one of the inputs is a python scalar, the dtype in the table is the default result without value-dependent promotion.
From 1e60a174456e00f87ea4828bb4b60cc1980a9d35 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 13:44:49 -0700
Subject: [PATCH 04/20] Update 2022-10-18-promotion-semantics.md
---
rfcs/2022-10-18-promotion-semantics.md | 4769 ++++++++++++++++++++++++
1 file changed, 4769 insertions(+)
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/2022-10-18-promotion-semantics.md
index 6173b245b..5543cf9d3 100644
--- a/rfcs/2022-10-18-promotion-semantics.md
+++ b/rfcs/2022-10-18-promotion-semantics.md
@@ -1,4 +1,4773 @@
+
+
+
+
+## Making dtype promotion semantics in Tensorflow more consistent
+
## **Objective**
Currently TF has no consistent, well-defined type promotion rules. This document proposes a well-defined, consistent and clear dtype promotion rule for TF. The introduced changes make TF APIs more similar to NumPy, with some differences that emphasize TF’s applications in machine learning. This should make dtype promotions in TF much more consistent and predictable.
+
+[**What the doc is**] This doc discusses the preferred dtype promotion semantics/behaviors of Tensorflow (TF) and Tensorflow-numpy (TF-numpy) in the binary ops including add, sub, mul, div, pow and mod.
+
+[**What the doc is not**] This doc does not discuss the implementation plans.
+
+
+## **Motivation**
+
+In Tensorflow’s APIs, dtype promotion is very broken compared to JAX or NumPy. Many users have complained about these surprising behaviors (example: go/broccoli-tf-add, b/158346631, b/154456219). See [this doc ](https://docs.google.com/document/d/1jOXJ1YAQAtseyYDDGm4gsr7xJ4F2FtvfwZJgNsW8cVA/edit#)for a more detailed description.
+
+[TF-numpy](https://www.tensorflow.org/guide/tf_numpy#type_promotion) is a strategic project in Tensorflow. Compared to Tensorflow, TF-numpy’s dtype promotion behavior is more consistent because it [mainly relies on NumPy’s dtype promotion semantics](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/numpy_ops/np_dtypes.py;rcl=399530502;l=112). In the [long-term vision doc](https://goto.google.com/tf-numpy-path-forward), unifying the dtype promotion semantics of TF and TF-numpy is a necessary first step.
+
+The unification of TF and TF-numpy’s dtype promotion semantics is a great chance to think about what semantics should be the long-term, user-friendly solution for TF.
+
+
+## **User Benefit**
+
+There have been ongoing complaints about the inconsistent dtype promotion behaviors of TF. To give an example, switching the inputs’ positions directly affects the outcome:
+
+
+```
+c = tf.constant(1.0)
+tf.add(c, 3) # tf.Tensor(4.0, shape=(), dtype=float32)
+tf.add(3, c) # InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a int32 tensor but is a float tensor [Op:AddV2]
+```
+
+
+For brevity we will sometimes use shortened terms to represent the dtypes as shown in the following examples:
+
+
+
+* `b` means tf.bool
+* `u8` means tf.uint8
+* `i16` means tf.int16
+* `bf16` means [tf.bfloat16](https://cloud.google.com/tpu/docs/bfloat16)
+* `f64` means tf.float64
+* `c64` means tf.complex64
+* `i*` means python int
+* `f*` means python float
+* `c*` means python complex
+
+The table below summarizes the dtype promotion behavior between two TF tensors/python scalars with method `__add__` and API `tf.add` respectively. Each row and column represents a combination of input types.
+
+Table: TF dtype promotion result of dunder method `__add__` and `tf.add`
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | -  | -  | -  |
+| u8   | - | u8 | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u8 | -  | -  |
+| u16  | - | -  | u16 | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u16 | - | -  |
+| u32  | - | -  | -   | u32 | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u32 | - | -  |
+| u64  | - | -  | -   | -   | u64 | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u64 | - | -  |
+| i8   | - | -  | -   | -   | -   | i8 | -   | -   | -   | -    | -   | -   | -   | -   | -    | i8 | -  | -  |
+| i16  | - | -  | -   | -   | -   | -  | i16 | -   | -   | -    | -   | -   | -   | -   | -    | i16 | - | -  |
+| i32  | - | -  | -   | -   | -   | -  | -   | i32 | -   | -    | -   | -   | -   | -   | -    | i32 | - | -  |
+| i64  | - | -  | -   | -   | -   | -  | -   | -   | i64 | -    | -   | -   | -   | -   | -    | i64 | - | -  |
+| bf16 | - | -  | -   | -   | -   | -  | -   | -   | -   | bf16 | -   | -   | -   | -   | -    | bf16 | bf16 | - |
+| f16  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | f16 | -   | -   | -   | -    | f16 | f16 | -  |
+| f32  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | f32 | -   | -   | -    | f32 | f32 | -  |
+| f64  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | f64 | -   | -    | f64 | f64 | -  |
+| c64  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | c64 | -    | c64 | c64 | c64 |
+| c128 | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | c128 | c128 | c128 | c128 |
+| i*   | - | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i32 |    |    |
+| f*   | - | -  | -   | -   | -   | -  | -   | -   | -   | bf16 | f16 | f32 | f64 | c64 | c128 | f32 | f32 |   |
+| c*   | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | c64 | c128 | c128 | c128 | c128 |
+
+In the table, the dash symbol `-` means unsupported: an error gets raised if the two dtypes are used in the call to that op. The cells highlighted in red are only supported in `__add__`, and the cells in yellow are only supported in `tf.add`. In summary, the existing TF APIs have these significant issues:
+
+
+
+1. Boolean dtypes are not supported;
+2. Implicit conversions between two TF dtypes are strictly disallowed, resulting in verbose code at times;
+3. Implicit conversions from some python dtypes to some TF dtypes are disallowed;
+4. `tf.add` is not even commutative - switching the positions of the two arguments sometimes raises an error and sometimes returns the correct value.
+
+When the inputs involve NumPy arrays, the behaviors can be even more broken. For brevity we have included NumPy-input behaviors in the appendix at the end of the doc.
+
+We aim to make the system commutative, predictable and correct.
+
+
+## **Design Proposal**
+
+We propose introducing three dtype promotion modes in TF:
+
+
+
+* `tf.ImplicitPromotion.ALL`
+* `tf.ImplicitPromotion.SAFE`
+* `tf.ImplicitPromotion.NONE`
+
+The three modes determine how often implicit dtype promotions happen in TF APIs. In the following examples we will use `tf.add` to demonstrate the modes. The dunder method `__add__` is expected to have the same behavior as `tf.add`. For brevity, NumPy (np) array inputs are not discussed in this section. However, the proposed modes in this RFC will treat np array inputs in the same way as tensor inputs (see more in the appendix).
+
+Table: TF dtype promotion result of `tf.add` after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow. Rows/columns highlighted in blue only show the default result without overflow of inputs.
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| u8   | u8 | u8 | u16 | u32 | u64 | i16 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | u8 | f64 | c128 |
+| u16  | u16 | u16 | u16 | u32 | u64 | i32 | i32 | i32 | i64 | f32 | f32 | f32 | f64 | c64 | c128 | u16 | f64 | c128 |
+| u32  | u32 | u32 | u32 | u32 | u64 | i64 | i64 | i64 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | u32 | f64 | c128 |
+| u64  | u64 | u64 | u64 | u64 | u64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | u64 | f64 | c128 |
+| i8   | i8 | i16 | i32 | i64 | f64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i8 | f64 | c128 |
+| i16  | i16 | i16 | i32 | i64 | f64 | i16 | i16 | i32 | i64 | f32 | f32 | f32 | f64 | c64 | c128 | i16 | f64 | c128 |
+| i32  | i32 | i32 | i32 | i64 | f64 | i32 | i32 | i32 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | i32 | f64 | c128 |
+| i64  | i64 | i64 | i64 | i64 | f64 | i64 | i64 | i64 | i64 | f64 | f64 | f64 | f64 | c128 | c128 | i64 | f64 | c128 |
+| bf16 | bf16 | bf16 | f32 | f64 | f64 | bf16 | f32 | f64 | f64 | bf16 | f32 | f32 | f64 | c64 | c128 | bf16 | bf16 | c64 |
+| f16  | f16 | f16 | f32 | f64 | f64 | f16 | f32 | f64 | f64 | f32 | f16 | f32 | f64 | c64 | c128 | f16 | f16 | c64 |
+| f32  | f32 | f32 | f32 | f64 | f64 | f32 | f32 | f64 | f64 | f32 | f32 | f32 | f64 | c64 | c128 | f32 | f32 | c64 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | c128 | c128 | f64 | f64 | c128 |
+| c64  | c64 | c64 | c64 | c128 | c128 | c64 | c64 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
+| i*   | i64 | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i64 | f64 | c128 |
+| f*   | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | bf16 | f16 | f32 | f64 | c64 | c128 | f64 | f64 | c128 |
+| c*   | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c64 | c64 | c64 | c128 | c64 | c128 | c128 | c128 | c128 |
+
+### **Mode 1: tf.ImplicitPromotion.ALL**
+
+In this mode, we allow all implicit promotions to happen in the op, even if both inputs have TF dtypes. To summarize the rules of this mode:
+
+
+
+* When the two inputs have non-matching TF dtypes, they will be promoted to the lowest-width dtype that preserves the precision and range of the inputs. However, the promoted integer, float and complex dtype widths are capped at 64, 64 and 128 bits respectively. For example:
+ * The summation of `tf.bfloat16` and `tf.uint8` will be type `tf.bfloat16`.
+ * The summation of `tf.int64` and `tf.float64` will be capped at type `tf.float64` despite precision loss and overflow risks.
+* When one of the inputs is a python integer/float/complex, and the other is a TF dtype:
+ * If the python scalar value falls within the range of the TF dtype, the result dtype loosely follows this promotion direction: TF bool -> python integer -> TF signed/unsigned integers -> python float -> TF float -> python complex/TF complex.
+ * The dtype promotion is allowed only if the python scalar is within the range of the determined dtype. Otherwise, an exception is raised. For example, `tf.constant([1], tf.int8) + 1` produces a `tf.int8` Tensor, while `tf.constant([1], tf.int8) + 1000` raises an error.
+
+This mode is intended to provide a user experience similar to NumPy behaviors.
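+
+Below is a brief illustrative sketch of what these rules would mean in practice. It only restates results from the table above; it does not describe current TF behavior, and the comments are the proposed `ALL`-mode outcomes.
+
+```
+x = tf.constant([1], tf.bfloat16) + tf.constant([1], tf.uint8)   # -> tf.bfloat16
+y = tf.constant([1], tf.int64) + tf.constant([1.0], tf.float64)  # -> tf.float64 (capped at 64 bits)
+z = tf.constant([1], tf.int8) + 1                                # -> tf.int8 (the scalar fits)
+w = tf.constant([1], tf.int8) + 1000                             # -> raises: 1000 is out of tf.int8 range
+```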
+
+
+### **Mode 2: tf.ImplicitPromotion.SAFE**
+
+Mode `SAFE` follows the same rules as mode `ALL`, but disallows “unsafe” dtype promotions. In general, we think the following two types of conversions are unsafe:
+
+
+
+* If the inputs are both Tensors and the result dtype would be “wider” than all the inputs. For example, the summation of `tf.uint8` and `tf.int8` would be type `tf.int16`. These dtype promotions risk increasing the model’s memory consumption and slowing down computation.
+* If the inputs are both Tensors and the result dtype cannot preserve all the precisions of the inputs. For example, the summation of `tf.uint64` and `tf.float32` would be type `tf.float64`.
+
+With the above principle, if one of the inputs is a Tensor while the other is a python scalar, we only allow implicit promotion if the scalar can fit into the Tensor’s dtype. For example:
+
+
+
+* The summation between a `tf.int32` and a python float is always disallowed;
+* The summation between a `tf.uint8` and a python int is allowed only if the python int is between 0 and 255.
+
+For these disallowed dtype promotions, we require users to cast the inputs explicitly, as sketched below.
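+
+The following sketch shows the kind of explicit casts we would expect under `SAFE`; `tf.cast` is the existing TF API, and the allowed/disallowed cases are the ones proposed above.
+
+```
+x = tf.constant([1], tf.uint8)
+y = tf.constant([1], tf.int8)
+# x + y would be disallowed under SAFE because the result widens to tf.int16.
+z = tf.cast(x, tf.int16) + tf.cast(y, tf.int16)   # explicit cast, allowed
+
+a = tf.constant([1], tf.int32)
+# a + 0.5 would be disallowed under SAFE because a python float never fits an int dtype.
+b = tf.cast(a, tf.float32) + 0.5                  # explicit cast, allowed
+```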
+
+Another advantage of mode `SAFE` is that it avoids the silent non-associativity that mode `ALL` can exhibit in the following scenario, because it does not allow any dtype widening:
+
+
+
+* `(tf.zeros(tf.uint8) + tf.zeros(tf.int8)) + tf.zeros(tf.float16)` evaluates to dtype `tf.float32`;
+* `tf.zeros(tf.uint8) + (tf.zeros(tf.int8) + tf.zeros(tf.float16))` evaluates to dtype `tf.float16`.
+
+
+### **Mode 3: tf.ImplicitPromotion.NONE**
+
+The dtype behavior of mode `NONE` is the most conservative: no dtype promotion is allowed between Tensors. When the inputs include python scalars, it follows the same rule as mode `SAFE`.
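+
+A minimal sketch of the `NONE` behavior, again only restating the rules above rather than current TF behavior:
+
+```
+# Under NONE, Tensor-Tensor promotions are rejected outright:
+tf.constant([1], tf.int32) + tf.constant([1], tf.int64)   # -> error; cast explicitly instead
+# Python scalars follow the SAFE rule: allowed only if the scalar fits the Tensor's dtype.
+tf.constant([1], tf.int32) + 2                            # -> tf.int32
+```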
+
+
+### **Alternatives Considered: Capping the promotion system at float32**
+
+The TF-NumPy project contains another flag [allow_float64](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/numpy_ops/np_dtypes.py;rcl=399530502;l=88). When disabled, it caps converted floating-point numbers at 32 bits and complex numbers at 64 bits. This is useful when users want to enjoy implicit conversions in all cases while avoiding any performance regressions related to double-precision floating-point numbers. This feature is now used by [Trax](https://b.corp.google.com/issues/178862061).
+
+Our current plan does not involve adding this flag in TF, as it’s orthogonal to the proposed modes. If there are more user needs we can consider exposing it in TF as well.
+
+
+### **Alternatives Considered: A dtype promotion “lattice”**
+
+Unlike the approaches above, JAX used a dtype promotion lattice to define its semantics. More details can be seen in their [design doc](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#properties-of-a-type-promotion-lattice). Their lattice system has these advantages:
+
+
+
+* A lattice can be much more concise to describe the promotion semantics;
+* The lattice can ensure associativity as well as commutativity. In contrast, the promotion behaviors in this proposal are not associative:
+ * `(tf.zeros(tf.uint8) + tf.zeros(tf.int8)) + tf.zeros(tf.float16)` evaluates to dtype `tf.float32`;
+ * `tf.zeros(tf.uint8) + (tf.zeros(tf.int8) + tf.zeros(tf.float16))` evaluates to dtype `tf.float16`.
+
+~~However the lattice system also introduced extra complexity in the design. For example:~~
+
+
+
+* ~~JAX ended up introducing the concepts of “weak types” in place of python types;~~
+* ~~JAX adopted very aggressive dtype promotion semantics - an int64 dtype can be promoted to a float16, resulting in higher risks of overflow.~~
+
+~~Since our ultimate goal is to help the users maintain a clear and concise mental model, adopting a dtype promotion table is an easier approach.~~
+
+Compared to a lattice system, the proposed table-based system does not guarantee associativity. However, we can avoid such usage scenarios in the proposed SAFE mode (no implicit promotion to wider dtypes). As a result, it’s not clearly justifiable to switch to a lattice system from the table-based approach, which already exists in project TF-NumPy.
+
+
+### **Alternatives Considered: Value-dependent promotion (a.k.a. “safe casting” in NumPy)**
+
+NumPy has a feature “safe casting”: the dtype promotion API, [numpy.result_type](https://numpy.org/doc/stable/reference/generated/numpy.result_type.html), can return a different dtype depending on the input python scalar value:
+
+
+```
+np.result_type(np.int8, 200) # dtype('int16')
+np.result_type(np.int8, 100) # dtype('int8')
+```
+
+
+This slightly reduces the risk of overflow. However, the result is unpredictable. NumPy is considering removing this feature in a future release (see proposal: [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html)). JAX [did not adopt this feature](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html#enter-python-scalars) either. We omit it in this design.
+
+
+### **Performance Implications**
+
+For an existing piece of TF code in python, we do not expect the performance to change after switching to any of the proposed modes. This is because all the currently allowed implicit dtype promotions in TF stay unchanged. As for the currently disallowed dtype promotions, users must already be performing explicit casts, which are not affected by the change.
+
+For newly written code, however, mode `ALL` carries a higher chance of computation slowdown because of implicit promotions to wider dtypes.
+
+We can carry out the following steps to mitigate user surprises:
+
+
+
+* Add benchmark tests to verify that the new modes won’t affect the performance of existing TF code.
+* Publish tutorials that help users select the most appropriate dtype promotion modes.
+* [optional] Add an API that instruments the number of implicit, unsafe dtype conversions in the program (e.g. promotions from 32 bits to 64 bits). This helps users understand whether the promotions contribute to computation slowdown in the graphs.
+
+
+### **Dependencies**
+
+This proposal is not expected to affect dependencies to Tensorflow.
+
+
+#### **Engineering Impact**
+
+
+We do not expect changes in binary size/startup time/build time/test time.
+
+The majority of the dtype promotion logic already exists in Tensorflow under namescope `tensorflow.experimental.numpy`. However, refactoring/updates will be needed under directory tensorflow/python/ops.
+
+
+#### **Platforms and Environments**
+
+This proposal is expected to take effect on all platforms in the python environment.
+
+
+#### **Best Practices/Tutorials and Examples**
+
+As mentioned in section “Performance Implications” we will publish tutorials that help users select the most appropriate dtype promotion modes.
+
+
+#### **Compatibility/User Impact**
+
+This is a breaking change. As mentioned above, existing code is expected to continue working. However, some tests will fail because more implicit dtype conversions are allowed. We can mitigate the user impact by adding a flag that preserves the existing dtype promotion behaviors.
+
+The proposed changes will be rolled out in various stages:
+
+
+
+1. Default behaviors stay unchanged, with the flags introduced under namespace `tf.experimental` to collect user feedback;
+2. Default behaviors switched to one of the modes above, while the old behaviors can be re-enabled via a flag;
+3. [optional] Completely disable the old, broken dtype promotion behaviors.
+
+
+## **Questions and Discussion Topics**
+
+
+#### **Are there changes expected in tf.Variable?**
+
+In APIs such as `tf.add` and dunder methods `__add__`, the same changes are expected to take place because these methods first read the Variable into a Tensor, then carry out the Tensor-to-Tensor computations.
+
+Currently, inplace ops such as `assign_add` are also broken. The two tables below show the result of `assign_add` when the input is a Tensor, a python scalar or a NumPy array:
+
+
+
+* When the input is a Tensor, no dtype conversion is allowed.
+* When the input is a python scalar, it defers to the variable’s dtype. However, python float cannot be converted to int, and complex cannot be converted to either int or float dtypes.
+* When the input is a NumPy array, it always gets converted to the variable’s dtype regardless of any precision loss (for example `tf.Variable(1) + np.array(1.5)` returns 2; see the sketch below).
+
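+A short sketch of the current (broken) behavior described in the bullets above, assuming `numpy` is imported as `np`:
+
+```
+v = tf.Variable(1)            # dtype tf.int32
+v.assign_add(np.array(1.5))   # NumPy argument silently cast to int32; v becomes 2, not 2.5
+v.assign_add(1.5)             # python float -> int variable is rejected today
+```
+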
+Table: Result of tf.Variable (rows) inplace op `assign_add` with a Tensor or python scalar as the argument (columns).
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | -  | -  | -  |
+| u8   | - | u8 | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u8 | -  | -  |
+| u16  | - | -  | u16 | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u16 | - | -  |
+| u32  | - | -  | -   | u32 | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u32 | - | -  |
+| u64  | - | -  | -   | -   | u64 | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u64 | - | -  |
+| i8   | - | -  | -   | -   | -   | i8 | -   | -   | -   | -    | -   | -   | -   | -   | -    | i8 | -  | -  |
+| i16  | - | -  | -   | -   | -   | -  | i16 | -   | -   | -    | -   | -   | -   | -   | -    | i16 | - | -  |
+| i32  | - | -  | -   | -   | -   | -  | -   | i32 | -   | -    | -   | -   | -   | -   | -    | i32 | - | -  |
+| i64  | - | -  | -   | -   | -   | -  | -   | -   | i64 | -    | -   | -   | -   | -   | -    | i64 | - | -  |
+| bf16 | - | -  | -   | -   | -   | -  | -   | -   | -   | bf16 | -   | -   | -   | -   | -    | bf16 | bf16 | - |
+| f16  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | f16 | -   | -   | -   | -    | f16 | f16 | -  |
+| f32  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | f32 | -   | -   | -    | f32 | f32 | -  |
+| f64  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | f64 | -   | -    | f64 | f64 | -  |
+| c64  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | c64 | -    | c64 | c64 | c64 |
+| c128 | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | c128 | c128 | c128 | c128 |
+
+Table: Result of tf.Variable (rows) inplace op `assign_add` with a NumPy array as the argument (columns).
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|
+| b    | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| u8   | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | - | u8 | u8 | u8 | u8 | u8 |
+| u16  | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | - | u16 | u16 | u16 | u16 | u16 |
+| u32  | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | - | u32 | u32 | u32 | u32 | u32 |
+| u64  | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | - | u64 | u64 | u64 | u64 | u64 |
+| i8   | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | - | i8 | i8 | i8 | i8 | i8 |
+| i16  | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | - | i16 | i16 | i16 | i16 | i16 |
+| i32  | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | - | i32 | i32 | i32 | i32 | i32 |
+| i64  | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | - | i64 | i64 | i64 | i64 | i64 |
+| bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | - | bf16 | bf16 | bf16 | bf16 | bf16 |
+| f16  | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | - | f16 | f16 | f16 | f16 | f16 |
+| f32  | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | - | f32 | f32 | f32 | f32 | f32 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | f64 | f64 | f64 | f64 | f64 |
+| c64  | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | - | c64 | c64 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | - | c128 | c128 | c128 | c128 | c128 |
+
+We can make these ops more consistent as well, following the rules in the three proposed modes. However, we also have another constraint: the Variable’s dtype cannot change. As a result, any dtype promotion that results in a dtype different from the Variable’s dtype is disabled.
+
+Table: Result of tf.Variable (rows) inplace op `assign_add` with a Tensor or python scalar as the argument (columns) after the proposed changes. `NONE` only allows the unhighlighted cells. `SAFE` allows all `NONE` cases plus the cells highlighted in green. `ALL` allows all `SAFE` cases plus the cells highlighted in yellow.
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 | i* | f* | c* |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|----|
+| b    | b | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | -  | -  | -  |
+| u8   | u8 | u8 | -  | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u8 | -  | -  |
+| u16  | u16 | u16 | u16 | -  | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u16 | - | -  |
+| u32  | u32 | u32 | u32 | u32 | -  | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    | u32 | - | -  |
+| u64  | u64 | u64 | u64 | u64 | u64 | - | -   | -   | -   | -    | -   | -   | -   | -   | -    | u64 | - | -  |
+| i8   | i8 | -  | -   | -   | -   | i8 | -   | -   | -   | -    | -   | -   | -   | -   | -    | i8 | -  | -  |
+| i16  | i16 | i16 | -  | -   | -   | i16 | i16 | -  | -   | -    | -   | -   | -   | -   | -    | i16 | - | -  |
+| i32  | i32 | i32 | i32 | -  | -   | i32 | i32 | i32 | -  | -    | -   | -   | -   | -   | -    | i32 | - | -  |
+| i64  | i64 | i64 | i64 | i64 | -  | i64 | i64 | i64 | i64 | -   | -   | -   | -   | -   | -    | i64 | - | -  |
+| bf16 | bf16 | bf16 | - | -   | -   | bf16 | - | -   | -   | bf16 | -   | -   | -   | -   | -    | bf16 | bf16 | - |
+| f16  | f16 | f16 | -  | -   | -   | f16 | -  | -   | -   | -    | f16 | -   | -   | -   | -    | f16 | f16 | -  |
+| f32  | f32 | f32 | f32 | -  | -   | f32 | f32 | -  | -   | f32  | f32 | f32 | -   | -   | -    | f32 | f32 | -  |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | -   | -    | f64 | f64 | -  |
+| c64  | c64 | c64 | c64 | -  | -   | c64 | c64 | -  | -   | c64  | c64 | c64 | -   | c64 | -    | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 |
+
+#### **Are there also issues in conversions between Tensor and numpy arrays?**
+
+There are arguably more serious issues when Tensors and numpy arrays are mixed - wrong results can appear silently:
+
+
+```
+a = np.array(3.1)
+c = tf.constant(1)
+print(tf.add(a, c)) # InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a double tensor but is a int32 tensor [Op:AddV2]
+print(tf.add(c, a)) # tf.Tensor(4, shape=(), dtype=int32) - Even worse than exceptions
+```
+
+
+To give a comprehensive overview, the two tables below show the results of `tf.add(tensor, numpy array)` and `tf.add(numpy array, tensor)` :
+
+Table: TF dtype promotion result of `tf.add` with a Tensor as the first argument (rows) and np array in the second argument (columns).
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|
+| b    | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| u8   | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | u8 | - | u8 | u8 | u8 | u8 | u8 |
+| u16  | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | u16 | - | u16 | u16 | u16 | u16 | u16 |
+| u32  | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | u32 | - | u32 | u32 | u32 | u32 | u32 |
+| u64  | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | u64 | - | u64 | u64 | u64 | u64 | u64 |
+| i8   | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | i8 | - | i8 | i8 | i8 | i8 | i8 |
+| i16  | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | i16 | - | i16 | i16 | i16 | i16 | i16 |
+| i32  | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | i32 | - | i32 | i32 | i32 | i32 | i32 |
+| i64  | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | i64 | - | i64 | i64 | i64 | i64 | i64 |
+| bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | bf16 | - | bf16 | bf16 | bf16 | bf16 | bf16 |
+| f16  | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | f16 | - | f16 | f16 | f16 | f16 | f16 |
+| f32  | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | f32 | - | f32 | f32 | f32 | f32 | f32 |
+| f64  | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 | - | f64 | f64 | f64 | f64 | f64 |
+| c64  | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | c64 | - | c64 | c64 | c64 | c64 | c64 |
+| c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | c128 | - | c128 | c128 | c128 | c128 | c128 |
+
+Table: TF dtype promotion result of `tf.add` with a np array as the first argument (rows) and tensor in the second argument (columns).
+
+|      | b | u8 | u16 | u32 | u64 | i8 | i16 | i32 | i64 | bf16 | f16 | f32 | f64 | c64 | c128 |
+|------|---|----|-----|-----|-----|----|-----|-----|-----|------|-----|-----|-----|-----|------|
+| b    | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| u8   | - | u8 | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| u16  | - | -  | u16 | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| u32  | - | -  | -   | u32 | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| u64  | - | -  | -   | -   | u64 | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| i8   | - | -  | -   | -   | -   | i8 | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| i16  | - | -  | -   | -   | -   | -  | i16 | -   | -   | -    | -   | -   | -   | -   | -    |
+| i32  | - | -  | -   | -   | -   | -  | -   | i32 | -   | -    | -   | -   | -   | -   | -    |
+| i64  | - | -  | -   | -   | -   | -  | -   | -   | i64 | -    | -   | -   | -   | -   | -    |
+| bf16 | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | -    |
+| f16  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | f16 | -   | -   | -   | -    |
+| f32  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | f32 | -   | -   | -    |
+| f64  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | f64 | -   | -    |
+| c64  | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | c64 | -    |
+| c128 | - | -  | -   | -   | -   | -  | -   | -   | -   | -    | -   | -   | -   | -   | c128 |
+
+
+To briefly describe the issues:
+
+
+
+1. dtypes `tf.bfloat16` and `tf.bool` are poorly supported;
+2. If the first input is a tensor, the second input (the np array) is always converted to the tensor’s dtype, resulting in significant precision loss in many scenarios;
+3. If the first input is a np array and the second is a tensor, the np array is first converted to a tensor with the corresponding dtype. TF then does not attempt any implicit dtype conversion, causing `InvalidArgumentError` whenever the two inputs have mismatching dtypes.
+
+Though the behaviors with np array inputs are not the same as with python scalars, they share the same root cause: at present, TF does not have a centralized dtype promotion system, and incorrectly uses the [dtype of the first Tensor in its list of arguments](https://source.corp.google.com/piper///depot/google3/third_party/tensorflow/python/ops/math_ops.py;rcl=449753780;l=1371) as the dtype to which all arguments are promoted. The proposed modes in this RFC will treat the np array inputs in the same way as the tensor inputs.
+
+
+#### **What happens to e.g. `bool - bool` or `bool / bool`?**
+
+With NumPy:
+
+
+
+* Addition between two Booleans: equivalent to OR
+* Subtraction between two Booleans: forbidden
+* Multiplication between two Booleans: AND
+* Division between two Booleans: converted to float64 then divide
+
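+These NumPy behaviors can be checked directly (plain NumPy, independent of TF), assuming `numpy` is imported as `np`:
+
+```
+np.array(True) + np.array(True)    # True  (logical OR)
+np.array(True) * np.array(False)   # False (logical AND)
+np.array(True) / np.array(True)    # 1.0   (converted to float64, then divided)
+np.array(True) - np.array(True)    # raises TypeError: boolean subtract is not supported
+```
+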
+
+#### **What happens to truediv operations?**
+
+The op truediv is special because it implicitly promotes integral types. For example, when the inputs have matching dtypes, 16-bit integral types are promoted to `f32`, and 32-bit integral types are promoted to `f64`.
+
+Using this rule of thumb, we can easily deduce the promotion behavior between non-matching dtypes when this proposal is landed. For example:
+
+
+
+* `int8` and `int16` -> both `int16` -> `f32` (allowed in `SAFE/ALL` modes)
+* `uint8` and `uint32` -> both `uint32` -> `f64` (allowed in `SAFE/ALL` modes)
+* `int8` and `i*` -> both `int8` -> `f32` (allowed in `NONE/SAFE/ALL` modes)
+* `int8` and `uint16` -> both `int32` -> `f64` (allowed in `ALL` mode)
+
+In the above examples, the first arrow represents the dtype promotion in this proposal, and the second arrow represents the promotion in op truediv. Note that the `f64` result for `int8` and `uint16` is wider than strictly necessary, but it is consistent with the combination of the dtype promotion rules and the truediv promotion rules.
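+
+A short sketch of the deduced behavior using the existing `tf.divide` API; the commented results are the ones derived above under the proposal, not current TF behavior:
+
+```
+tf.divide(tf.constant([1], tf.int8), tf.constant([2], tf.int16))    # -> f32 (SAFE/ALL)
+tf.divide(tf.constant([1], tf.uint8), tf.constant([2], tf.uint32))  # -> f64 (SAFE/ALL)
+tf.divide(tf.constant([1], tf.int8), 2)                             # -> f32 (NONE/SAFE/ALL)
+```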
From f60be814bc8ab8c595297e24a233f1fcbaa9b382 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 13:48:13 -0700
Subject: [PATCH 05/20] Update 2022-10-18-promotion-semantics.md
---
rfcs/2022-10-18-promotion-semantics.md | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/2022-10-18-promotion-semantics.md
index 5543cf9d3..4c57d738a 100644
--- a/rfcs/2022-10-18-promotion-semantics.md
+++ b/rfcs/2022-10-18-promotion-semantics.md
@@ -1,8 +1,14 @@
-
-## Making dtype promotion semantics in Tensorflow more consistent
+# Making dtype promotion semantics in Tensorflow more consistent
+
+| Status | (Proposed / Accepted / Implemented / Obsolete) |
+:-------------- |:---------------------------------------------------- |
+| **RFC #** | [NNN](https://github.com/tensorflow/community/pull/NNN) (update when you have community PR #)|
+| **Author(s)** | Jiawei Xia (jiaweix@google.com) |
+| **Sponsor** | Peng Wang (pengwang@google.com) |
+| **Updated** | 2022-10-18 |
## **Objective**
From 68e9291cfdcfb78547a57e754ddd5faa9875ce8f Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 14:11:07 -0700
Subject: [PATCH 06/20] Update 2022-10-18-promotion-semantics.md
---
rfcs/2022-10-18-promotion-semantics.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/2022-10-18-promotion-semantics.md
index 4c57d738a..bd4e81536 100644
--- a/rfcs/2022-10-18-promotion-semantics.md
+++ b/rfcs/2022-10-18-promotion-semantics.md
@@ -3,7 +3,7 @@
# Making dtype promotion semantics in Tensorflow more consistent
-| Status | (Proposed / Accepted / Implemented / Obsolete) |
+| Status | (Proposed) |
:-------------- |:---------------------------------------------------- |
| **RFC #** | [NNN](https://github.com/tensorflow/community/pull/NNN) (update when you have community PR #)|
| **Author(s)** | Jiawei Xia (jiaweix@google.com) |
From 657523c5459d29c8c5103f27646faf6dbabc0e03 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 14:15:50 -0700
Subject: [PATCH 07/20] Rename 2022-10-18-promotion-semantics.md to
20221018-promotion-semantics.md
---
...-18-promotion-semantics.md => 20221018-promotion-semantics.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename rfcs/{2022-10-18-promotion-semantics.md => 20221018-promotion-semantics.md} (100%)
diff --git a/rfcs/2022-10-18-promotion-semantics.md b/rfcs/20221018-promotion-semantics.md
similarity index 100%
rename from rfcs/2022-10-18-promotion-semantics.md
rename to rfcs/20221018-promotion-semantics.md
From 2dca00115436142a49ad2d7d6429ceb4741fef09 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 14:18:12 -0700
Subject: [PATCH 08/20] Create test.md
---
rfcs/20221018-promotion-semantics/test.md | 1 +
1 file changed, 1 insertion(+)
create mode 100644 rfcs/20221018-promotion-semantics/test.md
diff --git a/rfcs/20221018-promotion-semantics/test.md b/rfcs/20221018-promotion-semantics/test.md
new file mode 100644
index 000000000..323fae03f
--- /dev/null
+++ b/rfcs/20221018-promotion-semantics/test.md
@@ -0,0 +1 @@
+foobar
From 6a064b923be243fac24cb75956da1d8cea0c6394 Mon Sep 17 00:00:00 2001
From: JW1992 <31080444+JW1992@users.noreply.github.com>
Date: Tue, 18 Oct 2022 14:39:03 -0700
Subject: [PATCH 09/20] Add files via upload
---
.../Screen Shot 2022-10-18 at 2.24.47 PM.png | Bin 0 -> 209409 bytes
.../Table-TF-dtype-promotion-add-current.png | Bin 0 -> 236147 bytes
.../Table-TF-dtype-promotion-add-proposed.png | Bin 0 -> 692106 bytes
...F-dtype-promotion-add_np_tensor-current.png | Bin 0 -> 131231 bytes
...F-dtype-promotion-add_tensor_np-current.png | Bin 0 -> 256574 bytes
...e-TF-dtype-promotion-assign_add-current.png | Bin 0 -> 169791 bytes
...-TF-dtype-promotion-assign_add-proposed.png | Bin 0 -> 307970 bytes
...F-dtype-promotion-assign_add_np-current.png | Bin 0 -> 256990 bytes
8 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 rfcs/20221018-promotion-semantics/Screen Shot 2022-10-18 at 2.24.47 PM.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-add-current.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-add-proposed.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-add_np_tensor-current.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-add_tensor_np-current.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-assign_add-current.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-assign_add-proposed.png
create mode 100644 rfcs/20221018-promotion-semantics/Table-TF-dtype-promotion-assign_add_np-current.png
diff --git a/rfcs/20221018-promotion-semantics/Screen Shot 2022-10-18 at 2.24.47 PM.png b/rfcs/20221018-promotion-semantics/Screen Shot 2022-10-18 at 2.24.47 PM.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd01309321543bfc31b3895a64af0e9e9f3ac8b6
GIT binary patch
literal 209409
[base85-encoded binary data for Screen Shot 2022-10-18 at 2.24.47 PM.png omitted]