-
+
View source on GitHub
@@ -137,7 +137,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.
convolve
-View
+View
source
@@ -149,6 +149,7 @@ source
broadcast_from_sender_node: Callable[[tf.Tensor], tf.Tensor],
broadcast_from_receiver: Callable[[tf.Tensor], tf.Tensor],
pool_to_receiver: Callable[..., tf.Tensor],
+ extra_receiver_ops: Any = None,
training: bool
) -> tf.Tensor
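
The hunk above adds `extra_receiver_ops` to the `convolve()` override point. For illustration, a minimal sketch of a custom convolution accepting the new argument, assuming the standard `tfgnn.keras.layers.AnyToAnyConvolutionBase` subclassing pattern; the class name and feature handling are hypothetical, not part of this change.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn


class MeanPoolConv(tfgnn.keras.layers.AnyToAnyConvolutionBase):
  """Hypothetical convolution: mean-pools sender node states to receivers."""

  def __init__(self, units: int, **kwargs):
    super().__init__(**kwargs)  # e.g., pass receiver_tag=tfgnn.TARGET at call site
    self._dense = tf.keras.layers.Dense(units)

  def get_config(self):
    return dict(units=self._dense.units, **super().get_config())

  def convolve(self, *,
               sender_node_input, sender_edge_input, receiver_input,
               broadcast_from_sender_node, broadcast_from_receiver,
               pool_to_receiver,
               extra_receiver_ops=None,  # new argument; unused in this sketch
               training):
    # Broadcast sender node states onto incident edges, mean-pool them back
    # to the receiver side, then apply a Dense transformation.
    messages = broadcast_from_sender_node(sender_node_input)
    pooled = pool_to_receiver(messages, reduce_type="mean")
    return self._dense(pooled)
```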
diff --git a/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEGraphUpdate.md b/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEGraphUpdate.md
index 9831a117..feafc94c 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEGraphUpdate.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEGraphUpdate.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
diff --git a/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGENextState.md b/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGENextState.md
index 5a11f269..ab3307e0 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGENextState.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGENextState.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
diff --git a/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEPoolingConv.md b/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEPoolingConv.md
index cefbdf9b..8b019f2a 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEPoolingConv.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/graph_sage/GraphSAGEPoolingConv.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -173,7 +173,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.
convolve
-View
+View
source
@@ -185,6 +185,7 @@ source
broadcast_from_sender_node: Callable[[tf.Tensor], tf.Tensor],
broadcast_from_receiver: Callable[[tf.Tensor], tf.Tensor],
pool_to_receiver: Callable[..., tf.Tensor],
+ extra_receiver_ops: Any = None,
training: bool
) -> tf.Tensor
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention.md
index 957bedb9..47ae18eb 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention.md
@@ -37,3 +37,9 @@ Returns a GraphUpdate layer with a transformer-style multihead attention.
[`MultiHeadAttentionMPNNGraphUpdate(...)`](./multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md):
Returns a GraphUpdate layer for message passing with MultiHeadAttention pooling.
+
+[`graph_update_from_config_dict(...)`](./multi_head_attention/graph_update_from_config_dict.md):
+Returns a MultiHeadAttentionMPNNGraphUpdate initialized from `cfg`.
+
+[`graph_update_get_config_dict(...)`](./multi_head_attention/graph_update_get_config_dict.md):
+Returns ConfigDict for graph_update_from_config_dict() with defaults.
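
The two functions added here follow the same ConfigDict pattern as `vanilla_mpnn.graph_update_from_config_dict`, whose docs are touched later in this diff. A hedged usage sketch; the field names are assumed to mirror the keyword arguments of `MultiHeadAttentionMPNNGraphUpdate` (see that page below), so treat them as illustrative.

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

# Get a ConfigDict pre-filled with defaults, set the required arguments,
# then build the GraphUpdate layer from it.
cfg = multi_head_attention.graph_update_get_config_dict()
cfg.units = 128          # assumed field names, mirroring __init__ kwargs
cfg.message_dim = 128
cfg.num_heads = 4
cfg.receiver_tag = tfgnn.TARGET
layer = multi_head_attention.graph_update_from_config_dict(cfg)
```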
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionConv.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionConv.md
index a3b22963..c90d835d 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionConv.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionConv.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -31,7 +31,7 @@ Transformer-style (dot-product) multi-head attention on GNNs.
kernel_initializer: Union[None, str, tf.keras.initializers.Initializer] = None,
kernel_regularizer: Union[None, str, tf.keras.regularizers.Regularizer] = None,
transform_keys: bool = True,
- score_scaling: bool = True,
+ score_scaling: Literal['none', 'rsqrt_dim', 'trainable_sigmoid'] = 'rsqrt_dim',
transform_values_after_pooling: bool = False,
**kwargs
)
@@ -85,13 +85,13 @@ Note that in the context of graph, only nodes with edges connected are attended
to each other, which means we do NOT compute $N^2$ pairs of scores as in the
original Transformer-style Attention.
-Users are able to remove the scaling of attention scores (score_scaling=False)
-or add an activation on the transformed query (controled by
-`attention_activation`). However, we recommend to remove the scaling when using
-an `attention_activation` since activating both of them may lead to degrated
-accuracy. One can also customize the transformation kernels with different
-intializers, regularizers as well as the use of bias terms, using the other
-arguments.
+Users are able to remove the scaling of attention scores
+(`score_scaling="none"`) or add an activation on the transformed query
+(controlled by `attention_activation`). However, we recommend removing the
+scaling when using an `attention_activation` since activating both of them may
+lead to degraded accuracy. One can also customize the transformation kernels
+with different initializers, regularizers as well as the use of bias terms,
+using the other arguments.
Example: Transformer-style attention on neighbors along incoming edges whose
result is concatenated with the old node state and passed through a Dense layer
@@ -125,7 +125,6 @@ could potentially be beneficial:
```
-
Init args |
@@ -256,9 +255,15 @@ independent of this arg.)
`score_scaling`
-If true, the attention scores are divided by the square root
-of the dimension of keys (i.e., per_head_channels if transform_keys=True,
-else whatever the dimension of combined sender inputs is).
+One of either `"none"`, `"rsqrt_dim"`, or
+`"trainable_sigmoid"`. If set to `"rsqrt_dim"`, the attention scores are
+divided by the square root of the dimension of keys (i.e.,
+`per_head_channels` if `transform_keys=True`, otherwise whatever the
+dimension of combined sender inputs is). If set to `"trainable_sigmoid"`,
+the scores are scaled with `sigmoid(x)`, where `x` is a trainable weight
+of the model that is initialized to `-5.0`, which initially makes all the
+attention weights equal and slowly ramps up as the other weights in the
+layer converge. Defaults to `"rsqrt_dim"`.
|
@@ -270,13 +275,14 @@ the value transformation, then pools with attention coefficients.
Setting this option pools inputs with attention coefficients, then applies
the transformation. This is mathematically equivalent but can be faster
or slower to compute, depending on the platform and the dataset.
-IMPORANT: Toggling this option breaks checkpoint compatibility.
+IMPORTANT: Toggling this option breaks checkpoint compatibility.
+IMPORTANT: Setting this option requires TensorFlow 2.10 or greater,
+because it uses `tf.keras.layers.EinsumDense`.
|
-
Args |
@@ -335,7 +341,6 @@ Forwarded to the base class tf.keras.layers.Layer.
-
Attributes |
@@ -368,7 +373,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.
convolve
-View
+View
source
@@ -394,7 +399,6 @@ from nodes to context). In the end, values have to be pooled from there into a
Tensor with a leading dimension indexed by receivers, see `pool_to_receiver`.
-
Args |
@@ -482,7 +486,6 @@ does not require forwarding this arg, Keras does that automatically.
-
Returns |
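
For the two options documented above, a hedged construction sketch: `score_scaling` now takes a string instead of a bool, and `transform_values_after_pooling=True` needs TF 2.10+ for `tf.keras.layers.EinsumDense`. The `num_heads`/`per_head_channels`/`receiver_tag` values are illustrative.

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

conv = multi_head_attention.MultiHeadAttentionConv(
    num_heads=4,
    per_head_channels=32,
    receiver_tag=tfgnn.TARGET,
    score_scaling="trainable_sigmoid",    # was a bool before this change
    transform_values_after_pooling=True,  # requires TF >= 2.10 (EinsumDense)
)
# Usable like any convolution, e.g. as an edge-set input of a NodeSetUpdate.
```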
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionEdgePool.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionEdgePool.md
index 8136ee3b..5c8df3d8 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionEdgePool.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionEdgePool.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -41,7 +41,6 @@ an edge set to do the analogous pooling of edge states to context.
NOTE: This layer cannot pool node states. For that, use MultiHeadAttentionConv.
-
Args |
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionHomGraphUpdate.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionHomGraphUpdate.md
index a370f6ce..f5473cea 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionHomGraphUpdate.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionHomGraphUpdate.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -20,7 +20,7 @@ Returns a GraphUpdate layer with a transformer-style multihead attention.
*,
num_heads: int,
per_head_channels: int,
- receiver_tag: tfgnn.IncidentNodeOrContextTag,
+ receiver_tag: tfgnn.IncidentNodeTag,
feature_name: str = tfgnn.HIDDEN_STATE,
name: str = 'multi_head_attention',
**kwargs
@@ -43,7 +43,6 @@ details).
> itself requires having an explicit loop in the edge set.
-
Args |
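
Given the narrowed `receiver_tag: tfgnn.IncidentNodeTag`, a minimal usage sketch (hyperparameter values are illustrative):

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

layer = multi_head_attention.MultiHeadAttentionHomGraphUpdate(
    num_heads=4,
    per_head_channels=32,
    receiver_tag=tfgnn.TARGET,  # now an incident-node tag; CONTEXT is not accepted
)
graph = layer(graph)  # graph: a scalar, homogeneous GraphTensor
```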
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md
index 0f330280..05acc027 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -21,7 +21,7 @@ Returns a GraphUpdate layer for message passing with MultiHeadAttention pooling.
units: int,
message_dim: int,
num_heads: int,
- receiver_tag: tfgnn.IncidentNodeOrContextTag,
+ receiver_tag: tfgnn.IncidentNodeTag,
node_set_names: Optional[Collection[tfgnn.NodeSetName]] = None,
edge_feature: Optional[tfgnn.FieldName] = None,
l2_regularization: float = 0.0,
@@ -45,7 +45,6 @@ and all pooled messages, analogous to TF-GNN's
`vanilla_mpnn.VanillaMPNNGraphUpdate` and `gat_v2.GATv2MPNNGraphUpdate`.
-
Args |
@@ -159,7 +158,6 @@ Can be set to a `kernel_initializer` as understood by
-
Returns |
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/all_symbols.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/all_symbols.md
index 68cdf923..de2d267e 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/all_symbols.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/all_symbols.md
@@ -9,3 +9,5 @@
* multi_head_attention.MultiHeadAttentionEdgePool
* multi_head_attention.MultiHeadAttentionHomGraphUpdate
* multi_head_attention.MultiHeadAttentionMPNNGraphUpdate
+* multi_head_attention.graph_update_from_config_dict
+* multi_head_attention.graph_update_get_config_dict
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/graph_update_from_config_dict.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/graph_update_from_config_dict.md
new file mode 100644
index 00000000..ffa05192
--- /dev/null
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/graph_update_from_config_dict.md
@@ -0,0 +1,74 @@
+# multi_head_attention.graph_update_from_config_dict
+
+[TOC]
+
+
+
+
+
+Returns a MultiHeadAttentionMPNNGraphUpdate initialized from `cfg`.
+
+
+multi_head_attention.graph_update_from_config_dict(
+ cfg: config_dict.ConfigDict
+) -> tf.keras.layers.Layer
+
+
+
+
+
+
+
+Args |
+
+
+
+`cfg`
+ |
+
+A `ConfigDict` with the fields defined by
+`graph_update_get_config_dict()`. All fields with non-`None` values are
+used as keyword arguments for initializing and returning a
+`MultiHeadAttentionMPNNGraphUpdate` object. For the required arguments of
+`MultiHeadAttentionMPNNGraphUpdate.__init__`, users must set a value in
+`cfg` before passing it here.
+ |
+
+
+
+
+
+
+
+Returns |
+
+
+A new `MultiHeadAttentionMPNNGraphUpdate` object.
+ |
+
+
+
+
+
+
+
+
+Raises |
+
+
+
+`TypeError`
+ |
+
+if `cfg` fails to supply a required argument for
+`MultiHeadAttentionMPNNGraphUpdate.__init__`.
+ |
+
+
diff --git a/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/graph_update_get_config_dict.md b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/graph_update_get_config_dict.md
new file mode 100644
index 00000000..017fba0a
--- /dev/null
+++ b/tensorflow_gnn/docs/api_docs/python/models/multi_head_attention/graph_update_get_config_dict.md
@@ -0,0 +1,22 @@
+# multi_head_attention.graph_update_get_config_dict
+
+[TOC]
+
+
+
+
+
+Returns ConfigDict for graph_update_from_config_dict() with defaults.
+
+
+multi_head_attention.graph_update_get_config_dict() -> config_dict.ConfigDict
+
+
+
diff --git a/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/VanillaMPNNGraphUpdate.md b/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/VanillaMPNNGraphUpdate.md
index 68df23df..1705f770 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/VanillaMPNNGraphUpdate.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/VanillaMPNNGraphUpdate.md
@@ -20,7 +20,7 @@ Returns a GraphUpdate layer for a Vanilla MPNN.
*,
units: int,
message_dim: int,
- receiver_tag: tfgnn.IncidentNodeOrContextTag,
+ receiver_tag: tfgnn.IncidentNodeTag,
node_set_names: Optional[Collection[tfgnn.NodeSetName]] = None,
edge_feature: Optional[tfgnn.FieldName] = None,
reduce_type: str = 'sum',
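
The same `receiver_tag` narrowing applies here; a short usage sketch under the signature above (values illustrative):

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import vanilla_mpnn

layer = vanilla_mpnn.VanillaMPNNGraphUpdate(
    units=64,
    message_dim=64,
    receiver_tag=tfgnn.TARGET,  # tfgnn.SOURCE or tfgnn.TARGET, not CONTEXT
    reduce_type="sum",
)
graph = layer(graph)  # one round of message passing on a scalar GraphTensor
```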
diff --git a/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/graph_update_from_config_dict.md b/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/graph_update_from_config_dict.md
index 2e62e5b1..b9f5be5f 100644
--- a/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/graph_update_from_config_dict.md
+++ b/tensorflow_gnn/docs/api_docs/python/models/vanilla_mpnn/graph_update_from_config_dict.md
@@ -22,8 +22,8 @@ Returns a VanillaMPNNGraphUpdate initialized from `cfg`.
-
+
Args |
@@ -44,7 +44,6 @@ passing it here.
-
Returns |
@@ -57,7 +56,6 @@ A new `VanillaMPNNGraphUpdate` object.
-
Raises |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn.md b/tensorflow_gnn/docs/api_docs/python/tfgnn.md
index 4878c14f..59ef5ad7 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn.md
@@ -30,6 +30,9 @@ similar to `tf.TensorSpec`. For example, a `FieldSpec` describes an instance of
## Modules
+[`experimental`](./tfgnn/experimental.md) module: Experimental (unstable) parts
+of the public interface of TensorFlow GNN.
+
[`keras`](./tfgnn/keras.md) module: The tfgnn.keras package.
## Classes
@@ -88,6 +91,9 @@ or from context to nodes or edges.
[`broadcast_node_to_edges(...)`](./tfgnn/broadcast_node_to_edges.md): Broadcasts values from nodes to incident edges.
+[`check_compatible_with_schema_pb(...)`](./tfgnn/check_compatible_with_schema_pb.md):
+Checks that the given spec or value is compatible with the graph schema.
+
[`check_homogeneous_graph_tensor(...)`](./tfgnn/check_homogeneous_graph_tensor.md):
Raises ValueError unless there is exactly one node set and edge set.
@@ -134,6 +140,9 @@ dataset from generator of any nest of scalar graph pieces.
[`mask_edges(...)`](./tfgnn/mask_edges.md): Creates a GraphTensor after applying
edge_mask over the specified edge-set.
+[`node_degree(...)`](./tfgnn/node_degree.md): Returns the degree of each node
+w.r.t. one side of an edge set.
+
[`pad_to_total_sizes(...)`](./tfgnn/pad_to_total_sizes.md): Pads graph tensor to the total sizes by inserting fake graph components.
[`parse_example(...)`](./tfgnn/parse_example.md): Parses a batch of serialized Example protos into a single `GraphTensor`.
@@ -271,7 +280,7 @@ TARGET_NAME
**version**
-`'0.4.0.dev1'`
+`'0.5.0.dev1'`
|
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/Context.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/Context.md
index df244afe..c38a1771 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/Context.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/Context.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -116,14 +116,14 @@ The total number of items.
from_fields
-View
+View
source
@classmethod
from_fields(
*,
- features: Optional[tfgnn.Fields ] = None,
+ features: Optional[Fields] = None,
sizes: Optional[Field] = None,
shape: Optional[ShapeLike] = None,
indices_dtype: Optional[tf.dtypes.DType] = None
@@ -206,7 +206,7 @@ A `Context` composite tensor.
get_features_dict
-View
+View
source
@@ -218,12 +218,12 @@ Returns features copy as a dictionary.
replace_features
-View
+View
source
replace_features(
- features: tfgnn.Fields
+ features: Fields
) -> 'Context'
@@ -246,13 +246,13 @@ Enforce the common prefix shape on all the contained features.
__getitem__
-View
+View
source
__getitem__(
feature_name: FieldName
-) -> tfgnn.Field
+) -> Field
Indexing operator `[]` to access feature values by their name.
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/ContextSpec.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/ContextSpec.md
index 34d9bd7e..583b0cfa 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/ContextSpec.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/ContextSpec.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -116,14 +116,14 @@ Do NOT override for custom non-TF types.
from_field_specs
-View
+View
source
@classmethod
from_field_specs(
*,
- features_spec: Optional[tfgnn.FieldsSpec ] = None,
+ features_spec: Optional[FieldsSpec] = None,
sizes_spec: Optional[FieldSpec] = None,
shape: ShapeLike = tf.TensorShape([]),
indices_dtype: tf.dtypes.DType = const.default_indices_dtype
@@ -298,7 +298,7 @@ and `other`.
relax
-View
+View
source
@@ -367,18 +367,15 @@ Return self==value.
__getitem__
-View
+View
source
__getitem__(
feature_name: FieldName
-) -> tfgnn.FieldSpec
+) -> FieldSpec
-
-
-
__ne__
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSet.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSet.md
index 10ebf1b2..b5dab508 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSet.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSet.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -127,16 +127,13 @@ The total number of items.
from_fields
-View
+View
source
@classmethod
from_fields(
- *,
- features: Optional[tfgnn.Fields ] = None,
- sizes: tfgnn.Field ,
- adjacency: Adjacency
+ *, features: Optional[Fields] = None, sizes: Field, adjacency: Adjacency
) -> 'EdgeSet'
@@ -218,7 +215,7 @@ An `EdgeSet` composite tensor.
get_features_dict
-View
+View
source
@@ -230,12 +227,12 @@ Returns features copy as a dictionary.
replace_features
-View
+View
source
replace_features(
- features: tfgnn.Fields
+ features: Mapping[FieldName, Field]
) -> '_NodeOrEdgeSet'
@@ -258,13 +255,13 @@ Enforce the common prefix shape on all the contained features.
__getitem__
-View
+View
source
__getitem__(
feature_name: FieldName
-) -> tfgnn.Field
+) -> Field
Indexing operator `[]` to access feature values by their name.
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSetSpec.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSetSpec.md
index e9076ed1..207e1a31 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSetSpec.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/EdgeSetSpec.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -118,15 +118,15 @@ Do NOT override for custom non-TF types.
from_field_specs
-View
+View
source
@classmethod
from_field_specs(
*,
- features_spec: Optional[tfgnn.FieldsSpec ] = None,
- sizes_spec: tfgnn.FieldSpec ,
+ features_spec: Optional[FieldsSpec] = None,
+ sizes_spec: FieldSpec,
adjacency_spec: AdjacencySpec
) -> 'EdgeSetSpec'
@@ -299,7 +299,7 @@ and `other`.
relax
-View
+View
source
@@ -376,18 +376,15 @@ Return self==value.
__getitem__
-View
+View
source
__getitem__(
feature_name: FieldName
-) -> tfgnn.FieldSpec
+) -> FieldSpec
-
-
-
__ne__
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensor.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensor.md
index a70a7455..cf836449 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensor.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensor.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -236,13 +236,13 @@ The total number of graph components.
from_pieces
-View
+View
source
@classmethod
from_pieces(
- context: Optional[tfgnn.Context ] = None,
+ context: Optional[Context] = None,
node_sets: Optional[Mapping[NodeSetName, NodeSet]] = None,
edge_sets: Optional[Mapping[EdgeSetName, EdgeSet]] = None
) -> 'GraphTensor'
@@ -253,7 +253,7 @@ Constructs a new `GraphTensor` from context, node sets and edge sets.
merge_batch_to_components
-View
+View
source
@@ -334,7 +334,7 @@ A scalar (rank 0) graph tensor.
remove_features
-View
+View
source
@@ -447,12 +447,12 @@ input graph tensor.
replace_features
-View
+View
source
replace_features(
- context: Optional[tfgnn.Fields ] = None,
+ context: Optional[Fields] = None,
node_sets: Optional[Mapping[NodeSetName, Fields]] = None,
edge_sets: Optional[Mapping[EdgeSetName, Fields]] = None
) -> 'GraphTensor'
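
The `from_pieces`/`from_fields` signatures above (now rendered with the plain `Context`/`Fields` aliases) are the usual way to build a graph by hand. A small sketch of a two-node, one-edge homogeneous graph; names and values are illustrative.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

graph = tfgnn.GraphTensor.from_pieces(
    context=tfgnn.Context.from_fields(
        features={"label": tf.constant([1])}),
    node_sets={
        "nodes": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([2]),
            features={tfgnn.HIDDEN_STATE: tf.constant([[1.0], [2.0]])}),
    },
    edge_sets={
        "edges": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([1]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("nodes", tf.constant([0])),
                target=("nodes", tf.constant([1])))),
    })
```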
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensorSpec.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensorSpec.md
index c6c7f938..141ade00 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensorSpec.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/GraphTensorSpec.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -116,13 +116,13 @@ Do NOT override for custom non-TF types.
from_piece_specs
-View
+View
source
@classmethod
from_piece_specs(
- context_spec: Optional[tfgnn.ContextSpec ] = None,
+ context_spec: Optional[ContextSpec] = None,
node_sets_spec: Optional[Mapping[NodeSetName, NodeSetSpec]] = None,
edge_sets_spec: Optional[Mapping[EdgeSetName, EdgeSetSpec]] = None
) -> 'GraphTensorSpec'
@@ -296,7 +296,7 @@ and `other`.
relax
-View
+View
source
@@ -322,7 +322,7 @@ Calling with all default parameters keeps the spec unchanged.
`num_components`
|
-if True, allows the variable number of graph components.
+if True, allows a variable number of graph components.
|
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSet.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSet.md
index 7b6b3f60..55d71d06 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSet.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSet.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -124,15 +124,13 @@ The total number of items.
from_fields
-View
+View
source
@classmethod
from_fields(
- *,
- features: Optional[tfgnn.Fields ] = None,
- sizes: tfgnn.Field
+ *, features: Optional[Fields] = None, sizes: Field
) -> 'NodeSet'
@@ -202,7 +200,7 @@ A `NodeSet` composite tensor.
get_features_dict
-View
+View
source
@@ -214,12 +212,12 @@ Returns features copy as a dictionary.
replace_features
-View
+View
source
replace_features(
- features: tfgnn.Fields
+ features: Mapping[FieldName, Field]
) -> '_NodeOrEdgeSet'
@@ -242,13 +240,13 @@ Enforce the common prefix shape on all the contained features.
__getitem__
-View
+View
source
__getitem__(
feature_name: FieldName
-) -> tfgnn.Field
+) -> Field
Indexing operator `[]` to access feature values by their name.
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSetSpec.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSetSpec.md
index 151e3d89..d250edfe 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSetSpec.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/NodeSetSpec.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -116,15 +116,13 @@ Do NOT override for custom non-TF types.
from_field_specs
-View
+View
source
@classmethod
from_field_specs(
- *,
- features_spec: Optional[tfgnn.FieldsSpec ] = None,
- sizes_spec: tfgnn.FieldSpec
+ *, features_spec: Optional[FieldsSpec] = None, sizes_spec: FieldSpec
) -> 'NodeSetSpec'
@@ -296,7 +294,7 @@ and `other`.
relax
-View
+View
source
@@ -373,18 +371,15 @@ Return self==value.
__getitem__
-View
+View
source
__getitem__(
feature_name: FieldName
-) -> tfgnn.FieldSpec
+) -> FieldSpec
-
-
-
__ne__
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/all_symbols.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/all_symbols.md
index 145a6aff..e2af57ae 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/all_symbols.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/all_symbols.md
@@ -36,6 +36,7 @@
* tfgnn.broadcast_context_to_edges
* tfgnn.broadcast_context_to_nodes
* tfgnn.broadcast_node_to_edges
+* tfgnn.check_compatible_with_schema_pb
* tfgnn.check_homogeneous_graph_tensor
* tfgnn.check_required_features
* tfgnn.check_scalar_graph_tensor
@@ -44,6 +45,7 @@
* tfgnn.create_schema_pb_from_graph_spec
* tfgnn.dataset_filter_with_summary
* tfgnn.dataset_from_generator
+* tfgnn.experimental
* tfgnn.find_tight_size_constraints
* tfgnn.gather_first_node
* tfgnn.get_io_spec
@@ -63,6 +65,7 @@
* tfgnn.keras.layers.ContextUpdate
* tfgnn.keras.layers.EdgeSetUpdate
* tfgnn.keras.layers.GraphUpdate
+* tfgnn.keras.layers.ItemDropout
* tfgnn.keras.layers.MakeEmptyFeature
* tfgnn.keras.layers.MapFeatures
* tfgnn.keras.layers.NextStateFromConcat
@@ -78,6 +81,7 @@
* tfgnn.keras.layers.SingleInputNextState
* tfgnn.learn_fit_or_skip_size_constraints
* tfgnn.mask_edges
+* tfgnn.node_degree
* tfgnn.pad_to_total_sizes
* tfgnn.parse_example
* tfgnn.parse_schema
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/check_compatible_with_schema_pb.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/check_compatible_with_schema_pb.md
new file mode 100644
index 00000000..6145c33e
--- /dev/null
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/check_compatible_with_schema_pb.md
@@ -0,0 +1,74 @@
+# tfgnn.check_compatible_with_schema_pb
+
+[TOC]
+
+
+
+
+
+Checks that the given spec or value is compatible with the graph schema.
+
+
+tfgnn.check_compatible_with_schema_pb(
+ graph: Union[tfgnn.GraphTensor, tfgnn.GraphTensorSpec],
+ schema: tfgnn.GraphSchema
+) -> None
+
+
+
+
+The `graph` is compatible with the `schema` if
+
+* it is a scalar (rank=0) graph tensor;
+* has a single graph component;
+* has matching sets of nodes and edges;
+* has matching sets of features on all node sets, edge sets, and the context,
+ and their types and shapes are compatible;
+* all adjacencies are of type
+ tfgnn.Adjacency.
+
+
+
+
+
+Args |
+
+
+
+`graph`
+ |
+
+The graph tensor or graph tensor spec.
+ |
+
+
+`schema`
+ |
+
+The graph schema.
+ |
+
+
+
+
+
+
+
+Raises |
+
+
+
+`ValueError`
+ |
+
+if `graph` is not represented by the graph schema.
+ |
+
+
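
A hedged sketch of the typical check, combined with the schema round-trip functions whose docs are also touched in this diff:

```python
import tensorflow_gnn as tfgnn

# graph: a scalar GraphTensor with a single graph component.
schema = tfgnn.create_schema_pb_from_graph_spec(graph)
# Both the value and its spec are compatible with the schema derived from it:
tfgnn.check_compatible_with_schema_pb(graph, schema)
tfgnn.check_compatible_with_schema_pb(graph.spec, schema)  # raises ValueError if not
```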
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/check_homogeneous_graph_tensor.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/check_homogeneous_graph_tensor.md
index ca8c74a0..97c87e3a 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/check_homogeneous_graph_tensor.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/check_homogeneous_graph_tensor.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -17,8 +17,7 @@ Raises ValueError unless there is exactly one node set and edge set.
tfgnn.check_homogeneous_graph_tensor(
- graph: Union[tfgnn.GraphTensor , tfgnn.GraphTensorSpec ],
- name='This operation'
+ graph: Union[GraphTensor, GraphTensorSpec], name='This operation'
) -> None
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/check_scalar_graph_tensor.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/check_scalar_graph_tensor.md
index 5a3fc187..40bd678e 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/check_scalar_graph_tensor.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/check_scalar_graph_tensor.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -15,11 +15,8 @@
tfgnn.check_scalar_graph_tensor(
- graph: Union[tfgnn.GraphTensor , tfgnn.GraphTensorSpec ],
- name='This operation'
+ graph: Union[GraphTensor, GraphTensorSpec], name='This operation'
) -> None
-
-
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/create_graph_spec_from_schema_pb.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/create_graph_spec_from_schema_pb.md
index 74df00bf..66b57f8d 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/create_graph_spec_from_schema_pb.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/create_graph_spec_from_schema_pb.md
@@ -33,7 +33,8 @@ of the same goal. This function converts the proto to the corresponding type
spec.
It is guaranteed that the output graph spec is compatible with the input graph
-schema (as `tfgnn.check_compatible_with_schema_pb()`.)
+schema (as
+tfgnn.check_compatible_with_schema_pb().)
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/create_schema_pb_from_graph_spec.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/create_schema_pb_from_graph_spec.md
index 637e8bef..ac837446 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/create_schema_pb_from_graph_spec.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/create_schema_pb_from_graph_spec.md
@@ -30,10 +30,10 @@ other fields are left unset. (Callers can set them separately before writing out
the schema.)
It is guaranteed that the input graph is compatible with the output graph schema
-(as `tfgnn.check_compatible_with_schema_pb()`.)
+(as
+tfgnn.check_compatible_with_schema_pb().)
-
Args |
@@ -49,7 +49,6 @@ The scalar graph tensor or its spec with single graph component.
-
Returns |
@@ -62,7 +61,6 @@ An instance of the graph schema proto message.
-
Raises |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/dataset_from_generator.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/dataset_from_generator.md
index 842e571e..fa38d480 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/dataset_from_generator.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/dataset_from_generator.md
@@ -42,7 +42,6 @@ print([dataset1]) # prints: pieceA, pieceD.
```
-
Args |
@@ -61,7 +60,6 @@ protocol. Could consist of any nest of tensors and scalar graph pieces
-
Returns |
@@ -74,7 +72,6 @@ A `tf.data.Dataset`.
-
Raises |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/experimental.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/experimental.md
new file mode 100644
index 00000000..97dbe4ac
--- /dev/null
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/experimental.md
@@ -0,0 +1,31 @@
+# Module: tfgnn.experimental
+
+[TOC]
+
+
+
+
+
+Experimental (unstable) parts of the public interface of TensorFlow GNN.
+
+A symbol `foo` exposed here is available to library users as
+
+```
+import tensorflow_gnn as tfgnn
+
+tfgnn.experimental.foo()
+```
+
+This is the preferred way to expose individual functions on track to inclusion
+into the stable public interface of TensorFlow GNN.
+
+Beyond these symbols, there are also experimental sub-libraries that need to be
+imported separately (`from tensorflow_gnn.experimental import foo`). This is for
+special cases only.
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/homogeneous.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/homogeneous.md
index bd3e8518..12b66418 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/homogeneous.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/homogeneous.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -27,13 +27,12 @@ Constructs a homogeneous `GraphTensor` with node features and one edge_set.
edge_set_name: Optional[FieldName] = const.EDGES,
node_set_sizes: Optional[Field] = None,
edge_set_sizes: Optional[Field] = None
-) -> tfgnn.GraphTensor
+) -> GraphTensor
-
Args |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/is_ragged_tensor.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/is_ragged_tensor.md
index d66b391e..d534e1e6 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/is_ragged_tensor.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/is_ragged_tensor.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/ConvGNNBuilder.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/ConvGNNBuilder.md
index 90716515..47224171 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/ConvGNNBuilder.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/ConvGNNBuilder.md
@@ -20,12 +20,10 @@ Factory of layers that do convolutions on a graph.
convolutions_factory: Callable[..., graph_update_lib.EdgesToNodePoolingLayer],
nodes_next_state_factory: Callable[[const.NodeSetName], next_state_lib.NextStateForNodeSet],
*,
- receiver_tag: Optional[const.IncidentNodeOrContextTag] = None
+ receiver_tag: Optional[const.IncidentNodeTag] = None
)
-
-
ConvGNNBuilder object constructs `GraphUpdate` layers, that apply arbitrary
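
A hedged usage sketch under the narrowed `receiver_tag`. The factory signatures are assumptions (a `**kwargs` catch-all is used in case the builder passes `receiver_tag` through to the convolutions factory); the layer widths are illustrative.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn


def convolutions_factory(edge_set_name, *, receiver_tag=tfgnn.TARGET, **kwargs):
  # Assumed to be called once per edge set; sum-pools Dense-transformed messages.
  return tfgnn.keras.layers.SimpleConv(
      tf.keras.layers.Dense(32, "relu"), "sum", receiver_tag=receiver_tag)


def next_state_factory(node_set_name):
  return tfgnn.keras.layers.NextStateFromConcat(tf.keras.layers.Dense(32, "relu"))


builder = tfgnn.keras.ConvGNNBuilder(
    convolutions_factory, next_state_factory,
    receiver_tag=tfgnn.TARGET)        # IncidentNodeTag only, per this change
graph = builder.Convolve()(graph)     # one GraphUpdate over all node sets
```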
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers.md
index 60206410..a6547d36 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers.md
@@ -34,6 +34,9 @@ self-loops to scalar graphs.
[`class GraphUpdate`](../../tfgnn/keras/layers/GraphUpdate.md): Applies one round of updates to EdgeSets, NodeSets and Context.
+[`class ItemDropout`](../../tfgnn/keras/layers/ItemDropout.md): Dropout of
+feature values for entire edges, nodes or components.
+
[`class MakeEmptyFeature`](../../tfgnn/keras/layers/MakeEmptyFeature.md): Returns an empty feature with a shape that fits the input graph piece.
[`class MapFeatures`](../../tfgnn/keras/layers/MapFeatures.md): Transforms features on a GraphTensor by user-defined callbacks.
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/EdgeSetUpdate.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/EdgeSetUpdate.md
index 4203105e..f6ae3e81 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/EdgeSetUpdate.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/EdgeSetUpdate.md
@@ -91,7 +91,6 @@ To pass the default state tensor of the context, set this to
|
-
Call returns |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ItemDropout.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ItemDropout.md
new file mode 100644
index 00000000..e3132f55
--- /dev/null
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ItemDropout.md
@@ -0,0 +1,87 @@
+# tfgnn.keras.layers.ItemDropout
+
+[TOC]
+
+
+
+
+
+Dropout of feature values for entire edges, nodes or components.
+
+
+tfgnn.keras.layers.ItemDropout(
+ rate: float, seed: Optional[int] = None, **kwargs
+)
+
+
+
+
+This Layer class wraps `tf.keras.layers.Dropout` to perform edge dropout or node
+dropout (or "component dropout", which is rarely useful) on Tensors shaped like
+features of a **scalar** GraphTensor.
+
+
+
+
+
+Init args |
+
+
+
+`rate`
+ |
+
+The dropout rate, forwarded to `tf.keras.layers.Dropout`.
+ |
+
+
+`seed`
+ |
+
+The random seed, forwarded to `tf.keras.layers.Dropout`.
+ |
+
+
+
+
+
+
+
+Call args |
+
+
+
+`x`
+ |
+
+A float Tensor of shape `[num_items, *feature_dims]`. This is the shape
+of node features or edge features (or context features) in a **scalar**
+GraphTensor. Across calls, all inputs must have the same known rank.
+ |
+
+
+
+
+
+
+
+Call returns |
+
+
+A Tensor `y` with the same shape and dtype as the input `x`.
+In non-training mode, the output is the same as the input: `y == x`.
+In training mode, each row `y[i]` is either zeros (with probability `rate`)
+or a scaled-up copy of the input row: `y[i] = x[i] * 1./(1-rate)`.
+This is similar to ordinary dropout, except all or none of the feature
+values for each item are dropped out.
+ |
+
+
+
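
A hedged sketch of node dropout with this layer, wired in through the standard `tfgnn.keras.layers.MapFeatures` callback pattern (the callback itself is not part of this page):

```python
import tensorflow_gnn as tfgnn

item_dropout = tfgnn.keras.layers.ItemDropout(rate=0.1)


def drop_node_states(node_set, *, node_set_name):
  # In training mode, whole rows of shape [num_nodes, state_dim] are zeroed
  # with probability 0.1; surviving rows are scaled by 1 / (1 - 0.1).
  features = dict(node_set.features)
  features[tfgnn.HIDDEN_STATE] = item_dropout(features[tfgnn.HIDDEN_STATE])
  return features


graph = tfgnn.keras.layers.MapFeatures(node_sets_fn=drop_node_states)(graph)
```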
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/MakeEmptyFeature.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/MakeEmptyFeature.md
index 51ddf270..242b606a 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/MakeEmptyFeature.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/MakeEmptyFeature.md
@@ -63,7 +63,6 @@ a Context, NodeSet or EdgeSet from a GraphTensor.
-
Call returns |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/Readout.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/Readout.md
index 8342a854..6de78c2f 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/Readout.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/Readout.md
@@ -47,10 +47,8 @@ unset to select tfgnn.HIDDEN_STATE.
#### For example:
-
-
```python
-readout = tfgnn.keras.layers.Readout(feature="value")
+readout = tfgnn.keras.layers.Readout(feature_name="value")
value = readout(graph_tensor, edge_set_name="edges")
assert value == graph_tensor.edge_sets["edges"]["value"]
```
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ResidualNextState.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ResidualNextState.md
index c846ebe8..1145f376 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ResidualNextState.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/ResidualNextState.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -30,10 +30,15 @@ Updates a state with a residual block.
This layer concatenates all inputs, sends them through a user-supplied
-transformation, forms a skip connection by adding back the state of the
-updated graph piece, and finally applies an activation function.
-In other words, the user-supplied transformation is a residual block
-that modifies the state.
+transformation, forms a skip connection by adding back the state of the updated
+graph piece, and finally applies an activation function. In other words, the
+user-supplied transformation is a residual block that modifies the state. The
+output shape of the residual block must match the shape of the state that gets
+updated so that they can be added.
+
+If the initial state of the graph piece that is being updated has size 0, the
+skip connection is omitted. This avoids the need to special-case, say, latent
+node sets in modeling code applied to different node sets.
@@ -46,18 +51,18 @@ that modifies the state.
Required. A Keras Layer to transform the concatenation
-of all inputs into a delta that gets added to the state. Notice that
-the activation function is applied after the residual_block and the
-addition, so typically the residual_block does *not* use an activation
-function in its last layer.
+of all inputs into a delta that gets added to the state.
|
`activation`
|
-An activation function (none by default),
-as understood by tf.keras.layers.Activation.
+An activation function (none by default), as understood by
+`tf.keras.layers.Activation`. This activation function is applied after
+the residual block and the addition. If using this, typically the
+residual block does not have an activation function on its last layer,
+or vice versa.
|
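
In code, the clarified contract reads roughly as below: the residual block's output width must match the updated state, and the optional activation is applied after the addition. A hedged sketch; layer widths and edge-set names are illustrative.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

state_dim = 128  # must equal the width of the node state being updated

next_state = tfgnn.keras.layers.ResidualNextState(
    tf.keras.layers.Dense(state_dim),  # residual block: no final activation here,
    activation="relu")                 # because "relu" is applied after the addition

node_set_update = tfgnn.keras.layers.NodeSetUpdate(
    {"edges": tfgnn.keras.layers.SimpleConv(
        tf.keras.layers.Dense(state_dim, "relu"), "sum",
        receiver_tag=tfgnn.TARGET)},
    next_state)
```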
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SimpleConv.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SimpleConv.md
index 3aff814e..37370c07 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SimpleConv.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SimpleConv.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
@@ -245,7 +245,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.
convolve
-View
+View
source
@@ -257,6 +257,7 @@ source
broadcast_from_sender_node: Callable[[tf.Tensor], tf.Tensor],
broadcast_from_receiver: Callable[[tf.Tensor], tf.Tensor],
pool_to_receiver: Callable[..., tf.Tensor],
+ extra_receiver_ops: Any = None,
training: bool
) -> tf.Tensor
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SingleInputNextState.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SingleInputNextState.md
index 8388d80d..efb8efaf 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SingleInputNextState.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/keras/layers/SingleInputNextState.md
@@ -6,7 +6,7 @@
-
+
View source on GitHub
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/mask_edges.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/mask_edges.md
index e60b4462..a385fa63 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/mask_edges.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/mask_edges.md
@@ -33,7 +33,6 @@ Edge masking doesn't change the node sets or the context node information.
Not compatible with XLA.
-
Args |
@@ -72,7 +71,6 @@ new edge-set, with name masked_info_edge_set_name.
-
Returns |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/node_degree.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/node_degree.md
new file mode 100644
index 00000000..31dbc255
--- /dev/null
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/node_degree.md
@@ -0,0 +1,74 @@
+# tfgnn.node_degree
+
+[TOC]
+
+
+
+
+
+Returns the degree of each node w.r.t. one side of an edge set.
+
+
+tfgnn.node_degree(
+ graph_tensor: tfgnn.GraphTensor,
+ edge_set_name: EdgeSetName,
+ node_tag: IncidentNodeTag
+) -> tfgnn.Field
+
+
+
+
+
+
+
+Args |
+
+
+
+`graph_tensor`
+ |
+
+A scalar GraphTensor.
+ |
+
+
+`edge_set_name`
+ |
+
+The name of the edge set for which degrees are calculated.
+ |
+
+
+`node_tag`
+ |
+
+The side of each edge for which the degrees are calculated,
+specified by its tag in the edge set (e.g., tfgnn.SOURCE,
+tfgnn.TARGET).
+ |
+
+
+
+
+
+
+
+Returns |
+
+
+An integer Tensor of shape `[num_nodes]` and dtype equal to `indices_dtype`
+of the GraphTensor. Element `i` contains the number of edges in the given
+edge set that have node index `i` as their endpoint with the given
+`node_tag`. The dimension `num_nodes` is the number of nodes in the
+respective node set.
+ |
+
+
+
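
A short usage sketch (the edge-set name is illustrative):

```python
import tensorflow_gnn as tfgnn

# Degrees w.r.t. each side of the "edges" edge set of a scalar GraphTensor:
in_degree = tfgnn.node_degree(graph, edge_set_name="edges", node_tag=tfgnn.TARGET)
out_degree = tfgnn.node_degree(graph, edge_set_name="edges", node_tag=tfgnn.SOURCE)
# Both have shape [num_nodes] and the GraphTensor's indices_dtype.
```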
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/reorder_nodes.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/reorder_nodes.md
index 57d22232..9e09763d 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/reorder_nodes.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/reorder_nodes.md
@@ -25,8 +25,8 @@ Reorders nodes within node sets according to indices.
-
+
Args |
@@ -59,7 +59,6 @@ If True, checks that `node_indices` are valid permutations.
-
Returns |
@@ -72,7 +71,6 @@ A scalar GraphTensor with randomly shuffled nodes within `node_sets`.
-
Raises |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_features_globally.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_features_globally.md
index 33e82e7a..ae032b9b 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_features_globally.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_features_globally.md
@@ -24,6 +24,7 @@ Shuffles context, node set and edge set features of a scalar GraphTensor.
+
@@ -48,7 +49,6 @@ A seed for random uniform shuffle.
-
Returns |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_nodes.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_nodes.md
index 0edd0c1e..9b02a7bc 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_nodes.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/shuffle_nodes.md
@@ -32,7 +32,6 @@ the new order of shuffled nodes. The order of graph components (as created by
within each component.
-
Args |
@@ -63,7 +62,6 @@ A seed for random uniform shuffle.
-
Returns |
@@ -76,7 +74,6 @@ A scalar GraphTensor with randomly shuffled nodes within `node_sets`.
-
Raises |
diff --git a/tensorflow_gnn/docs/api_docs/python/tfgnn/write_example.md b/tensorflow_gnn/docs/api_docs/python/tfgnn/write_example.md
index 9dfffd05..e2279ed5 100644
--- a/tensorflow_gnn/docs/api_docs/python/tfgnn/write_example.md
+++ b/tensorflow_gnn/docs/api_docs/python/tfgnn/write_example.md
@@ -31,9 +31,10 @@ Python job. Create instances of scalar `GraphTensor` with a single graph
component and call this to write them out. It is recommended to always accompany
serialized graph tensor tensorflow examples by their graph schema file (see
tfgnn.create_schema_pb_from_graph_spec() ).
-TF-GNN library provides `tfgnn.check_compatible_with_schema_pb()` to check that
-graph tensor instances (or their specs) are compatible with the graph schema.
-The graph tensors materialized in this way will be parseable by
+The TF-GNN library provides
+tfgnn.check_compatible_with_schema_pb()
+to check that graph tensor instances (or their specs) are compatible with the
+graph schema. The graph tensors materialized in this way will be parseable by
tfgnn.parse_example()
(using the spec deserialized from the schema) and have the same contents (up to
the choice of indices_dtype).
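
A hedged sketch of that round trip, combining the functions named above (`tfgnn.write_example`, the schema helpers, and `tfgnn.parse_example`); the size-1 batch is illustrative.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# graph: a scalar GraphTensor with a single graph component.
schema = tfgnn.create_schema_pb_from_graph_spec(graph)
tfgnn.check_compatible_with_schema_pb(graph, schema)

example = tfgnn.write_example(graph)                     # tf.train.Example proto
serialized = tf.constant([example.SerializeToString()])  # batch of size 1

spec = tfgnn.create_graph_spec_from_schema_pb(schema)
parsed = tfgnn.parse_example(spec, serialized)           # GraphTensor of shape [1]
```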