Refresh api_docs at commit 3f6e17a
PiperOrigin-RevId: 505144955
arnoegw authored and tensorflower-gardener committed Jan 27, 2023
1 parent 3f6e17a commit 9e0b8e2
Showing 53 changed files with 535 additions and 170 deletions.
@@ -20,7 +20,7 @@ Returns a GraphUpdate layer with a Graph Attention Network V2 (GATv2).
*,
num_heads: int,
per_head_channels: int,
-receiver_tag: tfgnn.IncidentNodeOrContextTag,
+receiver_tag: tfgnn.IncidentNodeTag,
feature_name: str = tfgnn.HIDDEN_STATE,
heads_merge_type: str = 'concat',
name: str = 'gat_v2',
Original file line number Diff line number Diff line change
@@ -22,7 +22,7 @@ Returns a GraphUpdate layer for message passing with GATv2 pooling.
message_dim: int,
num_heads: int,
heads_merge_type: str = 'concat',
-receiver_tag: tfgnn.IncidentNodeOrContextTag,
+receiver_tag: tfgnn.IncidentNodeTag,
node_set_names: Optional[Collection[tfgnn.NodeSetName]] = None,
edge_feature: Optional[tfgnn.FieldName] = None,
l2_regularization: float = 0.0,
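Since this page shows only signature fragments, here is a minimal sketch of the narrowed `receiver_tag` contract; the layer name `GATv2MPNNGraphUpdate` is inferred from the hunk above, and the hyperparameter values are illustrative:

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import gat_v2

# receiver_tag must now be an incident-node tag (tfgnn.SOURCE or
# tfgnn.TARGET); tfgnn.CONTEXT is no longer part of the annotation.
layer = gat_v2.GATv2MPNNGraphUpdate(
    units=128,
    message_dim=128,
    num_heads=4,
    receiver_tag=tfgnn.TARGET,  # pool messages at each edge's target node
)
# graph = layer(graph)  # applied to a scalar tfgnn.GraphTensor
```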
15 changes: 13 additions & 2 deletions tensorflow_gnn/docs/api_docs/python/models/gcn/GCNConv.md
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/gcn/gcn_conv.py#L28-L191">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/gcn/gcn_conv.py#L28-L218">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -27,6 +27,7 @@ Implements the Graph Convolutional Network by Kipf&Welling (2016).
kernel_initializer: bool = None,
node_feature: Optional[str] = tfgnn.HIDDEN_STATE,
kernel_regularizer: Optional[_RegularizerType] = None,
+edge_weight_feature_name: Optional[tfgnn.FieldName] = None,
**kwargs
)
</code></pre>
@@ -84,7 +85,8 @@ with Keras and other implementations.
</td>
<td>
Whether to compute the result as if a loop from each node
-to itself had been added to the edge set.
+to itself had been added to the edge set. The self-loop edges are added
+with an edge weight of one.
</td>
</tr><tr>
<td>
@@ -109,6 +111,15 @@ Name of the node feature to transform.
</td>
</tr><tr>
<td>
+`edge_weight_feature_name`<a id="edge_weight_feature_name"></a>
+</td>
+<td>
+Can be set to the name of a feature on the edge
+set that supplies a scalar weight for each edge. The GCN computation uses
+it as the edge's entry in the adjacency matrix, instead of the default 1.
+</td>
+</tr><tr>
+<td>
`**kwargs`<a id="**kwargs"></a>
</td>
<td>
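Taken together, the new argument and the clarified self-loop text suggest usage like the following minimal sketch; the edge feature name "weight" is an assumption for illustration:

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models.gcn import gcn_conv

# Each edge's scalar "weight" feature is used as its adjacency-matrix entry;
# self-loops added by add_self_loops=True enter with weight 1.0.
conv = gcn_conv.GCNConv(
    units=32,
    add_self_loops=True,
    edge_weight_feature_name="weight",
)
```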
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/gcn/gcn_conv.py#L194-L251">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/gcn/gcn_conv.py#L221-L278">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L253-L469">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L257-L473">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L25-L122">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L25-L124">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -137,7 +137,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.

<h3 id="convolve"><code>convolve</code></h3>

-<a target="_blank" class="external" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L109-L122">View
+<a target="_blank" class="external" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L109-L124">View
source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
@@ -149,6 +149,7 @@
broadcast_from_sender_node: Callable[[tf.Tensor], tf.Tensor],
broadcast_from_receiver: Callable[[tf.Tensor], tf.Tensor],
pool_to_receiver: Callable[..., tf.Tensor],
+extra_receiver_ops: Any = None,
training: bool
) -> tf.Tensor
</code></pre>
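For subclasses that override `convolve`, the widened signature just threads through one more keyword. A hedged sketch — the base class `GraphSAGEAggregatorConv` and the leading arguments are assumptions, since only the tail of the signature is visible above:

```python
from typing import Any, Callable, Optional

import tensorflow as tf
from tensorflow_gnn.models import graph_sage


class MyMeanConv(graph_sage.GraphSAGEAggregatorConv):  # assumed base class
  def convolve(self, *,
               sender_node_input: Optional[tf.Tensor],
               sender_edge_input: Optional[tf.Tensor],
               receiver_input: Optional[tf.Tensor],
               broadcast_from_sender_node: Callable[[tf.Tensor], tf.Tensor],
               broadcast_from_receiver: Callable[[tf.Tensor], tf.Tensor],
               pool_to_receiver: Callable[..., tf.Tensor],
               extra_receiver_ops: Any = None,  # new in this refresh
               training: bool) -> tf.Tensor:
    # Broadcast sender node states onto edges, then mean-pool back to the
    # receivers; this sketch simply accepts and ignores extra_receiver_ops.
    messages = broadcast_from_sender_node(sender_node_input)
    return pool_to_receiver(messages, reduce_type="mean")
```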
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L472-L585">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L476-L589">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L588-L739">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L592-L743">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L125-L250">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L127-L254">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -173,7 +173,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.

<h3 id="convolve"><code>convolve</code></h3>

-<a target="_blank" class="external" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L235-L250">View
+<a target="_blank" class="external" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/graph_sage/layers.py#L237-L254">View
source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
@@ -185,6 +185,7 @@
broadcast_from_sender_node: Callable[[tf.Tensor], tf.Tensor],
broadcast_from_receiver: Callable[[tf.Tensor], tf.Tensor],
pool_to_receiver: Callable[..., tf.Tensor],
+extra_receiver_ops: Any = None,
training: bool
) -> tf.Tensor
</code></pre>
@@ -37,3 +37,9 @@ Returns a GraphUpdate layer with a transformer-style multihead attention.

[`MultiHeadAttentionMPNNGraphUpdate(...)`](./multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md):
Returns a GraphUpdate layer for message passing with MultiHeadAttention pooling.
+
+[`graph_update_from_config_dict(...)`](./multi_head_attention/graph_update_from_config_dict.md):
+Returns a MultiHeadAttentionMPNNGraphUpdate initialized from `cfg`.
+
+[`graph_update_get_config_dict(...)`](./multi_head_attention/graph_update_get_config_dict.md):
+Returns ConfigDict for graph_update_from_config_dict() with defaults.
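A minimal sketch of the round trip these two new entries suggest; the field names are assumptions mirroring the `MultiHeadAttentionMPNNGraphUpdate` signature, since the diff shows only the index entries:

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

cfg = multi_head_attention.graph_update_get_config_dict()
cfg.units = 128        # assumed ConfigDict fields, mirroring the
cfg.message_dim = 128  # MultiHeadAttentionMPNNGraphUpdate arguments
cfg.num_heads = 4
cfg.receiver_tag = tfgnn.TARGET
layer = multi_head_attention.graph_update_from_config_dict(cfg)
```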
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L23-L518">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L24-L558">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -31,7 +31,7 @@ Transformer-style (dot-product) multi-head attention on GNNs.
kernel_initializer: Union[None, str, tf.keras.initializers.Initializer] = None,
kernel_regularizer: Union[None, str, tf.keras.regularizers.Regularizer] = None,
transform_keys: bool = True,
-score_scaling: bool = True,
+score_scaling: Literal['none', 'rsqrt_dim', 'trainable_sigmoid'] = 'rsqrt_dim',
transform_values_after_pooling: bool = False,
**kwargs
)
@@ -85,13 +85,13 @@ Note that in the context of graph, only nodes with edges connected are attended
to each other, which means we do NOT compute $N^2$ pairs of scores as the
original Transformer-style Attention.

-Users are able to remove the scaling of attention scores (score_scaling=False)
-or add an activation on the transformed query (controled by
-`attention_activation`). However, we recommend to remove the scaling when using
-an `attention_activation` since activating both of them may lead to degrated
-accuracy. One can also customize the transformation kernels with different
-intializers, regularizers as well as the use of bias terms, using the other
-arguments.
+Users are able to remove the scaling of attention scores
+(`score_scaling="none"`) or add an activation on the transformed query
+(controlled by `attention_activation`). However, we recommend to remove the
+scaling when using an `attention_activation` since activating both of them may
+lead to degraded accuracy. One can also customize the transformation kernels
+with different initializers, regularizers as well as the use of bias terms,
+using the other arguments.

Example: Transformer-style attention on neighbors along incoming edges whose
result is concatenated with the old node state and passed through a Dense layer
@@ -125,7 +125,6 @@ could potentially be beneficial:
```

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Init args</h2></th></tr>
@@ -256,9 +255,15 @@ independent of this arg.)
`score_scaling`<a id="score_scaling"></a>
</td>
<td>
-If true, the attention scores are divided by the square root
-of the dimension of keys (i.e., per_head_channels if transform_keys=True,
-else whatever the dimension of combined sender inputs is).
+One of either `"none"`, `"rsqrt_dim"`, or
+`"trainable_sigmoid"`. If set to `"rsqrt_dim"`, the attention scores are
+divided by the square root of the dimension of keys (i.e.,
+`per_head_channels` if `transform_keys=True`, otherwise whatever the
+dimension of combined sender inputs is). If set to `"trainable_sigmoid"`,
+the scores are scaled with `sigmoid(x)`, where `x` is a trainable weight
+of the model that is initialized to `-5.0`, which initially makes all the
+attention weights equal and slowly ramps up as the other weights in the
+layer converge. Defaults to `"rsqrt_dim"`.
</td>
</tr><tr>
<td>
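Following the description above, a minimal sketch of choosing among the three `score_scaling` modes (graph wiring assumed; layer and argument names as in the signature hunk above):

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

# "rsqrt_dim" is the default; "none" turns scaling off (e.g. when an
# attention_activation is set); "trainable_sigmoid" learns the scale from a
# weight initialized to -5.0, ramping attention up slowly during training.
conv = multi_head_attention.MultiHeadAttentionConv(
    num_heads=4,
    per_head_channels=16,
    receiver_tag=tfgnn.TARGET,
    score_scaling="trainable_sigmoid",
)
```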
@@ -270,13 +275,14 @@ the value transformation, then pools with attention coefficients.
Setting this option pools inputs with attention coefficients, then applies
the transformation. This is mathematically equivalent but can be faster
or slower to compute, depending on the platform and the dataset.
-IMPORANT: Toggling this option breaks checkpoint compatibility.
+IMPORTANT: Toggling this option breaks checkpoint compatibility.
+IMPORTANT: Setting this option requires TensorFlow 2.10 or greater,
+because it uses `tf.keras.layers.EinsumDense`.
</td>
</tr>
</table>

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
@@ -335,7 +341,6 @@ Forwarded to the base class tf.keras.layers.Layer.
</table>

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>
@@ -368,7 +373,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.

<h3 id="convolve"><code>convolve</code></h3>

-<a target="_blank" class="external" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L332-L490">View
+<a target="_blank" class="external" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L354-L530">View
source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
@@ -394,7 +399,6 @@ from nodes to context). In the end, values have to be pooled from there into a
Tensor with a leading dimension indexed by receivers, see `pool_to_receiver`.

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
@@ -482,7 +486,6 @@ does not require forwarding this arg, Keras does that automatically.
</table>

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L521-L574">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L561-L614">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -41,7 +41,6 @@ an edge set to do the analogous pooling of edge states to context.
NOTE: This layer cannot pool node states. For that, use MultiHeadAttentionConv.

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
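A hedged sketch of this layer's typical use, pooling edge states into the context; the constructor arguments are assumed to parallel `MultiHeadAttentionConv`, since only the source link changed here:

```python
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

# Pools edge states into the context; per the NOTE above, node states
# require MultiHeadAttentionConv instead.
pool = multi_head_attention.MultiHeadAttentionEdgePool(
    num_heads=4,
    per_head_channels=16,
    receiver_tag=tfgnn.CONTEXT,
)
```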
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L578-L640">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L618-L680">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -20,7 +20,7 @@ Returns a GraphUpdate layer with a transformer-style multihead attention.
*,
num_heads: int,
per_head_channels: int,
-receiver_tag: tfgnn.IncidentNodeOrContextTag,
+receiver_tag: tfgnn.IncidentNodeTag,
feature_name: str = tfgnn.HIDDEN_STATE,
name: str = 'multi_head_attention',
**kwargs
@@ -43,7 +43,6 @@ details).
> itself requires having an explicit loop in the edge set.
<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
@@ -6,7 +6,7 @@

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
-<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L643-L736">
+<a target="_blank" href="https://github.com/tensorflow/gnn/tree/master/tensorflow_gnn/models/multi_head_attention/layers.py#L683-L776">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
@@ -21,7 +21,7 @@ Returns a GraphUpdate layer for message passing with MultiHeadAttention pooling.
units: int,
message_dim: int,
num_heads: int,
-receiver_tag: tfgnn.IncidentNodeOrContextTag,
+receiver_tag: tfgnn.IncidentNodeTag,
node_set_names: Optional[Collection[tfgnn.NodeSetName]] = None,
edge_feature: Optional[tfgnn.FieldName] = None,
l2_regularization: float = 0.0,
@@ -45,7 +45,6 @@ and all pooled messages, analogous to TF-GNN's
`vanilla_mpnn.VanillaMPNNGraphUpdate` and `gat_v2.GATv2MPNNGraphUpdate`.

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
@@ -159,7 +158,6 @@ Can be set to a `kerner_initializer` as understood by
</table>

<!-- Tabular view -->
-
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Returns</h2></th></tr>
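To round out the `receiver_tag` change in context, a sketch of this GraphUpdate inside a Keras model; the `graph_spec` type spec is assumed to be defined elsewhere:

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn
from tensorflow_gnn.models import multi_head_attention

inputs = tf.keras.Input(type_spec=graph_spec)  # graph_spec: assumed GraphTensorSpec
outputs = multi_head_attention.MultiHeadAttentionMPNNGraphUpdate(
    units=128,
    message_dim=128,
    num_heads=4,
    receiver_tag=tfgnn.TARGET,  # incident-node tag, per the updated signature
)(inputs)
model = tf.keras.Model(inputs, outputs)
```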
@@ -9,3 +9,5 @@
* <a href="../multi_head_attention/MultiHeadAttentionEdgePool.md"><code>multi_head_attention.MultiHeadAttentionEdgePool</code></a>
* <a href="../multi_head_attention/MultiHeadAttentionHomGraphUpdate.md"><code>multi_head_attention.MultiHeadAttentionHomGraphUpdate</code></a>
* <a href="../multi_head_attention/MultiHeadAttentionMPNNGraphUpdate.md"><code>multi_head_attention.MultiHeadAttentionMPNNGraphUpdate</code></a>
+* <a href="../multi_head_attention/graph_update_from_config_dict.md"><code>multi_head_attention.graph_update_from_config_dict</code></a>
+* <a href="../multi_head_attention/graph_update_get_config_dict.md"><code>multi_head_attention.graph_update_get_config_dict</code></a>