I am trying to use the motif-finding module of DNABERT. I have a pre-trained DNABERT model, and I have managed to extract the attentions from this pretrained model for the test set.

The attention variable in the model's output is a tuple of length 12, where each element holds the attention of one layer. For each layer, the attention has shape [N, 12, max_seq_len, max_seq_len], where N is the size of the test set and 12 is the number of attention heads.

I would now like to convert these output attentions into a numpy array like the 'atten.npy' file that is fed into the motif-finding module. Is there a module in the code that performs this conversion? More broadly, I would like to understand how 'atten.npy' is derived from the attention outputs of the DNABERT model.
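In case it helps frame the question, here is a minimal sketch of one plausible conversion, assuming `atten.npy` stores a per-token attention score of shape [N, max_seq_len] obtained from the last layer, averaged over heads and summed over the query dimension. This is an assumption about the file's layout, not DNABERT's actual implementation; the function name `attentions_to_npy` is hypothetical.

```python
import numpy as np

def attentions_to_npy(attentions, layer=-1):
    """Collapse a per-layer attention tuple into one [N, seq_len] array.

    `attentions` is the tuple returned by the model: one entry per layer,
    each of shape [N, num_heads, seq_len, seq_len]. If the entries are
    torch tensors, convert each with `.detach().cpu().numpy()` first.
    Assumed reduction (not verified against DNABERT's code): pick one
    layer, average over heads, then sum over the query dimension so each
    token gets the total attention it receives.
    """
    att = np.asarray(attentions[layer])  # [N, num_heads, L, L]
    att = att.mean(axis=1)               # average heads -> [N, L, L]
    scores = att.sum(axis=1)             # attention received per token -> [N, L]
    return scores

# Toy example: 12 layers, N=2 sequences, 12 heads, seq_len=5.
rng = np.random.default_rng(0)
attns = tuple(rng.random((2, 12, 5, 5)) for _ in range(12))
atten = attentions_to_npy(attns)
np.save("atten.npy", atten)  # same file name the motif module reads
print(atten.shape)           # prints (2, 5)
```

If the motif module instead expects only the attention paid by the [CLS] token, the reduction would take row 0 of the attention matrix (`att[:, 0, :]`) rather than summing over queries; which reduction DNABERT actually uses is exactly what this issue is asking.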