
ValueError: Shape of variable bert/embeddings/LayerNorm/beta:0 ((768,)) doesn't match with shape of tensor bert/embeddings/LayerNorm/beta ([312]) from checkpoint reader. #8

Open
learnpythontheew opened this issue Oct 23, 2020 · 1 comment

@learnpythontheew

When running step 4, I hit the following error:
ValueError: Shape of variable bert/embeddings/LayerNorm/beta:0 ((768,)) doesn't match with shape of tensor bert/embeddings/LayerNorm/beta ([312]) from checkpoint reader.

I ran run_lasertagger.py with the lasertagger model config, using the RoBERTa-tiny-clue model downloaded from the link you provided.

Where did things go wrong?

@tongchangD (Owner)

The model dimensions don't match. Sorry, the lasertagger_config.json in the folder I uploaded does have the right contents, but that file is missing on GitHub.
You can modify lasertagger_config.json as follows:
{
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 312,
  "initializer_range": 0.02,
  "intermediate_size": 1248,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 4,
  "type_vocab_size": 2,
  "vocab_size": 8021,
  "use_t2t_decoder": true,
  "decoder_num_hidden_layers": 1,
  "decoder_hidden_size": 768,
  "decoder_num_attention_heads": 4,
  "decoder_filter_size": 3072,
  "use_full_attention": false
}
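The root cause is that the config's encoder width (768) did not match the checkpoint's 312-dimensional tensors. As a quick sanity check before retraining, you can load the corrected config and verify that the dimensions line up with the shape reported in the error. This is a minimal Python sketch; the values are copied verbatim from the JSON above:

```python
import json

# Corrected lasertagger_config.json contents, copied from above.
config = json.loads("""
{
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 312,
  "initializer_range": 0.02,
  "intermediate_size": 1248,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 4,
  "type_vocab_size": 2,
  "vocab_size": 8021,
  "use_t2t_decoder": true,
  "decoder_num_hidden_layers": 1,
  "decoder_hidden_size": 768,
  "decoder_num_attention_heads": 4,
  "decoder_filter_size": 3072,
  "use_full_attention": false
}
""")

# The checkpoint error reported a [312] tensor for
# bert/embeddings/LayerNorm/beta, so the encoder width in the config
# must be 312 to load RoBERTa-tiny-clue.
assert config["hidden_size"] == 312

# hidden_size must also split evenly across the attention heads.
assert config["hidden_size"] % config["num_attention_heads"] == 0
print(config["hidden_size"] // config["num_attention_heads"])  # per-head size
```

Note that intermediate_size (1248) follows the usual 4 × hidden_size convention, so the feed-forward layers stay consistent with the 312-wide encoder as well.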
