
On memory retention #196

Open
xulin-1029 opened this issue May 12, 2023 · 4 comments

Comments

@xulin-1029

Hello author, you wrote that for continuous operation on a microcontroller, timestep should be set to 1 and stateful to true. Does a network configured this way perform well in practice? Have you compared it against a longer timestep, e.g. around 10 s?

@xulin-1029
Author

Put another way: is there any difference between a 10 s frame with timestep = 10 s and stateful = true, versus timestep = 1 frame and stateful = true?

@majianjia
Owner

As I understand it, the difference is mainly during training. In actual use, as long as you never clear the state, the network keeps its memory.
After quantization, however, that memory may fade faster.
I may be wrong about this.

@songdaw
Contributor

songdaw commented May 17, 2023

During training the data is fully known, so you can set stateful=false and feed a 10 s sequence in one pass with timestep = 10 s.
At inference time, set stateful=true and feed one frame at a time. I have tested this in practice and it works; the error after quantization is small.

@bfs18
Contributor

bfs18 commented May 18, 2023


void reset_rnn_buffer(struct dnn_model_t *model)
{
    nnom_layer_t *layer;
    nnom_rnn_layer_t *cl;
    size_t state_size;
    uint32_t layer_num = 1;

    /* walk the layer list and zero every RNN layer's state buffer */
    layer = model->nnom_model->head;
    while (layer) {
        if (layer->type == NNOM_RNN) {
            /* printf("rnn layer reset %d\n", layer_num); */
            cl = (nnom_rnn_layer_t *)layer;
            state_size = cl->cell->state_size;
            /* the buffer holds two copies of the state, hence * 2 */
            nnom_memset(cl->state_buf, 0, state_size * 2);
        }
        layer = layer->shortcut;
        layer_num += 1;
    }
}

This is a function to reset the RNN state. Call it before running a new sample if you want to clear the memory.
