
Documentation #11

Open
jayenashar opened this issue Jun 27, 2020 · 4 comments

Comments

@jayenashar

Is the README the only documentation? In particular, I'm looking to create a tiny_dnn-compatible 2D convolutional layer using CompiledNN, in order to benchmark it against a new one I created ("tiny_jnn"), but I don't want to replace the rest of our pipeline - just the one layer.

@fthielke
Collaborator

Yes, currently our only documentation is the README and the comments in the code.

Afaik, tiny_dnn's convolutional layer supports the same parameters as our Conv2D, so I suppose you would want to instantiate a CompiledNN model with just that one layer.
The problem, though, is that we currently only support loading Keras models, with no way to construct a Model yourself, which is admittedly a needless constraint.
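
For reference, the Keras-loading path looks roughly like this minimal sketch (assuming the include paths and the NeuralNetwork namespace as used in the README; model.h5 is a placeholder file name):

#include <CompiledNN/CompiledNN.h>
#include <CompiledNN/Model.h>

using namespace NeuralNetwork;

int main()
{
  // Load a Keras model from an HDF5 file and compile the whole network.
  Model model;
  model.load("model.h5");
  CompiledNN nn;
  nn.compile(model);

  // ... fill nn.input(0) ...
  nn.apply();
  // ... read the result from nn.output(0) ...
}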

@jayenashar
Author

I think the only extra parameter in tiny_dnn would be dilation, which I saw somewhere is not supported in CompiledNN. But that's OK, as most of our layers have no dilation.

I thought I could use the Conv2DCompiler class directly, but I couldn't figure out how. I looked at https://github.com/bhuman/CompiledNN/blob/master/Src/CompiledNN/CompiledNN.cpp#L160 as well.

@ahasselbring
Member

You can also compile a single node; see for example https://github.com/bhuman/CompiledNN/blob/master/Tests/Layers/UpSampling2D.cpp. I have an unfinished Conv2D test; this is an excerpt from it:

  static const Node& buildNode(Conv2DLayer* l, const std::array<unsigned int, 2>& strides, const std::array<unsigned int, 2>& kernelSize,
                               bool hasBiases, ActivationFunctionId activation, PaddingType padding, unsigned int filters,
                               unsigned int height, unsigned int width, unsigned int channels)
  {
    // Set the layer parameters.
    l->nodes.clear();
    l->strides = strides;
    l->weights.reshape(kernelSize[0], kernelSize[1], channels, filters);
    l->biases.resize(hasBiases ? filters : 0);
    l->hasBiases = hasBiases;
    l->activationId = activation;
    l->padding = padding;

    // Create the layer's single node and declare its input dimensions.
    l->nodes.emplace_back(l);
    Node& n = l->nodes.back();
    n.inputs.emplace_back(nullptr, 0, 0);
    n.inputDimensions.push_back({height, width, channels});

    // Let the layer derive the output dimensions and attach the outputs.
    l->calcOutputDimensions(n);
    for(std::size_t i = 0; i < n.outputDimensions.size(); ++i)
      n.outputs.emplace_back(l, 0, i);
    return n;
  }

called by

CompiledNN c;
Conv2DLayer l;
const Node& n = buildNode(&l, {stride, stride}, {kernelSize, kernelSize}, true, ...);
// ... copy weights to l.weights, copy biases to l.biases
c.compile(n, CompilationSettings());
// ... fill c.input(0)
c.apply();
// ... obtain output from c.output(0)

@jayenashar
Copy link
Author

Oh, fantastic. I will figure out how to make this work as a tiny_dnn layer and report back.
