[BUG] Stack Smashing on stack allocated tensors #14
Comments
A new branch has been created to overhaul MagmaDNN's memory system. New branch: memory-fixes. The goal of this branch is to replace most pointer use in MagmaDNN with references or smart pointers, which aligns with a modern C++ approach.
This is part of the effort for #14 and milestone v1.2.
TODO -- a lot of the old macros need to be updated to handle Tensor as well as Tensor*. Part of the work towards #14.
Better support for the binary and unary ops, in addition to GPU support. Part of #14.
Compile the math portion of the library as it comes up to speed with the rest of the framework, in an effort to fix issue #14.
Some math:: functions are refactored to take const-correct Tensor& references. This is part of the changes to address issue #14.
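As a rough sketch of what such a signature change could look like (the add routine and the Tensor layout below are placeholders for illustration, not MagmaDNN's actual API), a pointer-based math function becomes a const-correct one taking references:

```cpp
#include <cstddef>
#include <vector>

// Placeholder Tensor type for illustration only.
struct Tensor {
    std::vector<float> data;
    std::size_t size() const { return data.size(); }
};

// Old style (hypothetical): raw pointers, nothing marked const.
//   void add(Tensor* a, Tensor* b, Tensor* out);

// const-correct style: inputs are read-only references, the output is explicit.
void add(const Tensor& a, const Tensor& b, Tensor& out) {
    out.data.resize(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out.data[i] = a.data[i] + b.data[i];
}
```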
… const correct. This is part of the memory fixes for #14.
Here is an outline of how MagmaDNN's memory management will be refactored into a more modern C++ style.

Remove pointers where possible
In MagmaDNN v1.0, just about everything works with pointers. All of the … This is a C-style and unsafe way of programming. Modern C++ favors references over pointers. Where resource management is necessary (… First we shall change Tensors and the MemoryManager class to use smart pointers. Tensors will use reference counting to avoid copying of data pointers. Then proper copy/move/assign semantics are required for the Tensor class. This should be simple if … Finally, the …
An example func:
void add(const Tensor& a, const Tensor& b, Tensor& out) { ... }
Operations will still need pointers to children and parents. This is inherent to the graph data structure. However, rather than …

Const Correctness
A significant portion of MagmaDNN routines are not const correct. We will be enforcing const correctness on member functions and …

Remove Type Templating
Currently Operations and math functions are restricted by type. We will remove the type template argument from most of these. For example, …

Valgrind and cuda-memcheck Testing
We will run and check various MagmaDNN routines against valgrind and cuda-memcheck to be certain of their memory behaviour and performance.
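The smart-pointer and copy/move points above could look roughly like the following. This is a minimal sketch under the stated plan, not the actual classes: the member names and the use of std::shared_ptr are assumptions about one possible way to implement the reference counting.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative stand-in for MagmaDNN's MemoryManager: it owns the raw buffer.
class MemoryManager {
public:
    explicit MemoryManager(std::size_t n) : buffer_(n) {}
    float*       data()       { return buffer_.data(); }
    const float* data() const { return buffer_.data(); }
    std::size_t  size() const { return buffer_.size(); }
private:
    std::vector<float> buffer_;
};

// A Tensor that shares its MemoryManager through a reference-counted pointer.
// Copying a Tensor only bumps a reference count; no data pointer is duplicated
// and no double free is possible.
class Tensor {
public:
    explicit Tensor(std::size_t n)
        : mem_(std::make_shared<MemoryManager>(n)) {}

    // With shared ownership, the compiler-generated copy/move/assign are correct.
    Tensor(const Tensor&)            = default;
    Tensor(Tensor&&)                 = default;
    Tensor& operator=(const Tensor&) = default;
    Tensor& operator=(Tensor&&)      = default;

    std::size_t  size() const { return mem_->size(); }
    float*       data()       { return mem_->data(); }
    const float* data() const { return mem_->data(); }

private:
    std::shared_ptr<MemoryManager> mem_;  // reference-counted, never raw-owned
};
```

With a layout like this, copying a Tensor copies only a small handle, so passing tensors by value or keeping them on the stack no longer risks double frees or dangling data pointers.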
Now only the device type is templated. (#14)
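A sketch of what templating on the device only might look like is below. The DeviceType and DataType names are hypothetical; this shows one possible shape for the change, not the code from the commit.

```cpp
#include <cstddef>

// Hypothetical tags -- the real MagmaDNN names may differ.
enum class DeviceType { CPU, GPU };
enum class DataType   { FLOAT32, FLOAT64 };

// The element type becomes a runtime property of the tensor rather than a
// template parameter, so routines no longer need to be instantiated per type.
template <DeviceType dev>
struct TensorDesc {
    DataType    dtype;
    std::size_t num_elements;
};

// Only the device is a compile-time parameter; dtype is dispatched at runtime.
template <DeviceType dev>
std::size_t bytes(const TensorDesc<dev>& t) {
    return t.num_elements * (t.dtype == DataType::FLOAT32 ? 4 : 8);
}
```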
This is part of the efforts described in #14.
Just to comment on the above memory strategy proposed for MagmaDNN: it will break a lot of (pretty much all) existing code. This is unfortunate, but the memory update is necessary for growing the framework, and there are also not that many users currently.
Describe the bug
Passing stack-allocated tensor objects to some MagmaDNN functions causes stack smashing errors. The bug is somewhat unpredictable; it does not happen on every run.
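For illustration, the kind of call pattern being described is sketched below; the Tensor stand-in and add() are placeholders, not MagmaDNN's real API or the original reproduction steps.

```cpp
#include <vector>

struct Tensor { std::vector<float> data; };          // stand-in for magmadnn::Tensor<T>
void add(const Tensor&, const Tensor&, Tensor&) {}   // stand-in library routine

int main() {
    Tensor a, b, out;    // tensor objects on the stack, no `new`
    add(a, b, out);      // calls like this are what the report says
                         // intermittently trigger "stack smashing detected"
}
```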
To Reproduce
Expected behavior
No stack smashing.
Environment:
Additional context
Due to magmadnn::Tensor<T>'s copy constructor and MemoryManager copying.
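As a hedged illustration of why the copy path matters: whenever a Tensor<T> is passed or returned by value, its copy constructor runs, and that copy also copies the MemoryManager. The layout below is a simplified stand-in, not MagmaDNN's actual classes.

```cpp
#include <cstddef>

// Simplified stand-ins for illustration only.
struct MemoryManager {
    float*      ptr;   // raw host/device pointer
    std::size_t size;
    // Copying a MemoryManager duplicates the bookkeeping around `ptr`.
};

template <typename T>
struct Tensor {
    MemoryManager mem;
    // Tensor's copy constructor copies `mem`. The report attributes the
    // corruption to this copy path together with MemoryManager copying; with
    // a stack-allocated destination, any out-of-bounds write during the copy
    // lands in the enclosing stack frame, which is what the stack protector
    // reports as "stack smashing detected".
};

// Pass-by-value: the parameter `t` is a new stack object built by the copy
// constructor, so the copy path runs on every such call.
template <typename T>
void consume(Tensor<T> t) { (void)t; }

int main() {
    Tensor<float> a{};   // stack-allocated tensor
    consume(a);          // copy constructor runs here
}
```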