Investigate Rust "impl specialization" #81

Open
hobofan opened this issue Mar 11, 2016 · 2 comments

hobofan commented Mar 11, 2016

Rust "impl specialization" should land in Rust 1.9(?) (PR here: rust-lang/rust#30652).

That feature should hopefully allow us to determine the capabilities of the different frameworks that are compiled in through the type system, and to use more performant operations where possible. Currently we handle this via ugly feature attributes that rely on our knowledge of the implemented operations rather than on the type system. The current system of feature flags also requires us to "dumb down" the backends to the capabilities they all have in common in order to retain backend portability.
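A minimal sketch of what this could look like with specialization (RFC 1210; nightly-only and unstable, so this requires `#![feature(specialization)]`). All trait, type, and method names here are hypothetical illustrations, not the actual Leaf/Collenchyma API:

```rust
#![feature(specialization)]

// Hypothetical trait and backend types, for illustration only.
trait Convolution {
    fn convolution(&self) -> &'static str;
}

struct Backend<F>(F);
struct Native;
struct Cuda;

// Blanket impl: a generic fallback that works for any framework.
impl<F> Convolution for Backend<F> {
    default fn convolution(&self) -> &'static str {
        "generic fallback"
    }
}

// Specialized impl: chosen automatically whenever the CUDA backend
// is the concrete type, overriding the blanket fallback.
impl Convolution for Backend<Cuda> {
    fn convolution(&self) -> &'static str {
        "CUDA fast path"
    }
}

fn main() {
    println!("{}", Backend(Native).convolution());
    println!("{}", Backend(Cuda).convolution());
}
```

With this pattern the compiler, rather than feature-flag bookkeeping, decides which implementation applies, so a fast path can exist for one backend without dumbing the others down.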


bluss commented Mar 12, 2016

FYI, it will land as an unstable feature and be stabilized at a later point. I can't predict how soon it can be a feature in a stable Rust release.

homu added a commit that referenced this issue Mar 15, 2016
change meaning of framework features

Changes the default feature flags to only build in support for the Native backend, since that is what most people will have available on their development machines.

It also changes the meaning of the framework feature flags (`native`, `cuda`, `opencl`), so that only the capabilities that are shared between the enabled frameworks will be included in the compiled version. See #81 for a possible long-term solution.

Example:
- feature flags are `native cuda` -> `Convolution` Layer **is not available** since the native backend does not provide the required traits, not even for the CUDA backend.
- feature flags are `cuda` -> `Convolution` Layer **is available** since the CUDA backend provides the required traits and there is no native backend it has to be compatible with.
- feature flags are `native` -> `Convolution` Layer **is not available** since the native backend does not provide the required traits and there are no other frameworks present.
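The rules above can be sketched with `cfg` predicates. The feature names match the commit message, but the predicate shape and function name are my reading of it, not the actual Leaf source:

```rust
// The Convolution layer is only compiled in when CUDA is enabled and
// the native backend (which lacks the required traits) is absent.
#[cfg(all(feature = "cuda", not(feature = "native")))]
fn convolution_available() -> bool {
    true
}

// In every other flag combination the layer is left out.
#[cfg(not(all(feature = "cuda", not(feature = "native"))))]
fn convolution_available() -> bool {
    false
}

fn main() {
    // Compiled with no framework features enabled, this prints "false".
    println!("{}", convolution_available());
}
```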

WIP:

I still need to finish a top-level FEATURE-FLAGS guide that explains this a bit more in depth.
@bklooste

Will help remove a lot of macros.
