Quoting from the DirectXCompiler 16 Bit Scalar Types documentation:

Starting with Shader Model 6.2, we are introducing true 16-bit scalar types with the option /enable-16bit-types. If this mode is enabled, every min precision type is disabled and implicitly converted to its full scalar type.
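For concreteness, here is a minimal sketch of a shader built with that flag; the entry point, semantics, and values are illustrative and not from the issue:

```hlsl
// Compile with: dxc -T ps_6_2 -enable-16bit-types example.hlsl
float16_t4 main(float16_t2 uv : TEXCOORD0) : SV_Target
{
    // With -enable-16bit-types, `half` is a true 16-bit float (same as float16_t);
    // without the flag, half and the min precision types fall back to 32-bit storage.
    half brightness = (half)0.5;
    return float16_t4(uv * brightness, (float16_t)0.0, (float16_t)1.0);
}
```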
Rationale
Using 16-bit precision types can make some usage significantly more efficient, for example by reducing memory bandwidth and register pressure.
Proposed API
HLSL 16-Bit Scalar Types
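A small sketch of how the 16-bit scalar types referenced above (float16_t, int16_t, uint16_t) appear in shader code; the struct, buffer registers, and field names are made up for illustration, and native 16-bit operations also require hardware/runtime support:

```hlsl
// Compile with: dxc -T cs_6_2 -enable-16bit-types particles.hlsl
struct Particle
{
    float16_t4 color;     // true 16-bit float vector
    int16_t2   gridCell;  // true 16-bit signed integers
    uint16_t   flags;     // true 16-bit unsigned integer
    uint16_t   padding;   // keeps the struct a multiple of 4 bytes
};

StructuredBuffer<Particle>   gIn  : register(t0);
RWStructuredBuffer<Particle> gOut : register(u0);

[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    Particle p = gIn[id.x];
    p.color *= (float16_t)0.5;   // halve brightness using 16-bit arithmetic
    gOut[id.x] = p;
}
```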
Drawbacks
As far as I can see, only the "standard" types are mapped to HLSL types, not the min-precision types, so there seem to be minimal drawbacks.
Alternatives
Branch off into different Shader Models instead.
Other thoughts
Not sure.