Bit-vectors are represented as a pair of a size and a value, where sizes are of type Int and values are of type Integer. Operations on bit-vectors are translated into operations on integers. Notably, most operations taking two or more bit-vectors will perform zero-padding to adjust the size of the input bit-vectors when needed (e.g. when adding bit-vectors of different sizes). Indexing operators do not do this, in order to avoid masking out-of-bounds errors.
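For instance, the following small program (a sketch, not taken from the package documentation, assuming the bitVec, size and nat functions exported by Data.BitVector) illustrates the padding rule:

import Data.BitVector ( BV )
import qualified Data.BitVector as BV

main :: IO ()
main = do
  let x = BV.bitVec 4 3      -- 4-bit vector holding the value 3
      y = BV.bitVec 8 10     -- 8-bit vector holding the value 10
      z = x + y              -- x is zero-padded to 8 bits before the addition
  print (BV.size z, BV.nat z)   -- expected output: (8,13)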
There are many Haskell libraries for handling bit-vectors, but to the best of my knowledge bv is the only one that adequately supports bit-vector arithmetic.
If you do not need bit-vector arithmetic, you may want to consider one of those other libraries, which may offer more compact and efficient implementations of bit arrays.
Many exported functions name-clash with Prelude functions, so it is recommended to use a qualified import:
import Data.BitVector ( BV )
import qualified Data.BitVector as BV
If you wish to run the test suite, simply build it with:
cabal configure -ftest
cabal build
Then run:
dist/build/bv-tester/bv-tester
Tip: For best performance, compile with -fgmp.
Tip: If you are brave enough, compile with -f-check-bounds (this disables index bounds checking).
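For example, assuming the gmp and check-bounds Cabal flags mentioned in these tips, both options can be set in a single configure step:

cabal configure -fgmp -f-check-bounds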
The BV datatype is simply a pair of an Int, representing the size, and an arbitrary-precision Integer, representing the value of the bit-vector. Both fields are strict, and we instruct GHC to unbox strict fields. Further, we ask GHC to inline virtually all bit-vector operations. Once these operations are inlined, GHC should be able to remove any overhead associated with the BV data type and unbox bit-vector sizes. Performance should depend mostly on the implementation of the Integer data type.
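As a rough sketch (field names here follow the size and nat accessors that the library exports; the actual definition lives in Data.BitVector), the datatype looks roughly like this:

data BV = BV
  { size :: {-# UNPACK #-} !Int   -- bit-width; a strict field that GHC can unbox
  , nat  :: !Integer              -- value, as a strict arbitrary-precision Integer
  }

Note that only the Int field can be unpacked; the Integer field stays boxed, which is why performance ultimately hinges on the Integer implementation.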