It has occurred to me that there are lots of counter-intuitive "gotchas" in cryptography. While some of them are relatively well-known (such as "encryption does not imply integrity"), there are many others which are less well-known and can trip up even relatively experienced people. So, this is a brain-dump which I hope will help other people who review and design cryptographic systems.
This won't teach you cryptography or help you get started in the field. For that, see my Getting Started and How to read a research paper pages.
This also isn't an attempt to tell people how to do anything. There are already lots of excellent guides out there, including Latacora's Cryptographic Right Answers, as well as libraries which do most of what you need for you, such as the AWS Encryption SDK. In fact, this document isn't really ever going to try to tell you what to do, just what not to do. This also means that the target audience isn't really a junior engineer (though I hope they'll still find it useful and that it will help them advance). If you are a junior engineer, you really shouldn't be working at this level at all. Instead, please look at the earlier links in this paragraph for better guidance about what you should be doing. I also strongly endorse both Dan Boneh's Online Cryptography Course and The Cryptopals Crypto Challenges (both of which are free). Instead, this is aimed at the more experienced person who already knows cryptography pretty well but hasn't memorized the ins and outs of every single algorithm and quirk in the world.
I like to joke that being pedantic is in the job description for a cryptographer. Like many jokes, there is a kernel of truth to it. As I think of them (or as people ask me), I'll add some important words here which I use in a very specific manner. So, for these terms you may want to forget their normal meanings and use the ones I have here. It's worth noting that these are my definitions and may not perfectly match standard definitions. Still, I'll do the best I can, and I think that their subtle distinctions may matter.
- Canonical: When I refer to something being "Canonical", I mean that there is only one specific (correct) representation of it and all others are incorrect. What's more, you can construct the correct one and determine whether a particular representation is the correct (canonical) one or not. For example, I may define: "The canonical form of an integer is in base ten with no leading zeros. The sole exception is for the value zero, which is represented as a single '0'." In this case "5" is canonical, but "05" and "00" are not. When a format is canonical, no one can create more than a single representation (e.g., bits on disk) for the same thing. In general, no data encodings or formats should be considered (or assumed) to be canonical unless explicitly designed and documented to be so.
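As a toy illustration, here is how the integer rule above might be checked in Python (the function name is my own, purely for illustration):

```python
def is_canonical_int(s: str) -> bool:
    # Base ten, no leading zeros; zero itself is the single character "0".
    return s.isdigit() and (s == "0" or not s.startswith("0"))

assert is_canonical_int("5") and is_canonical_int("0")
assert not is_canonical_int("05") and not is_canonical_int("00")
```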
- Immutable: When I refer to something as being immutable, this means that some category of people (or actors) cannot modify it without invalidating it. Unlike "canonical" forms (which cannot be modified by anyone), an immutable form cannot be modified by an arbitrary actor. It may be modifiable by a privileged actor. For example, AES-GCM encrypted data can be considered immutable (because AES-GCM provides integrity and an arbitrary actor cannot modify it). However, someone who has the key could modify an AES-GCM ciphertext without invalidating it (and even do so without changing the tag).
These are the standard things you should watch out for. Hopefully you've already learnt these all, but just in case, here they are.
- READ THE STANDARDS. Many standards documents contain explicit instructions on how to do things (such as how to construct nonces, how to check input, or what limits to apply). If you are going to use an algorithm, read the standard defining it.
- Encryption doesn't (necessarily) provide integrity/authenticity.
- No asymmetric modes provide authenticity (anyone can encrypt).
- Unless it is explicitly Authenticated Encryption (e.g., AES-GCM), it doesn't provide any integrity or authenticity (see the sketch below).
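To make this concrete, here is a short sketch (using the pyca/cryptography package; the plaintext and byte offset are invented for illustration) showing that an unauthenticated mode like AES-CTR lets an attacker flip plaintext bits without the key and without detection:

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = bytearray(encryptor.update(b"transfer $0000100") + encryptor.finalize())

# The attacker XORs a ciphertext byte without knowing the key...
ciphertext[-3] ^= ord("1") ^ ord("9")

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
# ...and decryption silently "succeeds": b'transfer $0000900'
print(decryptor.update(bytes(ciphertext)) + decryptor.finalize())
```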
- Never use a key for more than one thing (this rule is sometimes, very carefully, violated).
- Keys may only be used for a single algorithm. Don't use the same key for both AES and HMAC (or CMAC, or anything else).
- This also applies to different modes of the same algorithm. Don't use the same key for AES-GCM, AES-CBC, and AES-CTR. In the asymmetric world, you shouldn't use the same key for signatures, key agreement, and encryption. (Yes, this rule is commonly violated with RSA keys, and while that is usually safe, it has also created problems in the past.)
- Even with identical algorithms and modes, you still shouldn't use the same key for different purposes. For example, if you have a bidirectional channel with another party and both directions are encrypted using AES-GCM, you should probably be using separate keys for each direction (see the sketch below).
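For the bidirectional-channel case, a common pattern is to derive one sub-key per direction from a single shared secret. A minimal sketch with pyca/cryptography's HKDF (the direction labels are my own invention):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

shared_secret = b"\x01" * 32  # stand-in for, e.g., a key-agreement output

def direction_key(label: bytes) -> bytes:
    # Distinct "info" labels yield independent keys from the same secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label).derive(shared_secret)

client_to_server_key = direction_key(b"client->server")
server_to_client_key = direction_key(b"server->client")
assert client_to_server_key != server_to_client_key
```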
- Always use a cryptographically secure pseudorandom number generator. It doesn't matter if you aren't using it for anything which should have security implications. Just always use it. If you must use an insecure one (perhaps for performance reasons), name the variable something like `insecureRandom` to ensure you don't accidentally cross-wire something. For guidance on how to do this in many languages, please see Paragon Initiative Enterprises' excellent blog post on this exact topic.
- Hashes never provide authenticity. Since hashes don't incorporate a secret, an attacker can just compute the correct hash over tampered data and you'll never know. Don't try to incorporate a secret yourself either, because it won't work. (Please see Length Extension Attacks and Flickr's problems with this.) If you need something like this, you should use a construction which handles the secret properly for you (such as HMAC or AES-GCM).
- HMAC keys should be the same length as the hash output.
- While HMAC keys can safely be up to the length of the underlying block (512 bits for SHA-1 and SHA-256, and 1024 bits for SHA-384 and SHA-512), there is no real value in being larger than the output size.
- If an HMAC key is larger than the underlying block size, it is hashed before use. This means that `HMAC(H(K), m) = HMAC(K, m)` for all `K` and `m`, provided that `K` is sufficiently large. This will violate your security expectations (demonstrated below).
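This is easy to demonstrate with Python's standard library; with SHA-256 (64-byte block), any key longer than 64 bytes collides with its own hash:

```python
import hashlib
import hmac

key = b"k" * 100  # longer than SHA-256's 64-byte block size
msg = b"hello world"

tag_long_key = hmac.new(key, msg, hashlib.sha256).digest()
tag_hashed_key = hmac.new(hashlib.sha256(key).digest(), msg, hashlib.sha256).digest()

# Two different keys, identical MACs:
assert tag_long_key == tag_hashed_key
```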
- Cryptographic keys can "wear out". The easiest solution for this is regular key rotation. If this looks like it will still be an issue for you, seek out a mode/library designed to avoid this (such as the AWS Encryption SDK) or find an expert. Working around this problem is beyond the current scope of this document. (Soatok has an excellent blog post with some of the inner details and specific numbers.)
- Generally, only worry about symmetric keys.
- With rare exceptions, this is bounded by the Birthday Paradox, which means that using a key for more than 2^(b/2) operations degrades your security. You should generally remain well below this limit. I personally recommend keeping it below about 2^48 operations.
- For anything based on a block cipher (such as almost any use of AES), this will be a limit on the number of blocks encrypted with a given key.
- For MACs, this will be the number of tags generated.
- Some modes (such as AES-GCM) will have additional limits on usage. Check them out carefully.
- Most ciphers are not "committing". This means that a single ciphertext can be decrypted (using different keys) to different valid plaintexts. This is true even for AEAD ciphers such as AES-GCM and ChaCha20/Poly1305. AES-GCM not being committing broke some security properties of Facebook Messenger.
- Key Derivation Functions (KDFs) may not generate different outputs when only the length is varied. (HKDF, my favorite KDF, is an example of this.) Most KDFs take in an Initial Keying Material (`IKM`), a per-derived-key `Info` value, and a `Length`. (They may take in other parameters, but these can safely be ignored for this gotcha.) If all inputs except the `Length` are kept constant, the outputs may be related. For example, here are the outputs of `HKDF(IKM=0x0102030405060708, Salt="mysalt", Info="myinfo", Length=X)` for `Length=16` and then `Length=32`:

```
0x6adb5cbd648b0af649d1f507543df984
0x6adb5cbd648b0af649d1f507543df98484ed986c43cfcec47056b1d49795d944
```
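You can reproduce the prefix property with pyca/cryptography's HKDF (a minimal sketch; I assume SHA-256 here, so the exact bytes will differ from the values above, but the relationship is the same):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

ikm = bytes.fromhex("0102030405060708")

def derive(length: int) -> bytes:
    # HKDF objects are single-use, so build a fresh one per call.
    return HKDF(algorithm=hashes.SHA256(), length=length,
                salt=b"mysalt", info=b"myinfo").derive(ikm)

# The 16-byte output is a prefix of the 32-byte output.
assert derive(32)[:16] == derive(16)
```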
- Don't assume that any encoding is canonical unless it is explicitly designed to be so. While there are the obvious cases which ignore whitespace (hex, base64, YAML, JSON, XML, etc.), many also ignore capitalization (hex, email headers, etc.). Many formats also support a concept of an unordered set where the order of elements doesn't matter (ASN.1, ProtoBuf, etc.). Interestingly, Base64 (even ignoring whitespace) isn't canonical either! Since the trailing padding (which is often optional) causes you to ignore bits, the ignored bits can be anything. For example, while `example` would normally be encoded as `ZXhhbXBsZQ==`, there are many other possible values for it, including `ZXhhbXBsZR==`, `ZXhhbXBsZY==`, and `ZXhhbXBsZf==`.
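A quick demonstration in Python (the stdlib decoder silently discards the trailing padding bits):

```python
import base64

variants = ["ZXhhbXBsZQ==", "ZXhhbXBsZR==", "ZXhhbXBsZY==", "ZXhhbXBsZf=="]

# All four distinct encodings decode to the same bytes...
assert {base64.b64decode(v) for v in variants} == {b"example"}

# ...but only one of them is what the encoder produces.
assert base64.b64encode(b"example") == b"ZXhhbXBsZQ=="
```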
- Public keys are public, and any system which assumes that they remain secret is likely broken. Since asymmetric algorithms/implementations assume that public keys are public, they rarely take any precautions to protect them. RSA ciphertexts are all smaller than the public key, so you can use the German Tank Problem to estimate the public key. (This broke privacy guarantees of an Australian government system called PLAID.) Many signatures permit key recovery. (See the "Signatures" section for more information.) Finally, since the public keys aren't secret, many implementations do not use constant-time (or side-channel-free) algorithms when handling them, which opens the public keys up to "compromise".
Nonce and Initialization Vector (IV) are two different names for essentially the same thing. While some people (myself included) try to draw a distinction between them, the fact is that this is basically a lost cause, and you cannot assume anything about a nonce/IV based on the name alone. While there is some work being done on "nonce reuse resistant" cryptography, you should still try to avoid ever reusing these values. Just to be safe.
- Nonces/IVs must never repeat (in a given context, usually relative to a key). If you are using a random nonce/IV, a good rule of thumb is to generate no more than 2^((n-32)/2) random values, where n is the number of bits in the nonce. For example, when using AES-GCM with a 12-byte (96-bit) random nonce, your limit is 2^32 random values.
- You can never go wrong with a cryptographically random nonce from a general-purpose source of cryptographic entropy. Any construction other than this might cause problems.
- Many nonces have non-obvious requirements. Read the algorithm/protocol-specific documentation if you are doing anything other than using an independent random value.
- Related or shorter-than-expected nonces in (EC)DSA are as bad as repeated ones; see how researchers managed to extract blockchain private keys by exploiting biased signature nonces.
- Predictable nonces in CBC gave rise to the BEAST Attack.
- Generating a nonce by using the same cryptographic key used elsewhere has caused problems (TODO: Add reference).
This section isn't about selecting the proper algorithm or key for your design (see the introduction). It's about how to select the correct parameters for this specific use of your design when decrypting or verifying data. Now, I know that there is a major push in the cryptographic community to eliminate as much of this negotiation as possible, and I sympathize with it, but full elimination isn't always possible. If you need to decrypt/verify data from some time in the past, or rotate keys, or be able to move off of a deprecated algorithm, you need some way to signal this. Of course, there are lots of gotchas here (which is the main reason the community is trying to eliminate this in the first place).
- Don't let your adversary select a completely arbitrary key. If an adversary can get you to use an arbitrary key, then it could be one they control and that you'd never expect to use. This may sound ridiculous, but many asymmetric signatures come with the public key for verification alongside them. Instead, all keys must be extremely carefully checked to ensure they are trusted before you use them. Two ways of accomplishing this are a PKI (powerful, hard to get right, only works for asymmetric keys) or giving the keys a unique identifier and using that. Of course, even with the unique identifier you need to be very careful and might need to whitelist which ones you accept, because there may be keys with valid identifiers which shouldn't be used for this use case. (An example of this would be any of the massive cloud/corporate KMS systems where every key has a unique identifier.)
- Don't let your adversary select a completely arbitrary algorithm. If an adversary can get you to use an arbitrary algorithm, then they can select an insecure or completely broken one. (One example of this is Downgrade Attacks.) At most, allow selection from a pre-approved whitelist. Ensure the selected algorithm is appropriate for the key! (I explicitly recommend people not use JWT, and this type of historical issue is part of why.)
This is one of the very few times I'll provide advice on how to do something as opposed to simply saying what not to do. This is because it is really important to support key rotation and be able to change algorithms. So, if you are encrypting data and will need to decrypt it later I recommend the following (simple) solution.
- Prepend your ciphertext with a version number (4-byte integer?)
- Each version number corresponds to the immutable set of keys needed to decrypt that message (usually just one, but if you have multiple keys for confidentiality/encryption, this can signal both) and the exact algorithms used.
- Whitelist exactly which versions you'll accept and prune this list whenever you can.
This will let you rotate your keys (by incrementing the version) and even move to new algorithms if needed (by incrementing the version) while avoiding all of the normal complexity around protocol negotiation. It still isn't fool-proof but is a reasonable design which works for many simple cases. If this isn't sufficient for your design, please seek out experts to talk to.
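Here is a minimal sketch of that scheme in Python with pyca/cryptography's AES-GCM (the key storage, 4-byte header format, and version numbers are all illustrative assumptions):

```python
import os
import struct

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# version -> key; the key/algorithm behind a version must never change
KEYS = {1: AESGCM(os.urandom(32)), 2: AESGCM(os.urandom(32))}
ACCEPTED_VERSIONS = {2}  # prune retired versions from this whitelist

def encrypt(version: int, plaintext: bytes) -> bytes:
    header = struct.pack(">I", version)
    nonce = os.urandom(12)
    # Feeding the header in as AAD binds the version to the ciphertext.
    return header + nonce + KEYS[version].encrypt(nonce, plaintext, header)

def decrypt(blob: bytes) -> bytes:
    header, nonce, ciphertext = blob[:4], blob[4:16], blob[16:]
    (version,) = struct.unpack(">I", header)
    if version not in ACCEPTED_VERSIONS:
        raise ValueError(f"version {version} is not accepted")
    return KEYS[version].decrypt(nonce, ciphertext, header)

assert decrypt(encrypt(2, b"hello")) == b"hello"
```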
AES-GCM is a really popular mode and one of the better ones for people to select. However, it doesn't always act intuitively.
- Reusing a (nonce/IV, key) combination not only destroys confidentiality for the impacted ciphertexts but also results in loss of integrity/authenticity for all ciphertexts protected by the key. This was known back when GCM was originally standardized and has even happened in the real world. Thus, as per "key wear out" and "nonces" above, for 96-bit nonces (the recommended case) you shouldn't generate more than 2^32 ciphertexts with a given key and random nonces.
- While nonces can be re-used across different keys, this is a more advanced design and should only be done cautiously. Note that a nonce is still required even if it doesn't vary; I strongly recommend 96 bits of zeros. Zero-length nonces are not compliant with the specification, are extremely insecure, and must not be used.
- As AES-GCM uses an underlying counter mode, you cannot encrypt more than 2^36 - 32 bytes at once, as doing so would cause the counter to overflow.
- AAD is also restricted in length, to 2^61 - 1 bytes, due to internal encodings of the data.
- If you use different length tags with the same key, you lower the security of all tags produced by that key, not just the short ones. (See Authentication weaknesses in GCM by Niels Ferguson for that and other interesting issues with the construction.)
- The tags produced can't be treated as "random" values (e.g., like the outputs of a random function or a hash function). Any of the properties you expect (collision resistance, non-invertibility, etc.) may not be there. The only property you can assume they have is that specifically promised by the definition of a MAC.
- As an example, it is trivial for someone who knows the key to craft a message with any arbitrary tag.
- This implies that it is trivial for someone who knows the key to craft multiple messages with the same tag
- Contrast this with tags generated by HMAC which do generally act as people expect (in that they are collision resistant and act like output from a Random Oracle)
- AES-GCM is not committing. (See the discussion under "The Basics" earlier).
- Do not use the plaintext before you've validated the tag! This is called "releasing unverified plaintext" and is very bad.
Some implementations (I'm looking at you, OpenSSL and BouncyCastle) release the plaintext before the tag has been verified (e.g., before the `doFinal` call). While this is great from a performance perspective, the plaintext must be considered completely untrusted until the tag is verified. This means you mustn't do anything with it which you cannot fully roll back. In fact, it's better to just ignore it altogether. If you are decrypting the data and handing it on to some other component, you should probably just buffer up all of the plaintext until you know it's valid.
- Not knowing the AAD does not stop someone with the key from just decrypting the data. AES-GCM is just AES-CTR (an unauthenticated mode) plus GMAC. This means that an attacker who has the key can just decrypt the ciphertext in AES-CTR mode and ignore the extra GMAC tag. Alternatively, they can use a library like OpenSSL which will release unverified plaintext (see the prior point) and get the data that way.
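For contrast, a one-shot AEAD API such as pyca/cryptography's AESGCM never hands you unverified plaintext; decryption is all-or-nothing:

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # must never repeat for this key
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", b"header")

# Flip one bit; decrypt() raises instead of releasing bad plaintext.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 0x01])
try:
    aesgcm.decrypt(nonce, tampered, b"header")
except InvalidTag:
    print("tampering detected; no plaintext was released")
```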
Digital Signatures are generally safe to use, but many people assume they have properties that they do not. At their core, all they mean is that without the private key, an attacker cannot find a signature for the associated public key over an arbitrary value for which they don't already know a signature (a.k.a. existential unforgeability). The best overview of these additional (unpromised) security properties is in Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures, and I strongly recommend that everyone at least skim that paper. (There is lots of formal verification in there which you can probably skip, but the introduction, defined properties, and case studies are critical.) For an easier read, though one focused entirely on ECDSA, you should look at How Not to Use ECDSA.
That said, here are lots of "gotchas" for digital signatures:
- Many signatures are malleable. This means that given a valid signature, an attacker can often find other valid signatures over the same message.
- (EC)DSA signatures have several different encodings: ASN.1, IEEE P1363, and "raw" (my name). This means that an attacker can convert a valid signature in one form to another.
- While both IEEE P1363 and raw encodings have specific length requirements, many systems do not enforce them. (This is an example of an "edge-case" signature from later in this list.)
- While the ASN.1 encoding is supposed to be DER encoded, many libraries accept any (semi-)valid BER encoding. This means that an attacker can often use the flexibility of BER to craft an essentially infinite number of valid signatures (for the same message) once they know a single one. (more "edge-case" signatures)
- ECDSA is mathematically malleable as well. It consists of two values `(r, s)`, and given one signature it is trivial to calculate a new `s'` (equal to the group order minus the original `s`) which results in a new valid signature `(r, s')` over the same message (see the sketch below).
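This is directly reproducible with pyca/cryptography (the hard-coded constant is the well-known P-256 group order):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import (
    decode_dss_signature, encode_dss_signature)

# Group order of NIST P-256 (SECP256R1)
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"some message"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

r, s = decode_dss_signature(signature)
mauled = encode_dss_signature(r, N - s)  # a second, distinct valid signature

assert mauled != signature
# verify() raises InvalidSignature on failure; no exception means success.
private_key.public_key().verify(mauled, message, ec.ECDSA(hashes.SHA256()))
```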
- An attacker mustn't be allowed to select the actual value being validated in the signature. (The hashing step in all standard signatures defends against this, as attackers can only select the hash pre-image, not the value of the hash.) If they could, then they could trivially craft a valid signature for an arbitrary public key by (essentially) generating a random signature, seeing what message would be verified by that signature, and then returning that message/signature pair.
- Just because a signature is valid for a given message doesn't mean it isn't valid for other messages.
- For example in EdDSA, an attacker can craft a malicious low-order public key and then create a signature valid for all messages. (This, along with other properties of low-order points, was used to break Scuttlebutt in section 7.1 of this paper by Cremers and Jackson from 2020.)
- More trivially, signatures are generally over message digests, so if the hash function is broken (such as MD5 or SHA-1) then a single signature is also valid for all messages with colliding hashes. This was used to attack Windows Update in 2012.
- Just because a signature is valid for a message and public key doesn't mean that the holder of the private key necessarily knows what the message was.
- This can be unintentional if the underlying hash function is broken (see earlier point)
- Otherwise this generally requires intentional behavior by the signer:
- Consider the maliciously created universal signature from earlier.
- There is the trivial case that the signer might only know the hash and not the real message.
- This can also be a feature (rather than a bug), as in the case of Blind Signatures.
- Just because a signature is valid for a given public key doesn't mean it isn't valid for other public keys. For that matter, given a signature and a message, an attacker can craft a public key which validates that signature over that message. This is called a Duplicate Signature Key Selection attack.
- Some signatures are randomized (e.g., the (EC)DSA family and RSA-PSS) and some are deterministic (e.g., RSA PKCS#1 v1 and v1.5). This means that assuming either property will trip you up.
- Signatures do not (necessarily) hide the message they sign. They'll commonly reveal the hash (or more) of the data being signed.
- Signatures do not hide the public key used to verify the message.
- "Public Key Recovery" is a standardized part of ECDSA. (See SEC2, Section 4.1.6-4.1.7.)
- This is slightly more challenging with RSA but is still doable as described here and implemented here.
- For many signature algorithms, in addition to the standard well-defined lists of "valid" and "invalid" signatures, there may also be a large number of "edge-case" signatures. These can be thought of as signatures which are technically invalid but which many implementations may accept without breaking the fundamental guarantee of signatures. (Namely, these invalid signatures do not invalidate existential unforgeability and therefore will require use of the private key to construct.) Examples of these include: invalid DER encodings for (EC)DSA signatures, invalid lengths for IEEE P1363 (EC)DSA signatures, values greater than the group order/modulus, non-canonical point encodings, etc. While this isn't a security issue for most systems, it can definitely break systems such as consensus protocols which require different actors to be in complete agreement as to the validity of data. If different implementations may accept different signatures, this will break the protocols. Imagine if not all clients of a blockchain agreed on whether blocks were valid. Complete confusion! (See Henry de Valence's blog post for an excellent write-up of this for Ed25519 signatures. Also, a thank you to Deirdre Connolly, who highlighted this issue in her podcast.)
To quote SwiftOnSecurity, "Cryptography is nightmare magic math that cares what kind of pen you use." Nowhere is this more true than in the area of side-channels. It's not enough to get the right answer and perform the correct calculation; you must do it in the correct way. I truly believe that writing good side-channel-free implementations is a specialty unto itself within the larger (highly specialized) field of cryptographic development, and I am definitely not an expert. So, please take all of this advice as not only coming from that particular perspective but also aimed at other non-experts (like myself).
- Any behavior which varies based on a secret value can be (potentially) exploited by an attacker. This includes time, memory access, power draw, bits flowing down a wire, and literally anything else.
- This means that any branches, memory access, or any logic beyond very simple math based on a secret value is a problem (and even simple math can be dangerous).
- The most common case of this for standard developers is comparing a secret value (such as a MAC tag) against another value. This must be done with a timing-safe equality check. Many languages and frameworks already have one (see the sketch after this list). If you must write one yourself, please seek out an expert. For that matter, in cryptographic code I recommend using a timing-safe equality check for all array comparisons. Why? Not because it is necessary most of the time but because it means both that you won't accidentally omit it (when you need it) and that you don't need to waste your time continually re-reviewing every unsafe check to determine if it is actually safe in this exact case and nothing has changed. Save yourself the stress and just always use the constant-time implementation. (ToDo: Create/find a single page listing the best patterns for as many languages/frameworks as possible.)
- If you find that you need to worry about side-channel protection other than constant time equality checks, seek out an expert or refactor your design to make it a non-issue. Generally, the best strategy is to make this someone else's problem by depending on a library you trust. For example, you might depend on OpenSSL or Microsoft's CNG for cryptographic implementations and simply state "We trust their implementations to be side-channel free." Now, they won't be. Not perfectly at least. But they are likely to be far better than anything you can write and more likely to be reviewed and patched as problems are found.
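In Python, the standard-library `hmac.compare_digest` is such a timing-safe check; here is a sketch of the pattern from the list above:

```python
import hashlib
import hmac

def tags_match(expected: bytes, received: bytes) -> bool:
    # Runs in time independent of where the first differing byte is,
    # unlike the ordinary == operator on bytes.
    return hmac.compare_digest(expected, received)

key, msg = b"\x00" * 32, b"payload"
tag = hmac.new(key, msg, hashlib.sha256).digest()

assert tags_match(tag, hmac.new(key, msg, hashlib.sha256).digest())
assert not tags_match(tag, bytes(len(tag)))
```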
Now, X.509 Certificates aren't cryptography any more than a car is an engine. But just as people who are experts with engines will spend a lot of time worrying about and fixing cars, so too will cryptographers (unfortunately) need to deal with X.509 Certificates. Here are just a few of many gotchas related to these horrors. As always, if you actually need to work with them, you should read the specification (RFC 5280 and many others).
- X.509 Certificates are supposed to be DER-encoded ASN.1, but most systems will happily accept any (semi-)valid BER encoding. This means these technically invalid certificates will almost always work, except when they don't. (ASN.1 is a nightmare in itself. Truthfully, I have a soft spot in my heart for it because I think that it fills a really useful function. But I also enjoy Perl and C++, so perhaps my taste is questionable. Three of the best resources for dealing with ASN.1 are A Layman's Guide to a Subset of ASN.1, BER, and DER, A Warm Welcome to ASN.1 and DER, and the ASN.1 JavaScript decoder. These have saved me more than once.)
- Even a properly DER-encoded X.509 certificate isn't canonical and can often be modified by a third-party without invalidating the signature.
This is because an X.509 certificate contains three primary sub-parts: A TBS (To Be Signed) Certificate (all of the data you care about), the Signature Algorithm (which describes how the certificate was signed), and the Signature (which signs the TBS Certificate).
Since the Signature Algorithm isn't signed, anyone can change it (provided that it doesn't change how the signature is interpreted).
The Signature Algorithm field often has an optional `NULL` value, which means that there are two valid variants of it. Then the Signature itself isn't guaranteed to be canonical (see elsewhere in these gotchas) and may be modifiable by anyone without invalidating it. Non-canonical X.509 certificates actually created a real-world problem for Java in 2020. The JDK contains an explicit list of certificates which must not be trusted. For unknown reasons, this is actually represented as a list of hashes of the certificates. This means that someone could trivially modify a blocked certificate to evade detection. JDK bug 8237995 fixed this by generating different (valid) variants of the banned certificates and hashing each variant. (This really should be hashes of the public keys, due to the following bullet point.)
- There is nothing special about a root certificate. All that makes a certificate a "root" certificate is that its key is saved some place safe and tagged as "a root of trust."
- Notice that I say "key" here. Commonly the key is the real root of trust and the "root certificate" is just a convenient way to package the key.
- Roots don't need to be self-signed
- Roots will often be signed by other roots (usually for migration reasons)
- Signatures, expirations, certificate limitations, etc. on roots are commonly ignored. (Remember what I said about the "key" being the real root?)
- Root stores often have custom logic and restrictions around how specific roots can be used, except when they don't and just trust the world.
- Not all "Root CAs" are publicly trusted. You may be most familiar with those in your browser and in the CA/B Forum, but there are many others out there.
- While some (publicly-trusted) roots are good and well managed, others just pay a bunch of auditors money to claim they can be trusted. From the outside, it can be hard to know which is which.
- Certificate path building is complicated, never build it yourself.
- Trust an existing library or your platform. They may get it wrong, but you'll usually do even worse than they do.
- There can be more than one valid path from a leaf certificate to a root. (Sometimes to the same root, sometimes to different roots.)
- Not all certificate constraints are always enforced properly, if at all (such as name constraints or policies).
- More than one certificate can have the same public key.
- This is common when a CA needs to be renewed but they want to keep everything signed by it valid.
- It does happen in other cases too
- Certificate validation (is this the correct certificate and well formed?) is complicated, never build it yourself.
- The "Common Name" (CN) on server certificates really should be ignored now and only the Subject Alternative Names (SANs) respected. Still, when something goes wrong, it's still often to blame.
- Wildcards only apply to the single left-most level. So `*.example.com` matches `foo.example.com` and `bar.example.com`, but not `foo.bar.example.com`. For that you need `*.bar.example.com`. This means that both `*.*.example.com` and `foo.*.example.com` are invalid. (A toy illustration of this rule follows this list.)
- Top-Level Domains (TLDs) are weird and need to be handled specially. (For example, the following certificates are all invalid: `*.com`, `*.uk`, and `*.ac.uk`.)
- Let's not even talk about Internationalized Domain Names.
- If you aren't dealing with HTTPS, you can often throw everything you know about name/host validation out the window, because it no longer applies.
- Key usages (and other extensions) can be important here, and they are an entire area unto themselves. Consult the detailed specifications or leave this to an expert. (Note, this caution applies equally to path building.)
- Key and Signature types can (and often do) differ within the same chain. So an ECDSA certificate might be issued by an RSA (intermediate) CA which is then issued by a DSA CA. (Though if you find a DSA CA, you should probably run screaming to something slightly newer.)
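As promised above, here is a toy illustration of the single-level wildcard rule only (it is emphatically not real name validation, which must be left to your TLS library; that also handles IDNs, TLDs, and much more):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    # Toy model: a wildcard may only be the entire left-most label.
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return p_labels[1:] == h_labels[1:] and (
        p_labels[0] == "*" or p_labels[0] == h_labels[0])

assert wildcard_matches("*.example.com", "foo.example.com")
assert not wildcard_matches("*.example.com", "foo.bar.example.com")
assert not wildcard_matches("foo.*.example.com", "foo.bar.example.com")
```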
I'm always interested in receiving feedback. Issues or pull requests are probably best, but any (reasonable) way to reach out will work.
Is there a mistake? Tell me!
Am I missing something important? Please let me know!
Can you help flesh out my references or otherwise improve this? I want to hear from you!
Several people have already shared pre-publication papers or other non-public materials with me to help me in drafting accurate and helpful public documentation. If you have something you want to share in confidence with me but don't want me to share it on, please reach out to me so we can chat.
The license I've attached to this document is somewhat of a placeholder and exists solely to have something well-defined attached. If it causes problems for you for any reason at all, please let me know and I'll work with you to come to some mutually agreeable solution.
This work is licensed under a Creative Commons Attribution 4.0 International License
A special thank you to the following people and groups who have helped me with this.
- The MVP Slack