Naïve Bayesian classification

{{{begin-summary}}}

  • The “naïve assumption” is that the presence of a word does not give any information about the probability of the presence of any other word. This assumption gives us $P(w_1, w_2, …|C_c) = ∏_i P(w_i|C_c)$.
  • $d_i$ is the number of documents (in the training set, of course) with word $i$.
  • $d_{ci}$ is the number of documents in category $C_c$ with word $i$.
  • $n_c$ is the number of documents in the category $C_c$.
  • $n$ is the number of documents.
  • $|C|$ is the number of categories.
  • $P(C_c) = \frac{n_c + 1}{n + |C|}$ is the probability of any random document having category $C_c$.
  • $\hat{X} = \langle w_1, w_2, …, w_k \rangle$ is a binary document vector where each $w_i ∈ \{0,1\}$.
  • $P(\hat{X}|C_c) = P(w_1, w_2, …, w_k | C_c) = ∏_i P(w_i|C_c)$ is the probability of the document having the words that it does assuming that the document comes from category $C_c$.
  • $P(w_i|C_c) = \frac{d_{ci} + 1}{d_i + n_c}$ is the probability of a document from category $C_c$ having word $w_i$.
  • $\arg\max_{C_c} P(\hat{X}|C_c) P(C_c)$ is the question: which category $C_c$ maximizes the probability that the document has these words?
  • $\arg\max_{C_c} ∑_i \log P(w_i|C_c) + \log P(C_c)$ is the new question using log-probability to avoid “underflow” in the floating-point values.

{{{end-summary}}}

Feature vectors

Documents are simply binary word vectors. Each dimension in the vector represents a unique word. The value in this dimension is 0 or 1 depending on whether the document has at least one occurrence of that word.
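
To make this concrete, here is a minimal sketch in Python (the vocabulary and function name are illustrative, not part of these notes):

#+BEGIN_SRC python
# Build a binary word vector: 1 if the word occurs at least once in the
# document, 0 otherwise. The vocabulary is a made-up example.

def binary_vector(document, vocabulary):
    words = set(document.lower().split())
    return [1 if word in words else 0 for word in vocabulary]

vocabulary = ["vision", "image", "stock", "market"]
print(binary_vector("The image shows a computer vision system", vocabulary))
# => [1, 1, 0, 0]
#+END_SRC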

Category vectors

Each category vector is represented as a series of probabilities, one probability per word (each vector dimension represents a word, just like a document feature vector). Each probability means, “the probability of this word being present in a document that is a member of this category.” Thus, the category vector has terms $C_c = (p_{c1}, p_{c2}, …, p_{ck})$, and

$$p_{ci} = P(w_i|C_c) = \frac{d_{ci}+1}{d_i+n_c},$$

where $d_{ci}$ is the number of documents in $C_c$ that have word $i$ (anywhere in the document, any number of occurrences), $d_i$ is the number of documents in any category that have word $i$, and $n_c$ is the number of documents in category $C_c$. We add 1 to the numerator and $n_c$ to the denominator so that $P(w_i|C_c)$ is never equal to 0. (This is called Laplace smoothing.)
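
As a rough sketch of how these category vectors might be computed (assuming the training documents are given as (binary vector, category) pairs; the names =docs=, =num_words=, and =category_probabilities= are mine, not from the notes):

#+BEGIN_SRC python
from collections import defaultdict

def category_probabilities(docs, num_words):
    """Compute p_ci = (d_ci + 1) / (d_i + n_c) for every category c and word i."""
    d_i = [0] * num_words                        # documents (any category) containing word i
    d_ci = defaultdict(lambda: [0] * num_words)  # documents in category c containing word i
    n_c = defaultdict(int)                       # number of documents in category c

    for vector, category in docs:
        n_c[category] += 1
        for i, present in enumerate(vector):
            if present:
                d_i[i] += 1
                d_ci[category][i] += 1

    return {c: [(d_ci[c][i] + 1) / (d_i[i] + n_c[c]) for i in range(num_words)]
            for c in n_c}
#+END_SRC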

Algorithm

We assume, for simplicity, that the occurrences of words in documents are completely independent (this is what makes the method “naïve”). This is patently false since, for instance, the words “vision” and “image” often both appear in documents about computer vision; so seeing the word “vision” suggests that “image” will also appear in the document.

We further assume that the order the words appear in the document does not matter.

Because we make this independence assumption, we can calculate the probability of a document being a member of some category quite easily:

$$P(\hat{X}|C_c) = ∏_i P(w_i|C_c),$$

where $P(w_i|C_c) = p_{ci}$ (from the definition above).

Now, Bayes’ theorem gives us:

$$P(C_c|\hat{X}) = P(\hat{X}|C_c)P(C_c) / P(\hat{X}),$$

with,

$$P(C_c) = \frac{n_c + 1}{n + |C|},$$

where $n_c$ is the number of documents in category $C_c$ and $n$ is the number of documents overall. Again, we use Laplace smoothing; this allows us to avoid probabilities equal to 0.0.
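
A small sketch of this prior computation (same assumed =docs= list as before):

#+BEGIN_SRC python
from collections import Counter

def category_priors(docs):
    """Compute P(C_c) = (n_c + 1) / (n + |C|) for every category."""
    counts = Counter(category for _, category in docs)
    n = len(docs)
    return {c: (counts[c] + 1) / (n + len(counts)) for c in counts}
#+END_SRC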

Since we want to find the category $C_c$ that makes the quantity maximal, we can ignore $P(\hat{X})$ because it does not change depending on which category we are considering.

Thus, we are actually looking for:

$$\arg\max_{C_c} P(C_c|\hat{X}) = \arg\max_{C_c} P(\hat{X}|C_c)P(C_c)$$

We just check all the categories, and choose the single best or top $N$.
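
A sketch of that check, using the =category_probabilities= and =category_priors= dictionaries from above; here the product is taken only over the words actually present in the document, which is one way to read the formula (the implementation details are mine, not from the notes):

#+BEGIN_SRC python
def classify(vector, p, prior):
    """Return the category c that maximizes P(X|C_c) * P(C_c)."""
    best_category, best_score = None, -1.0
    for c in p:
        likelihood = 1.0
        for i, present in enumerate(vector):
            if present:               # product over the words the document has
                likelihood *= p[c][i]
        score = likelihood * prior[c]
        if score > best_score:
            best_category, best_score = c, score
    return best_category
#+END_SRC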

A problem with tiny values

With a lot of unique words, we create very small values by multiplying many $pci$ terms. On a computer, the values may become so small that they may “underflow” (run out of bits required to represent the value). To prevent this, we just throw a logarithm around everything:

$$\log \left[ P(\hat{X}|C_c)P(C_c) \right] = \log P(\hat{X}|C_c) + \log P(C_c),$$

and furthermore,

$$\log P(\hat{X}|C_c) = \log ∏_i P(w_i|C_c) = ∑_i \log P(w_i|C_c)$$

So our multiplications turn to sums, and we avoid the underflow problem. Rewriting again, we ultimately have this problem:

$$\arg\max_{C_c} ∑_i \log P(w_i|C_c) + \log P(C_c)$$
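
The same classification sketch, rewritten with sums of logs (same assumptions as before; the smoothed probabilities are never 0, so the logarithms are safe):

#+BEGIN_SRC python
import math

def classify_log(vector, p, prior):
    """Return the category c that maximizes sum_i log P(w_i|C_c) + log P(C_c)."""
    best_category, best_score = None, float("-inf")
    for c in p:
        score = math.log(prior[c])
        for i, present in enumerate(vector):
            if present:
                score += math.log(p[c][i])  # sums of logs replace the product
        if score > best_score:
            best_category, best_score = c, score
    return best_category
#+END_SRC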

Evaluation

The book Introduction to Information Retrieval gathered some published results for classification tasks. We can see that naïve Bayes is usually not as good as k-nearest neighbor (which we did learn about) or support vector machines (which we didn’t learn about).

| Dataset  | Naïve Bayes | k-nearest neighbor | Support vector machines |
|----------+-------------+--------------------+-------------------------|
| earn     | 0.96        | 0.97               | 0.98                    |
| acq      | 0.88        | 0.92               | 0.94                    |
| money-fx | 0.57        | 0.78               | 0.75                    |
| grain    | 0.79        | 0.82               | 0.95                    |
| crude    | 0.80        | 0.86               | 0.89                    |
| trade    | 0.64        | 0.77               | 0.76                    |
| interest | 0.65        | 0.74               | 0.78                    |
| ship     | 0.85        | 0.79               | 0.86                    |
| wheat    | 0.70        | 0.77               | 0.92                    |
| corn     | 0.65        | 0.78               | 0.90                    |

Benefits of naïve Bayes

  • It is very fast. In the table above, while naïve Bayes does not perform as well, it is significantly more efficient than either k-nearest neighbor or support vector machines. The latter, support vector machines, are painfully slow (at least in the training phase).

Drawbacks of naïve Bayes

  • Accuracy is generally lower than k-nearest neighbor and support vector machines, as seen in the table above.