Modified overall #18

Open
wants to merge 2 commits into base: master
81 changes: 34 additions & 47 deletions README.rst → README.md
@@ -1,47 +1,34 @@
Overview
========

A sample program I wrote to detect gibberish. It uses a two-character Markov chain.

http://en.wikipedia.org/wiki/Markov_chain

This is a nice (IMO) answer to this guy's question on Stack Overflow:
http://stackoverflow.com/questions/6297991/is-there-any-way-to-detect-strings-like-putjbtghguhjjjanika/6298040#comment-7360747

Usage
=====

First train the model:

python gib_detect_train.py

Then try it on some sample input:

python gib_detect.py

my name is rob and i like to hack True

is this thing working? True

i hope so True

t2 chhsdfitoixcv False

ytjkacvzw False

yutthasxcvqer False

seems okay True

yay! True

How it works
============
The Markov chain first 'trains' or 'studies' a few MB of English text, recording how often characters appear next to each other. E.g., given the text "Rob likes hacking", it sees Ro, ob, o[space], [space]l, ... It just counts these pairs. After it has finished reading through the training data, it normalizes the counts. Each character then has a probability distribution over the 27 possible follow-up characters (26 letters + space).

Then, given a string, it measures the probability of generating that string according to the model, by multiplying out the probabilities of the adjacent pairs of characters in that string. E.g., for the string "Rob likes hacking", it would compute prob['r']['o'] * prob['o']['b'] * prob['b'][' '] ... This probability measures the amount of 'surprise' the model assigns to the string, given the data it observed during training. If there is funny business in the input string, it will pass through some pairs that had very low counts in the training phase, and hence have low probability/high surprise.

I then look at the amount of surprise per character for a few known good strings and a few known bad strings, and pick a threshold between the most surprising good string and the least surprising bad string. Then I use that threshold to classify any new piece of text.

Peter Norvig, Director of Research at Google, has a nice talk about "The Unreasonable Effectiveness of Data" here: http://www.youtube.com/watch?v=9vR8Vddf7-s. The insight is to not try to do something complicated: just write a small program that utilizes a bunch of data, and you can do cool things.

# GIBBERISH-DETECTOR

A program to check whether a sentence is gibberish or not, using a simple Markov chain.

## 1. Getting Started

Clone the repo:

```bash
git clone https://github.com/rrenaud/Gibberish-Detector.git
```
Train the model:

```bash
python gib_detect_train.py
```
Run the model:

```bash
python gib_detect.py
```
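
For example, a session with the updated script might look like this (the prompts and messages are taken from the gib_detect.py changes below; the sample inputs are from the original README's demo):

```
Enter text: my name is rob and i like to hack
The text is not gibberish

Press 1 continue: 1

Enter text: ytjkacvzw
The text is gibberish
```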

## 2. How it works

The Markov chain first 'trains' or 'studies' a few MB of English text, recording how often characters appear next to each other. E.g., given the text "Rob likes hacking", it sees Ro, ob, o[space], [space]l, ... It just counts these pairs. After it has finished reading through the training data, it normalizes the counts. Each character then has a probability distribution over the 27 possible follow-up characters (26 letters + space).
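
As a minimal sketch of that counting-and-normalizing step (the 27-symbol alphabet and the smoothing prior of 10 match gib_detect_train.py; the helper names here are illustrative):

```python
import math

accepted_chars = 'abcdefghijklmnopqrstuvwxyz '
pos = {char: idx for idx, char in enumerate(accepted_chars)}
k = len(accepted_chars)  # 27 symbols: 26 letters + space

# Start every count at 10 as a smoothing prior, so pairs never seen
# in training still get a small nonzero probability.
counts = [[10 for _ in range(k)] for _ in range(k)]

def normalize(text):
    # Keep only the characters we model, lowercased.
    return [c for c in text.lower() if c in accepted_chars]

def count_pairs(text):
    # Tally adjacent character pairs, e.g. "Ro" -> ('r', 'o').
    chars = normalize(text)
    for a, b in zip(chars, chars[1:]):
        counts[pos[a]][pos[b]] += 1

count_pairs("Rob likes hacking")

# Turn each row of counts into log probabilities; working in log
# space avoids numeric underflow when many small probabilities are
# combined later.
log_prob_mat = [[math.log(c / float(sum(row))) for c in row] for row in counts]
```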

Then, given a string, it measures the probability of generating that string according to the model, by multiplying out the probabilities of the adjacent pairs of characters in that string. E.g., for the string "Rob likes hacking", it would compute prob['r']['o'] * prob['o']['b'] * prob['b'][' '] ... This probability measures the amount of 'surprise' the model assigns to the string, given the data it observed during training. If there is funny business in the input string, it will pass through some pairs that had very low counts in the training phase, and hence have low probability/high surprise.
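
In log space that multiplication becomes a sum. A sketch of the scoring step, consistent with the avg_transition_prob signature visible in the gib_detect_train.py diff below (it reuses normalize, pos, and log_prob_mat from the sketch above):

```python
import math

def avg_transition_prob(l, log_prob_mat):
    # Sum the log probabilities of adjacent character pairs, then
    # average per transition so string length doesn't skew the score.
    log_prob = 0.0
    transition_ct = 0
    chars = normalize(l)
    for a, b in zip(chars, chars[1:]):
        log_prob += log_prob_mat[pos[a]][pos[b]]
        transition_ct += 1
    # exp of the average log prob == geometric mean of pair probabilities.
    return math.exp(log_prob / (transition_ct or 1))
```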

I then look at the amount of surprise per character for a few known good strings and a few known bad strings, and pick a threshold between the most surprising good string and the least surprising bad string. Then I use that threshold to classify any new piece of text.
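
A sketch of that threshold choice (assuming log_prob_mat was trained on the full corpus; the sample strings and the midpoint rule are illustrative assumptions):

```python
good_strings = ['my name is rob and i like to hack', 'is this thing working?']
bad_strings = ['t2 chhsdfitoixcv', 'ytjkacvzw']

good_probs = [avg_transition_prob(s, log_prob_mat) for s in good_strings]
bad_probs = [avg_transition_prob(s, log_prob_mat) for s in bad_strings]

# Every known-good string should score higher (be less surprising)
# than every known-bad one.
assert min(good_probs) > max(bad_probs)

# Split the difference between the most surprising good string and
# the least surprising bad string.
threshold = (min(good_probs) + max(bad_probs)) / 2

def is_gibberish(s):
    return avg_transition_prob(s, log_prob_mat) <= threshold
```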

Peter Norvig, Director of Research at Google, has a nice talk about "The Unreasonable Effectiveness of Data" here: http://www.youtube.com/watch?v=9vR8Vddf7-s. The insight is to not try to do something complicated: just write a small program that utilizes a bunch of data, and you can do cool things.

![sample](/sample.png)

18 changes: 13 additions & 5 deletions gib_detect.py
@@ -2,11 +2,19 @@

 import pickle
 import gib_detect_train

+print(" ")
+print(" GIBBBERISH-DETECTOR")
+print(" ")
 model_data = pickle.load(open('gib_model.pki', 'rb'))

-while True:
-    l = raw_input()
+check = 1
+while check == 1:
+    l = str(input("Enter text: "))
     model_mat = model_data['mat']
     threshold = model_data['thresh']
-    print gib_detect_train.avg_transition_prob(l, model_mat) > threshold
+    if (gib_detect_train.avg_transition_prob(l, model_mat) > threshold) == True:
+        print("The text is not gibberish")
+    else:
+        print("The text is gibberish")
+    print(" ")
+    check = int(input("Press 1 continue: "))
+    print(" ")
6 changes: 3 additions & 3 deletions gib_detect_train.py
@@ -26,7 +26,7 @@ def train():
     # prior or smoothing factor. This way, if we see a character transition
     # live that we've never observed in the past, we won't assume the entire
     # string has 0 probability.
-    counts = [[10 for i in xrange(k)] for i in xrange(k)]
+    counts = [[10 for i in range(k)] for i in range(k)]

     # Count transitions from big text file, taken
     # from http://norvig.com/spell-correct.html
@@ -41,7 +41,7 @@ def train():
     # http://squarecog.wordpress.com/2009/01/10/dealing-with-underflow-in-joint-probability-calculations/
     for i, row in enumerate(counts):
         s = float(sum(row))
-        for j in xrange(len(row)):
+        for j in range(len(row)):
             row[j] = math.log(row[j] / s)

     # Find the probability of generating a few arbitrarily choosen good and
@@ -72,4 +72,4 @@ def avg_transition_prob(l, log_prob_mat):





Binary file added gib_model.pki
Binary file not shown.
Binary file added sample.png