How To Detect Spam With Naive Bayes And Test Using K-Fold Cross Validation

In order to classify an email as "spam" or "not spam", we're going to train a classifier using sklearn.naive_bayes. Then we're going to test our classifier using "K-Fold Cross Validation".

Let's start out by loading email messages into a pandas dataframe, with each message classified as either "spam" or "not-spam".

In [55]:
import os
import io
import numpy
from pandas import DataFrame
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

import warnings
warnings.filterwarnings('ignore')

def readFiles(path):
    for root, dirnames, filenames in os.walk(path):
        for filename in filenames:
            path = os.path.join(root, filename)

            inBody = False
            lines = []
            f = io.open(path, 'r', encoding='latin1')
            for line in f:
                if inBody:
                    lines.append(line)
                elif line == '\n':
                    inBody = True
            f.close()
            message = '\n'.join(lines)
            yield path, message


def dataFrameFromDirectory(path, classification):
    rows = []
    index = []
    for filename, message in readFiles(path):
        rows.append({'message': message, 'class': classification})
        index.append(filename)

    return DataFrame(rows, index=index)

data = DataFrame({'message': [], 'class': []})

data = data.append(dataFrameFromDirectory('email-messages/spam', 'spam'))
data = data.append(dataFrameFromDirectory('email-messages/not-spam', 'not-spam'))

What does the dataframe look like?

In [56]:
data.head()
Out[56]:
class message
email-messages/spam\00001.7848dde101aa985090474a91ec93fcf0 spam <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Tr...
email-messages/spam\00002.d94f1b97e48ed3b553b3508d116e6a09 spam 1) Fight The Risk of Cancer!\n\nhttp://www.adc...
email-messages/spam\00003.2ee33bc6eacdb11f38d052c44819ba6c spam 1) Fight The Risk of Cancer!\n\nhttp://www.adc...
email-messages/spam\00004.eac8de8d759b7e74154f142194282724 spam ##############################################...
email-messages/spam\00005.57696a39d7d84318ce497886896bf90d spam I thought you might like these:\n\n1) Slim Dow...

Our next step is to use CountVectorizer to split up each email message into a list of words and their counts.

In [57]:
vectorizer = CountVectorizer()
messages = data['message'].values
counts = vectorizer.fit_transform(messages)

Next we'll create a "Multinomial Naive Bayes" classifier and fit it with the message word counts and target values (class = 'spam' or 'not-spam').

In [58]:
classifier = MultinomialNB()
classes = data['class'].values
classifier.fit(counts, classes)
Out[58]:
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)

Now for the moment of truth...

Let's test our spam detection classifier using two clear examples.

In [59]:
test_msgs = ['$$$ Free Cash From Nigerian Prince!', "Hey, what did you think of last night's episode of 'Friends'?"]
test_msg_counts = vectorizer.transform(test_msgs)

classifications = classifier.predict(test_msg_counts)
classifications
Out[59]:
array(['spam', 'not-spam'], dtype='<U8')

Our spam classifier appears to be working! :)

Now let's use K-fold cross validation to objectively measure the accuracy of the classifier.

In [60]:
from sklearn.model_selection import cross_val_score

scores = cross_val_score(classifier, counts, classes, cv=5)

# Print the accuracy of each fold:
print(scores)

# Print the mean accuracy of all 5 folds
print(scores.mean())
[0.96333333 0.97166667 0.94833333 0.94166667 0.94666667]
0.9543333333333333