{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"___\n",
"\n",
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n",
"___"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This unit is divided into two sections:\n",
"* First, we'll find out what what is necessary to build an NLP system that can turn a body of text into a numerical array of *features*.\n",
"* Next we'll show how to perform these steps using real tools."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building a Natural Language Processor From Scratch\n",
"In this section we'll use basic Python to build a rudimentary NLP system. We'll build a *corpus of documents* (two small text files), create a *vocabulary* from all the words in both documents, and then demonstrate a *Bag of Words* technique to extract features from each document.<br>\n",
"<div class=\"alert alert-info\" style=\"margin: 20px\">**This first section is for illustration only!**\n",
"<br>Don't bother memorizing the code - we'd never do this in real life.</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start with some documents:\n",
"For simplicity we won't use any punctuation."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting 1.txt\n"
]
}
],
"source": [
"%%writefile 1.txt\n",
"This is a story about cats\n",
"our feline pets\n",
"Cats are furry animals"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting 2.txt\n"
]
}
],
"source": [
"%%writefile 2.txt\n",
"This story is about surfing\n",
"Catching waves is fun\n",
"Surfing is a popular water sport"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build a vocabulary\n",
"The goal here is to build a numerical array from all the words that appear in every document. Later we'll create instances (vectors) for each individual document."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'this': 1, 'is': 2, 'a': 3, 'story': 4, 'about': 5, 'cats': 6, 'our': 7, 'feline': 8, 'pets': 9, 'are': 10, 'furry': 11, 'animals': 12}\n"
]
}
],
"source": [
"vocab = {}\n",
"i = 1\n",
"\n",
"with open('1.txt') as f:\n",
" x = f.read().lower().split()\n",
"\n",
"for word in x:\n",
" if word in vocab:\n",
" continue\n",
" else:\n",
" vocab[word]=i\n",
" i+=1\n",
"\n",
"print(vocab)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'this': 1, 'is': 2, 'a': 3, 'story': 4, 'about': 5, 'cats': 6, 'our': 7, 'feline': 8, 'pets': 9, 'are': 10, 'furry': 11, 'animals': 12, 'surfing': 13, 'catching': 14, 'waves': 15, 'fun': 16, 'popular': 17, 'water': 18, 'sport': 19}\n"
]
}
],
"source": [
"with open('2.txt') as f:\n",
" x = f.read().lower().split()\n",
"\n",
"for word in x:\n",
" if word in vocab:\n",
" continue\n",
" else:\n",
" vocab[word]=i\n",
" i+=1\n",
"\n",
"print(vocab)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even though `2.txt` has 15 words, only 7 new words were added to the dictionary.\n",
"\n",
"## Feature Extraction\n",
"Now that we've encapsulated our \"entire language\" in a dictionary, let's perform *feature extraction* on each of our original documents:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['1.txt', 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create an empty vector with space for each word in the vocabulary:\n",
"one = ['1.txt']+[0]*len(vocab)\n",
"one"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['1.txt', 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# map the frequencies of each word in 1.txt to our vector:\n",
"with open('1.txt') as f:\n",
" x = f.read().lower().split()\n",
" \n",
"for word in x:\n",
" one[vocab[word]]+=1\n",
" \n",
"one"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=green>We can see that most of the words in 1.txt appear only once, although \"cats\" appears twice.</font>"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# Do the same for the second document:\n",
"two = ['2.txt']+[0]*len(vocab)\n",
"\n",
"with open('2.txt') as f:\n",
" x = f.read().lower().split()\n",
" \n",
"for word in x:\n",
" two[vocab[word]]+=1"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['1.txt', 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]\n",
"['2.txt', 1, 3, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1]\n"
]
}
],
"source": [
"# Compare the two vectors:\n",
"print(f'{one}\\n{two}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By comparing the vectors we see that some words are common to both, some appear only in `1.txt`, others only in `2.txt`. Extending this logic to tens of thousands of documents, we would see the vocabulary dictionary grow to hundreds of thousands of words. Vectors would contain mostly zero values, making them *sparse matrices*."
]
},
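{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make \"sparse\" concrete, here is a small optional sketch (assuming SciPy is installed) that stores the two count vectors above in a compressed sparse row matrix, which keeps only the non-zero entries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy.sparse import csr_matrix\n",
"\n",
"# Drop the filename labels and keep only the numeric counts:\n",
"dense = [one[1:], two[1:]]\n",
"sparse = csr_matrix(dense)\n",
"\n",
"print(sparse.shape)  # two documents, nineteen vocabulary terms\n",
"print(sparse.nnz)    # number of non-zero entries actually stored"
]
},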
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Bag of Words and Tf-idf\n",
"In the above examples, each vector can be considered a *bag of words*. By itself these may not be helpful until we consider *term frequencies*, or how often individual words appear in documents. A simple way to calculate term frequencies is to divide the number of occurrences of a word by the total number of words in the document. In this way, the number of times a word appears in large documents can be compared to that of smaller documents.\n",
"\n",
"However, it may be hard to differentiate documents based on term frequency if a word shows up in a majority of documents. To handle this we also consider *inverse document frequency*, which is the total number of documents divided by the number of documents that contain the word. In practice we convert this value to a logarithmic scale, as described [here](https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Inverse_document_frequency).\n",
"\n",
"Together these terms become [**tf-idf**](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)."
]
},
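{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional, hand-rolled sketch of these definitions (real implementations add smoothing and normalization, so treat the numbers as illustrative only), we can compute tf and a log-scaled idf for a couple of words from the toy documents above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"def tf(count_in_doc, words_in_doc):\n",
"    # Term frequency: occurrences of the word divided by the document's word count\n",
"    return count_in_doc / words_in_doc\n",
"\n",
"def idf(num_docs, docs_containing_word):\n",
"    # Inverse document frequency (basic log form, no smoothing)\n",
"    return math.log(num_docs / docs_containing_word)\n",
"\n",
"# 'cats' appears twice in 1.txt (13 words total) and in 1 of our 2 documents:\n",
"print(f\"tf-idf for 'cats' in 1.txt: {tf(2, 13) * idf(2, 1):.3f}\")\n",
"\n",
"# 'is' appears in both documents, so its idf - and therefore its tf-idf - is 0:\n",
"print(f\"tf-idf for 'is' in 1.txt:   {tf(1, 13) * idf(2, 2):.3f}\")"
]
},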
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stop Words and Word Stems\n",
"Some words like \"the\" and \"and\" appear so frequently, and in so many documents, that we needn't bother counting them. Also, it may make sense to only record the root of a word, say `cat` in place of both `cat` and `cats`. This will shrink our vocab array and improve performance."
]
},
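{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal, purely illustrative sketch of both ideas in plain Python is shown below: a tiny hand-picked stop word list and a naive suffix-stripping \"stemmer\". Real systems use curated stop word lists and proper stemmers or lemmatizers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A tiny hand-picked stop word list (real lists are far longer):\n",
"stop_words = {'this', 'is', 'a', 'about', 'are', 'our'}\n",
"\n",
"def crude_stem(word):\n",
"    # Naive suffix stripping for illustration only - a real stemmer is much more careful\n",
"    return word[:-1] if word.endswith('s') else word\n",
"\n",
"with open('1.txt') as f:\n",
"    words = f.read().lower().split()\n",
"\n",
"# Stop words are dropped, and both 'cats' occurrences collapse to 'cat':\n",
"print([crude_stem(w) for w in words if w not in stop_words])"
]
},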
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tokenization and Tagging\n",
"When we created our vectors the first thing we did was split the incoming text on whitespace with `.split()`. This was a crude form of *tokenization* - that is, dividing a document into individual words. In this simple example we didn't worry about punctuation or different parts of speech. In the real world we rely on some fairly sophisticated *morphology* to parse text appropriately.\n",
"\n",
"Once the text is divided, we can go back and *tag* our tokens with information about parts of speech, grammatical dependencies, etc. This adds more dimensions to our data and enables a deeper understanding of the context of specific documents. For this reason, vectors become ***high dimensional sparse matrices***."
]
},
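{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick optional illustration of why `.split()` is crude, the sketch below compares a whitespace split against a simple regex tokenizer on a sentence with punctuation. Proper tokenization and part-of-speech tagging are handled by dedicated NLP libraries such as spaCy or NLTK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"text = \"This is a story about cats, our feline pets!\"\n",
"\n",
"# Whitespace split leaves punctuation attached to the neighboring tokens:\n",
"print(text.lower().split())\n",
"\n",
"# A regex tokenizer pulls out runs of word characters, dropping the punctuation:\n",
"print(re.findall(r'\\\\w+', text.lower()))"
]
},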
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-info\" style=\"margin: 20px\">**That's the end of the first section.**\n",
"<br>In the next section we'll use scikit-learn to perform a real-life analysis.</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"___\n",
"# Feature Extraction from Text\n",
"In the **Scikit-learn Primer** lecture we applied a simple SVC classification model to the SMSSpamCollection dataset. We tried to predict the ham/spam label based on message length and punctuation counts. In this section we'll actually look at the text of each message and try to perform a classification based on content. We'll take advantage of some of scikit-learn's [feature extraction](https://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction) tools.\n",
"\n",
"## Load a dataset"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style>\n",
" .dataframe thead tr:only-child th {\n",
" text-align: right;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: left;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>message</th>\n",
" <th>length</th>\n",
" <th>punct</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>ham</td>\n",
" <td>Go until jurong point, crazy.. Available only ...</td>\n",
" <td>111</td>\n",
" <td>9</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>ham</td>\n",
" <td>Ok lar... Joking wif u oni...</td>\n",
" <td>29</td>\n",
" <td>6</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>spam</td>\n",
" <td>Free entry in 2 a wkly comp to win FA Cup fina...</td>\n",
" <td>155</td>\n",
" <td>6</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>ham</td>\n",
" <td>U dun say so early hor... U c already then say...</td>\n",
" <td>49</td>\n",
" <td>6</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>ham</td>\n",
" <td>Nah I don't think he goes to usf, he lives aro...</td>\n",
" <td>61</td>\n",
" <td>2</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" label message length punct\n",
"0 ham Go until jurong point, crazy.. Available only ... 111 9\n",
"1 ham Ok lar... Joking wif u oni... 29 6\n",
"2 spam Free entry in 2 a wkly comp to win FA Cup fina... 155 6\n",
"3 ham U dun say so early hor... U c already then say... 49 6\n",
"4 ham Nah I don't think he goes to usf, he lives aro... 61 2"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Perform imports and load the dataset:\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\\t')\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check for missing values:\n",
"Always a good practice."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"label 0\n",
"message 0\n",
"length 0\n",
"punct 0\n",
"dtype: int64"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df.isnull().sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Take a quick look at the *ham* and *spam* `label` column:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ham 4825\n",
"spam 747\n",
"Name: label, dtype: int64"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df['label'].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=green>4825 out of 5572 messages, or 86.6%, are ham. This means that any text classification model we create has to perform **better than 86.6%** to beat random chance.</font>"
]
},
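{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a small optional sketch), we can compute that majority-class baseline directly - the accuracy of always predicting 'ham':"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Accuracy of a classifier that always predicts the majority class ('ham'):\n",
"baseline = df['label'].value_counts(normalize=True).max()\n",
"print(f'Majority-class baseline accuracy: {baseline:.3%}')"
]
},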
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Split the data into train & test sets:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split\n",
"\n",
"X = df['message'] # this time we want to look at the text\n",
"y = df['label']\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scikit-learn's CountVectorizer\n",
"Text preprocessing, tokenizing and the ability to filter out stopwords are all included in [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), which builds a dictionary of features and transforms documents to feature vectors."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(3733, 7082)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.feature_extraction.text import CountVectorizer\n",
"count_vect = CountVectorizer()\n",
"\n",
"X_train_counts = count_vect.fit_transform(X_train)\n",
"X_train_counts.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=green>This shows that our training set is comprised of 3733 documents, and 7082 features.</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transform Counts to Frequencies with Tf-idf\n",
"While counting words is helpful, longer documents will have higher average count values than shorter documents, even though they might talk about the same topics.\n",
"\n",
"To avoid this we can simply divide the number of occurrences of each word in a document by the total number of words in the document: these new features are called **tf** for Term Frequencies.\n",
"\n",
"Another refinement on top of **tf** is to downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus.\n",
"\n",
"This downscaling is called **tfidf** for “Term Frequency times Inverse Document Frequency”.\n",
"\n",
"Both tf and tfidf can be computed as follows using [TfidfTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html):"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(3733, 7082)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.feature_extraction.text import TfidfTransformer\n",
"tfidf_transformer = TfidfTransformer()\n",
"\n",
"X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)\n",
"X_train_tfidf.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: the `fit_transform()` method actually performs two operations: it fits an estimator to the data and then transforms our count-matrix to a tf-idf representation."
]
},
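{
"cell_type": "markdown",
"metadata": {},
"source": [
"For clarity, here is an optional sketch showing that (with a fresh transformer) the one-liner above is equivalent to calling `fit()` and then `transform()` separately:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# fit() learns the idf weights from the count matrix; transform() applies them:\n",
"two_step = TfidfTransformer()\n",
"two_step.fit(X_train_counts)\n",
"X_train_tfidf_alt = two_step.transform(X_train_counts)\n",
"\n",
"# Same shape (and values) as the fit_transform() result above:\n",
"X_train_tfidf_alt.shape"
]
},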
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Combine Steps with TfidVectorizer\n",
"In the future, we can combine the CountVectorizer and TfidTransformer steps into one using [TfidVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html):"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(3733, 7082)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"vectorizer = TfidfVectorizer()\n",
"\n",
"X_train_tfidf = vectorizer.fit_transform(X_train) # remember to use the original X_train set\n",
"X_train_tfidf.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train a Classifier\n",
"Here we'll introduce an SVM classifier that's similar to SVC, called [LinearSVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html). LinearSVC handles sparse input better, and scales well to large numbers of samples."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,\n",
" intercept_scaling=1, loss='squared_hinge', max_iter=1000,\n",
" multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,\n",
" verbose=0)"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.svm import LinearSVC\n",
"clf = LinearSVC()\n",
"clf.fit(X_train_tfidf,y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color=green>Earlier we named our SVC classifier **svc_model**. Here we're using the more generic name **clf** (for classifier).</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build a Pipeline\n",
"Remember that only our training set has been vectorized into a full vocabulary. In order to perform an analysis on our test set we'll have to submit it to the same procedures. Fortunately scikit-learn offers a [**Pipeline**](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) class that behaves like a compound classifier."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Pipeline(memory=None,\n",
" steps=[('tfidf', TfidfVectorizer(analyzer='word', binary=False, decode_error='strict',\n",
" dtype=<class 'numpy.float64'>, encoding='utf-8', input='content',\n",
" lowercase=True, max_df=1.0, max_features=None, min_df=1,\n",
" ngram_range=(1, 1), norm='l2', preprocessor=None, smooth_idf=True,...ax_iter=1000,\n",
" multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,\n",
" verbose=0))])"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.pipeline import Pipeline\n",
"# from sklearn.feature_extraction.text import TfidfVectorizer\n",
"# from sklearn.svm import LinearSVC\n",
"\n",
"text_clf = Pipeline([('tfidf', TfidfVectorizer()),\n",
" ('clf', LinearSVC()),\n",
"])\n",
"\n",
"# Feed the training data through the pipeline\n",
"text_clf.fit(X_train, y_train) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test the classifier and display results"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"# Form a prediction set\n",
"predictions = text_clf.predict(X_test)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[1586 7]\n",
" [ 12 234]]\n"
]
}
],
"source": [
"# Report the confusion matrix\n",
"from sklearn import metrics\n",
"print(metrics.confusion_matrix(y_test,predictions))"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" precision recall f1-score support\n",
"\n",
" ham 0.99 1.00 0.99 1593\n",
" spam 0.97 0.95 0.96 246\n",
"\n",
" micro avg 0.99 0.99 0.99 1839\n",
" macro avg 0.98 0.97 0.98 1839\n",
"weighted avg 0.99 0.99 0.99 1839\n",
"\n"
]
}
],
"source": [
"# Print a classification report\n",
"print(metrics.classification_report(y_test,predictions))"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.989668297988\n"
]
}
],
"source": [
"# Print the overall accuracy\n",
"print(metrics.accuracy_score(y_test,predictions))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using the text of the messages, our model performed exceedingly well; it correctly predicted spam **98.97%** of the time!<br>\n",
"Now let's apply what we've learned to a text classification project involving positive and negative movie reviews.\n",
"\n",
"## Next up: Text Classification Project"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}