{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"source": [
"___\n",
"\n",
"
\n",
"___"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Text Classification Project\n",
"Now we're at the point where we should be able to:\n",
"* Read in a collection of documents - a *corpus*\n",
"* Transform text into numerical vector data using a pipeline\n",
"* Create a classifier\n",
"* Fit/train the classifier\n",
"* Test the classifier on new data\n",
"* Evaluate performance\n",
"\n",
"For this project we'll use the Cornell University Movie Review polarity dataset v2.0 obtained from http://www.cs.cornell.edu/people/pabo/movie-review-data/\n",
"\n",
"In this exercise we'll try to develop a classification model as we did for the SMSSpamCollection dataset - that is, we'll try to predict the Positive/Negative labels based on text content alone. In an upcoming section we'll apply *Sentiment Analysis* to train models that have a deeper understanding of each review."
]
},
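{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before loading the real dataset, here is a minimal sketch of that end-to-end workflow on a made-up two-document corpus (the `corpus`, `labels` and `clf` names below are purely illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"from sklearn.svm import LinearSVC\n",
"\n",
"corpus = ['a wonderful, moving film', 'a dull and lifeless movie']  # the corpus\n",
"labels = ['pos', 'neg']\n",
"\n",
"clf = Pipeline([('tfidf', TfidfVectorizer()),  # transform text into numerical vectors\n",
"                ('clf', LinearSVC())])         # the classifier\n",
"clf.fit(corpus, labels)                        # fit/train the classifier\n",
"\n",
"# test the classifier on new data; performance evaluation comes later in the notebook\n",
"print(clf.predict(['a wonderful film']))       # ['pos']"
]
},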
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform imports and load the dataset\n",
"The dataset contains the text of 2000 movie reviews. 1000 are positive, 1000 are negative, and the text has been preprocessed as a tab-delimited file."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
| \n", " | label | \n", "review | \n", "
|---|---|---|
| 0 | \n", "neg | \n", "how do films like mouse hunt get into theatres... | \n", "
| 1 | \n", "neg | \n", "some talented actresses are blessed with a dem... | \n", "
| 2 | \n", "pos | \n", "this has been an extraordinary year for austra... | \n", "
| 3 | \n", "pos | \n", "according to hollywood movies made in last few... | \n", "
| 4 | \n", "neg | \n", "my first press screening of 1998 and already i... | \n", "
Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())])In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())])TfidfVectorizer()
MultinomialNB()
Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', LinearSVC())])In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', LinearSVC())])TfidfVectorizer()
LinearSVC()
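{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a minimal sketch of how these two pipelines can be built and fit, assuming the same cleaning and train/test split used later in this notebook (dropping NaN and whitespace-only reviews, `test_size=0.33`, `random_state=42`); the names `text_clf_nb` and `text_clf_lsvc` are illustrative. The second cell scores the Linear SVC pipeline on the test set so we have a baseline to compare against once stopwords are removed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split\n",
"from sklearn.pipeline import Pipeline\n",
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"from sklearn.naive_bayes import MultinomialNB\n",
"from sklearn.svm import LinearSVC\n",
"\n",
"# Remove NaN reviews and reviews that contain only whitespace\n",
"df.dropna(inplace=True)\n",
"blanks = []\n",
"for i, lb, rv in df.itertuples():\n",
"    if type(rv) == str and rv.isspace():\n",
"        blanks.append(i)\n",
"df.drop(blanks, inplace=True)\n",
"\n",
"# Split the data into train & test sets\n",
"X = df['review']\n",
"y = df['label']\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)\n",
"\n",
"# Naive Bayes pipeline\n",
"text_clf_nb = Pipeline([('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())])\n",
"text_clf_nb.fit(X_train, y_train)\n",
"\n",
"# Linear SVC pipeline\n",
"text_clf_lsvc = Pipeline([('tfidf', TfidfVectorizer()), ('clf', LinearSVC())])\n",
"text_clf_lsvc.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import metrics\n",
"\n",
"# Score the Linear SVC pipeline on the test set (the baseline without custom stopwords)\n",
"predictions = text_clf_lsvc.predict(X_test)\n",
"print(metrics.classification_report(y_test, predictions))\n",
"print(metrics.accuracy_score(y_test, predictions))"
]
},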
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add stopwords to the pipeline\n",
"By default, `TfidfVectorizer` does not filter out stop words, but it accepts an optional `stop_words` argument. scikit-learn provides a built-in list of English stop words:\n",
"\n",
"> from sklearn.feature_extraction import text\n",
"> print(text.ENGLISH_STOP_WORDS)\n",
"['a', 'about', 'above', 'across', 'after', 'afterwards', 'again', 'against', 'all', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', 'am', 'among', 'amongst', 'amoungst', 'amount', 'an', 'and', 'another', 'any', 'anyhow', 'anyone', 'anything', 'anyway', 'anywhere', 'are', 'around', 'as', 'at', 'back', 'be', 'became', 'because', 'become', 'becomes', 'becoming', 'been', 'before', 'beforehand', 'behind', 'being', 'below', 'beside', 'besides', 'between', 'beyond', 'bill', 'both', 'bottom', 'but', 'by', 'call', 'can', 'cannot', 'cant', 'co', 'con', 'could', 'couldnt', 'cry', 'de', 'describe', 'detail', 'do', 'done', 'down', 'due', 'during', 'each', 'eg', 'eight', 'either', 'eleven', 'else', 'elsewhere', 'empty', 'enough', 'etc', 'even', 'ever', 'every', 'everyone', 'everything', 'everywhere', 'except', 'few', 'fifteen', 'fifty', 'fill', 'find', 'fire', 'first', 'five', 'for', 'former', 'formerly', 'forty', 'found', 'four', 'from', 'front', 'full', 'further', 'get', 'give', 'go', 'had', 'has', 'hasnt', 'have', 'he', 'hence', 'her', 'here', 'hereafter', 'hereby', 'herein', 'hereupon', 'hers', 'herself', 'him', 'himself', 'his', 'how', 'however', 'hundred', 'i', 'ie', 'if', 'in', 'inc', 'indeed', 'interest', 'into', 'is', 'it', 'its', 'itself', 'keep', 'last', 'latter', 'latterly', 'least', 'less', 'ltd', 'made', 'many', 'may', 'me', 'meanwhile', 'might', 'mill', 'mine', 'more', 'moreover', 'most', 'mostly', 'move', 'much', 'must', 'my', 'myself', 'name', 'namely', 'neither', 'never', 'nevertheless', 'next', 'nine', 'no', 'nobody', 'none', 'noone', 'nor', 'not', 'nothing', 'now', 'nowhere', 'of', 'off', 'often', 'on', 'once', 'one', 'only', 'onto', 'or', 'other', 'others', 'otherwise', 'our', 'ours', 'ourselves', 'out', 'over', 'own', 'part', 'per', 'perhaps', 'please', 'put', 'rather', 're', 'same', 'see', 'seem', 'seemed', 'seeming', 'seems', 'serious', 'several', 'she', 'should', 'show', 'side', 'since', 'sincere', 'six', 'sixty', 'so', 'some', 'somehow', 'someone', 'something', 'sometime', 'sometimes', 'somewhere', 'still', 'such', 'system', 'take', 'ten', 'than', 'that', 'the', 'their', 'them', 'themselves', 'then', 'thence', 'there', 'thereafter', 'thereby', 'therefore', 'therein', 'thereupon', 'these', 'they', 'thick', 'thin', 'third', 'this', 'those', 'though', 'three', 'through', 'throughout', 'thru', 'thus', 'to', 'together', 'too', 'top', 'toward', 'towards', 'twelve', 'twenty', 'two', 'un', 'under', 'until', 'up', 'upon', 'us', 'very', 'via', 'was', 'we', 'well', 'were', 'what', 'whatever', 'when', 'whence', 'whenever', 'where', 'whereafter', 'whereas', 'whereby', 'wherein', 'whereupon', 'wherever', 'whether', 'which', 'while', 'whither', 'who', 'whoever', 'whole', 'whom', 'whose', 'why', 'will', 'with', 'within', 'without', 'would', 'yet', 'you', 'your', 'yours', 'yourself', 'yourselves']\n",
"\n",
"However, there are words in this list that may influence a classification of movie reviews. With this in mind, let's trim the list to just 60 words:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"stopwords = ['a', 'about', 'an', 'and', 'are', 'as', 'at', 'be', 'been', 'but', 'by', 'can', \\\n",
" 'even', 'ever', 'for', 'from', 'get', 'had', 'has', 'have', 'he', 'her', 'hers', 'his', \\\n",
" 'how', 'i', 'if', 'in', 'into', 'is', 'it', 'its', 'just', 'me', 'my', 'of', 'on', 'or', \\\n",
" 'see', 'seen', 'she', 'so', 'than', 'that', 'the', 'their', 'there', 'they', 'this', \\\n",
" 'to', 'was', 'we', 'were', 'what', 'when', 'which', 'who', 'will', 'with', 'you']"
]
},
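{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, here is a small illustrative example (the two sample sentences are made up) showing that `TfidfVectorizer` excludes these words from its vocabulary when they are passed in through `stop_words`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"\n",
"# Fit a vectorizer that filters the trimmed stopword list\n",
"vect = TfidfVectorizer(stop_words=stopwords)\n",
"vect.fit(['the movie was good and i have seen it',\n",
"          'she will see a film'])\n",
"\n",
"# Only the non-stopword terms survive: ['film', 'good', 'movie']\n",
"print(sorted(vect.vocabulary_))"
]
},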
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's repeat the process above and see if the removal of stopwords improves or impairs our score."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# YOU DO NOT NEED TO RUN THIS CELL UNLESS YOU HAVE\n",
"# RECENTLY OPENED THIS NOTEBOOK OR RESTARTED THE KERNEL:\n",
"\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"df = pd.read_csv('../TextFiles/moviereviews.tsv', sep='\\t')\n",
"df.dropna(inplace=True)\n",
"blanks = []\n",
"for i,lb,rv in df.itertuples():\n",
" if type(rv)==str:\n",
" if rv.isspace():\n",
" blanks.append(i)\n",
"df.drop(blanks, inplace=True)\n",
"from sklearn.model_selection import train_test_split\n",
"X = df['review']\n",
"y = df['label']\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)\n",
"\n",
"from sklearn.pipeline import Pipeline\n",
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"from sklearn.naive_bayes import MultinomialNB\n",
"from sklearn.svm import LinearSVC\n",
"from sklearn import metrics"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Pipeline(steps=[('tfidf',\n",
" TfidfVectorizer(stop_words=['a', 'about', 'an', 'and', 'are',\n",
" 'as', 'at', 'be', 'been', 'but',\n",
" 'by', 'can', 'even', 'ever', 'for',\n",
" 'from', 'get', 'had', 'has',\n",
" 'have', 'he', 'her', 'hers', 'his',\n",
" 'how', 'i', 'if', 'in', 'into',\n",
" 'is', ...])),\n",
" ('clf', LinearSVC())])In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. Pipeline(steps=[('tfidf',\n",
" TfidfVectorizer(stop_words=['a', 'about', 'an', 'and', 'are',\n",
" 'as', 'at', 'be', 'been', 'but',\n",
" 'by', 'can', 'even', 'ever', 'for',\n",
" 'from', 'get', 'had', 'has',\n",
" 'have', 'he', 'her', 'hers', 'his',\n",
" 'how', 'i', 'if', 'in', 'into',\n",
" 'is', ...])),\n",
" ('clf', LinearSVC())])TfidfVectorizer(stop_words=['a', 'about', 'an', 'and', 'are', 'as', 'at', 'be',\n",
" 'been', 'but', 'by', 'can', 'even', 'ever', 'for',\n",
" 'from', 'get', 'had', 'has', 'have', 'he', 'her',\n",
" 'hers', 'his', 'how', 'i', 'if', 'in', 'into', 'is', ...])LinearSVC()
Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', LinearSVC())])In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. Pipeline(steps=[('tfidf', TfidfVectorizer()), ('clf', LinearSVC())])TfidfVectorizer()
LinearSVC()
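{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, a minimal sketch of how to compare scores, assuming the `text_clf_lsvc2` pipeline fit above and the train/test split from the previous cells:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Score the stopword-filtering pipeline on the test set and compare\n",
"# against the earlier Linear SVC baseline\n",
"predictions = text_clf_lsvc2.predict(X_test)\n",
"\n",
"print(metrics.confusion_matrix(y_test, predictions))\n",
"print(metrics.classification_report(y_test, predictions))\n",
"print(metrics.accuracy_score(y_test, predictions))"
]
}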