{"id":295,"date":"2013-11-04T13:01:27","date_gmt":"2013-11-04T18:01:27","guid":{"rendered":"http:\/\/homepages.uc.edu\/~yaozo\/wordpress\/?p=295"},"modified":"2013-11-04T13:01:27","modified_gmt":"2013-11-04T18:01:27","slug":"text-mining-the-complete-works-of-william-shakespeare","status":"publish","type":"post","link":"https:\/\/zhuoyao.net\/index.php\/2013\/11\/04\/text-mining-the-complete-works-of-william-shakespeare\/","title":{"rendered":"Text Mining the Complete Works of William Shakespeare"},"content":{"rendered":"<p>(This article was first published on\u00a0<strong><a href=\"http:\/\/www.exegetic.biz\/blog\/2013\/09\/text-mining-the-complete-works-of-william-shakespeare\/\">Exegetic Analytics \u00bb R<\/a><\/strong>, and kindly contributed to\u00a0<a href=\"http:\/\/www.r-bloggers.com\/\" rel=\"nofollow\">R-bloggers)<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>I am starting a new project that will require some serious text mining. So, in the interests of bringing myself up to speed on the tm package, I thought I would apply it to the Complete Works of William Shakespeare and just see what falls out.<\/p>\n<p>The first order of business was getting my hands on all that text. Fortunately it is available from a number of sources. 
I chose to use\u00a0<a href=\"http:\/\/www.gutenberg.org\/\" target=\"_blank\" rel=\"noopener\">Project Gutenberg<\/a>.<\/p>\n<div>\n<div id=\"highlighter_327431\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; TEXTFILE = <\/code><code>\"data\/pg100.txt\"<\/code><\/div>\n<div><code>&gt; <\/code><code>if <\/code><code>(!<\/code><code>file.exists<\/code><code>(TEXTFILE)) {<\/code><\/div>\n<div><code>+\u00a0\u00a0\u00a0\u00a0 <\/code><code>download.file<\/code><code>(<\/code><code>\"<a href=\"http:\/\/www.gutenberg.org\/cache\/epub\/100\/pg100.txt\">http:\/\/www.gutenberg.org\/cache\/epub\/100\/pg100.txt<\/a>\"<\/code><code>, destfile = TEXTFILE)<\/code><\/div>\n<div><code>+ }<\/code><\/div>\n<div><code>&gt; shakespeare = <\/code><code>readLines<\/code><code>(TEXTFILE)<\/code><\/div>\n<div><code>&gt; <\/code><code>length<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] 124787<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>That\u2019s quite a solid chunk of data: 124787 lines. 
Let\u2019s take a closer look.<\/p>\n<div>\n<div id=\"highlighter_583782\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>head<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] <\/code><code>\"The Project Gutenberg EBook of The Complete Works of William Shakespeare, by\"<\/code><\/div>\n<div><code>[2] <\/code><code>\"William Shakespeare\"<\/code><\/div>\n<div><code>[3] <\/code><code>\"\"<\/code><\/div>\n<div><code>[4] <\/code><code>\"This eBook is for the use of anyone anywhere at no cost and with\"<\/code><\/div>\n<div><code>[5] <\/code><code>\"almost no restrictions whatsoever.\u00a0 You may copy it, give it away or\"<\/code><\/div>\n<div><code>[6] <\/code><code>\"re-use it under the terms of the Project Gutenberg License included\"<\/code><\/div>\n<div><code>&gt; <\/code><code>tail<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] <\/code><code>\"<a href=\"http:\/\/www.gutenberg.org\/2\/4\/6\/8\/24689\">http:\/\/www.gutenberg.org\/2\/4\/6\/8\/24689<\/a>\"<\/code>\u00a0\u00a0\u00a0 <code>\"\"<\/code><\/div>\n<div><code>[3] <\/code><code>\"An alternative method of locating eBooks:\"<\/code> <code>\"<a href=\"http:\/\/www.gutenberg.org\/GUTINDEX.ALL\">http:\/\/www.gutenberg.org\/GUTINDEX.ALL<\/a>\"<\/code><\/div>\n<div><code>[5] <\/code><code>\"\"<\/code>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <code>\"*** END: FULL LICENSE ***\"<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>There seems to be some header and footer text. We will want to get rid of that! 
Using a text editor I checked to see how many lines were occupied with metadata and then removed them before concatenating all of the lines into a single long, long, long string.<\/p>\n<div>\n<div id=\"highlighter_67529\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; shakespeare = shakespeare[-(1:173)]<\/code><\/div>\n<div><code>&gt; shakespeare = shakespeare[-(124195:<\/code><code>length<\/code><code>(shakespeare))]<\/code><\/div>\n<div><code>&gt;<\/code><\/div>\n<div><code>&gt; shakespeare = <\/code><code>paste<\/code><code>(shakespeare, collapse = <\/code><code>\" \"<\/code><code>)<\/code><\/div>\n<div><code>&gt; <\/code><code>nchar<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] 5436541<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>While I had the text open in the editor I noticed that sections in the document were separated by the following text:<\/p>\n<div>\n<div id=\"highlighter_982430\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&lt;&lt;THIS ELECTRONIC VERSION OF THE COMPLETE WORKS OF WILLIAM<\/code><\/div>\n<div><code>SHAKESPEARE IS COPYRIGHT 1990-1993 BY WORLD LIBRARY, INC., AND IS<\/code><\/div>\n<div><code>PROVIDED BY PROJECT GUTENBERG ETEXT OF ILLINOIS BENEDICTINE COLLEGE<\/code><\/div>\n<div><code>WITH PERMISSION.\u00a0 ELECTRONIC AND MACHINE READABLE COPIES MAY BE<\/code><\/div>\n<div><code>DISTRIBUTED SO LONG AS SUCH COPIES (1) ARE FOR YOUR OR OTHERS<\/code><\/div>\n<div><code>PERSONAL USE ONLY, AND (2) ARE NOT DISTRIBUTED OR USED<\/code><\/div>\n<div><code>COMMERCIALLY.\u00a0 PROHIBITED COMMERCIAL DISTRIBUTION INCLUDES BY ANY<\/code><\/div>\n<div><code>SERVICE THAT CHARGES FOR DOWNLOAD TIME OR FOR MEMBERSHIP.&gt;&gt;<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Obviously that is going to taint the analysis. 
But it also serves as a convenient marker to divide that long, long, long string into separate documents.<\/p>\n<div>\n<div id=\"highlighter_63780\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; shakespeare = <\/code><code>strsplit<\/code><code>(shakespeare, <\/code><code>\"&lt;&lt;[^&gt;]*&gt;&gt;\"<\/code><code>)[[1]]<\/code><\/div>\n<div><code>&gt; <\/code><code>length<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] 218<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>This left me with a list of 218 documents. On further inspection, some of them appeared to be a little on the short side (in my limited experience, the bard is not known for brevity). As it turns out, the short documents were the\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Dramatis_person%C3%A6\" target=\"_blank\" rel=\"noopener\">dramatis personae<\/a>\u00a0for his plays. I removed them as well.<\/p>\n<div>\n<div id=\"highlighter_642385\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; (dramatis.personae &lt;- <\/code><code>grep<\/code><code>(<\/code><code>\"Dramatis Personae\"<\/code><code>, shakespeare, ignore.case = <\/code><code>TRUE<\/code><code>))<\/code><\/div>\n<div><code>\u00a0<\/code><code>[1]\u00a0\u00a0 2\u00a0\u00a0 8\u00a0 11\u00a0 17\u00a0 23\u00a0 28\u00a0 33\u00a0 43\u00a0 49\u00a0 55\u00a0 62\u00a0 68\u00a0 74\u00a0 81\u00a0 87\u00a0 93\u00a0 99 105 111 117 122 126 134 140 146 152 158<\/code><\/div>\n<div><code>[28] 164 170 176 182 188 194 200 206 212<\/code><\/div>\n<div><code>&gt; <\/code><code>length<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] 218<\/code><\/div>\n<div><code>&gt; shakespeare = shakespeare[-dramatis.personae]<\/code><\/div>\n<div><code>&gt; <\/code><code>length<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>[1] 
182<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Down to 182 documents, each of which is a complete work.<\/p>\n<p>The next task was to convert these documents into a\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Text_corpus\" target=\"_blank\" rel=\"noopener\">corpus<\/a>.<\/p>\n<div>\n<div id=\"highlighter_816960\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>library<\/code><code>(tm)<\/code><\/div>\n<div><code>&gt; <\/code><\/div>\n<div><code>&gt; doc.vec &lt;- <\/code><code>VectorSource<\/code><code>(shakespeare)<\/code><\/div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>Corpus<\/code><code>(doc.vec)<\/code><\/div>\n<div><code>&gt; <\/code><code>summary<\/code><code>(doc.corpus)<\/code><\/div>\n<div><code>A corpus with 182 text documents<\/code><\/div>\n<div><\/div>\n<div><code>The metadata consists of 2 tag-value pairs and a data frame<\/code><\/div>\n<div><code>Available tags are:<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>create_date creator <\/code><\/div>\n<div><code>Available variables <\/code><code>in<\/code> <code>the data frame are:<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>MetaID<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>There is a lot of information in those documents which is not particularly useful for text mining. So before proceeding any further, we will clean things up a bit. First we convert all of the text to lowercase and then remove punctuation, numbers and common English stopwords. 
Possibly the list of English stop words is not entirely appropriate for Shakespearean English, but it is a reasonable starting point.<\/p>\n<div>\n<div id=\"highlighter_588513\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>tm_map<\/code><code>(doc.corpus, tolower)<\/code><\/div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>tm_map<\/code><code>(doc.corpus, removePunctuation)<\/code><\/div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>tm_map<\/code><code>(doc.corpus, removeNumbers)<\/code><\/div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>tm_map<\/code><code>(doc.corpus, removeWords, <\/code><code>stopwords<\/code><code>(<\/code><code>\"english\"<\/code><code>))<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Next we perform\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Word_stem\" target=\"_blank\" rel=\"noopener\">stemming<\/a>, which removes affixes from words (so, for example, \u201crun\u201d, \u201cruns\u201d and \u201crunning\u201d all become \u201crun\u201d).<\/p>\n<div>\n<div id=\"highlighter_360550\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>library<\/code><code>(SnowballC)<\/code><\/div>\n<div><code>&gt;<\/code><\/div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>tm_map<\/code><code>(doc.corpus, stemDocument)<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>All of these transformations have resulted in a lot of whitespace, which is then removed.<\/p>\n<div>\n<div id=\"highlighter_964109\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; doc.corpus &lt;- <\/code><code>tm_map<\/code><code>(doc.corpus, stripWhitespace)<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>If we have a look at what\u2019s left, we find that 
it\u2019s just the lowercase, stripped down version of the text (which I have truncated here).<\/p>\n<div>\n<div id=\"highlighter_580999\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>inspect<\/code><code>(doc.corpus[8])<\/code><\/div>\n<div><code>A corpus with 1 text document<\/code><\/div>\n<div><\/div>\n<div><code>The metadata consists of 2 tag-value pairs and a data frame<\/code><\/div>\n<div><code>Available tags are:<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>create_date creator <\/code><\/div>\n<div><code>Available variables <\/code><code>in<\/code> <code>the data frame are:<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>MetaID <\/code><\/div>\n<div><\/div>\n<div><code>[[1]]<\/code><\/div>\n<div><code>\u00a0<\/code><code>act ii scene messina pompey hous enter pompey menecr mena warlik manner pompey great god just shall<\/code><\/div>\n<div><code>\u00a0<\/code><code>assist deed justest men menecr know worthi pompey delay deni pompey <\/code><code>while<\/code> <code>suitor throne decay thing<\/code><\/div>\n<div><code>\u00a0<\/code><code>sue menecr ignor beg often harm wise powr deni us good find profit lose prayer pompey shall well<\/code><\/div>\n<div><code>\u00a0<\/code><code>peopl love sea mine power crescent augur hope say will come th full mark antoni egypt sit dinner<\/code><\/div>\n<div><code>\u00a0<\/code><code>will make war without door caesar get money lose heart lepidus flatter flatterd neither love either<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>This is where things start to get interesting. 
Next we create a\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Document-term_matrix\" target=\"_blank\" rel=\"noopener\">Term Document Matrix<\/a>\u00a0(TDM) which reflects the number of times each word in the corpus is found in each of the documents.<\/p>\n<div>\n<div id=\"highlighter_80859\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; TDM &lt;- <\/code><code>TermDocumentMatrix<\/code><code>(doc.corpus)<\/code><\/div>\n<div><code>&gt; TDM<\/code><\/div>\n<div><code>A term-document <\/code><code>matrix <\/code><code>(18651 terms, 182 documents)<\/code><\/div>\n<div><\/div>\n<div><code>Non-\/sparse entries: 182898\/3211584<\/code><\/div>\n<div><code>Sparsity\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : 95%<\/code><\/div>\n<div><code>Maximal term length: 31 <\/code><\/div>\n<div><code>Weighting\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : term <\/code><code>frequency <\/code><code>(tf)<\/code><\/div>\n<div><code>&gt; <\/code><code>inspect<\/code><code>(TDM[1:10,1:10])<\/code><\/div>\n<div><code>A term-document <\/code><code>matrix <\/code><code>(10 terms, 10 documents)<\/code><\/div>\n<div><\/div>\n<div><code>Non-\/sparse entries: 1\/99<\/code><\/div>\n<div><code>Sparsity\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : 99%<\/code><\/div>\n<div><code>Maximal term length: 9 <\/code><\/div>\n<div><code>Weighting\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : term <\/code><code>frequency <\/code><code>(tf)<\/code><\/div>\n<div><\/div>\n<div><code>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<\/code><code>Docs<\/code><\/div>\n<div><code>Terms\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1 2 3 4 5 6 7 8 9 10<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>aaron\u00a0\u00a0\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abaissiez 0 0 0 0 0 0 0 0 0\u00a0 
0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abandon\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abandond\u00a0 0 1 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abas\u00a0\u00a0\u00a0\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abashd\u00a0\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abat\u00a0\u00a0\u00a0\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abatfowl\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abbess\u00a0\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>abbey\u00a0\u00a0\u00a0\u00a0 0 0 0 0 0 0 0 0 0\u00a0 0<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>The extract from the TDM shows, for example, that the word \u201cabandond\u201d occurred once in document number 2 but was not present in any of the other nine documents shown. 
We could have generated the transpose of the DTM as well.<\/p>\n<div>\n<div id=\"highlighter_673742\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; DTM &lt;- <\/code><code>DocumentTermMatrix<\/code><code>(doc.corpus)<\/code><\/div>\n<div><code>&gt; <\/code><code>inspect<\/code><code>(DTM[1:10,1:10])<\/code><\/div>\n<div><code>A document-term <\/code><code>matrix <\/code><code>(10 documents, 10 terms)<\/code><\/div>\n<div><\/div>\n<div><code>Non-\/sparse entries: 1\/99<\/code><\/div>\n<div><code>Sparsity\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : 99%<\/code><\/div>\n<div><code>Maximal term length: 9 <\/code><\/div>\n<div><code>Weighting\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : term <\/code><code>frequency <\/code><code>(tf)<\/code><\/div>\n<div><\/div>\n<div><code>\u00a0\u00a0\u00a0\u00a0<\/code><code>Terms<\/code><\/div>\n<div><code>Docs aaron abaissiez abandon abandond abas abashd abat abatfowl abbess abbey<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>1\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>2\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>3\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 
0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>4\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>5\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>6\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>7\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>8\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 
0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>9\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>10\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0\u00a0 0\u00a0\u00a0\u00a0\u00a0 0<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Which of these proves to be most convenient will depend on the relative number of documents and terms in your data.<\/p>\n<p>Now we can start asking questions like: what are the most frequently occurring terms?<\/p>\n<div>\n<div id=\"highlighter_874448\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>findFreqTerms<\/code><code>(TDM, 2000)<\/code><\/div>\n<div><code>\u00a0<\/code><code>[1] <\/code><code>\"come\"<\/code>\u00a0 <code>\"enter\"<\/code> <code>\"good\"<\/code>\u00a0 <code>\"king\"<\/code>\u00a0 <code>\"let\"<\/code>\u00a0\u00a0 <code>\"lord\"<\/code>\u00a0 <code>\"love\"<\/code>\u00a0 <code>\"make\"<\/code>\u00a0 <code>\"man\"<\/code>\u00a0\u00a0 <code>\"now\"<\/code>\u00a0\u00a0 <code>\"shall\"<\/code> <code>\"sir\"<\/code>\u00a0\u00a0 <code>\"thee\"<\/code><\/div>\n<div><code>[14] 
<\/code><code>\"thi\"<\/code>\u00a0\u00a0 <code>\"thou\"<\/code>\u00a0 <code>\"well\"<\/code>\u00a0 <code>\"will\"<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Each of these words occurred more that 2000 times.<\/p>\n<p>What about associations between words? Let\u2019s have a look at what other words had a high association with \u201clove\u201d.<\/p>\n<div>\n<div id=\"highlighter_996407\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>findAssocs<\/code><code>(TDM, <\/code><code>\"love\"<\/code><code>, 0.8)<\/code><\/div>\n<div><code>beauti\u00a0\u00a0\u00a0 eye<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>0.83\u00a0\u00a0 0.80<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Well that\u2019s not too surprising!<\/p>\n<p>From our first look at the TDM we know that there are many terms which do not occur very often. It might make sense to simply remove these sparse terms from the analysis.<\/p>\n<div>\n<div id=\"highlighter_715512\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; TDM.common = <\/code><code>removeSparseTerms<\/code><code>(TDM, 0.1)<\/code><\/div>\n<div><code>&gt; <\/code><code>dim<\/code><code>(TDM)<\/code><\/div>\n<div><code>[1] 18651\u00a0\u00a0 182<\/code><\/div>\n<div><code>&gt; <\/code><code>dim<\/code><code>(TDM.common)<\/code><\/div>\n<div><code>[1]\u00a0 71 182<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>From the 18651 terms that we started with, we are now left with a TDM which considers on 71 commonly occurring terms.<\/p>\n<div>\n<div id=\"highlighter_898003\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>inspect<\/code><code>(TDM.common[1:10,1:10])<\/code><\/div>\n<div><code>A term-document <\/code><code>matrix 
<\/code><code>(10 terms, 10 documents)<\/code><\/div>\n<div><\/div>\n<div><code>Non-\/sparse entries: 94\/6<\/code><\/div>\n<div><code>Sparsity\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : 6%<\/code><\/div>\n<div><code>Maximal term length: 6<\/code><\/div>\n<div><code>Weighting\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 : term <\/code><code>frequency <\/code><code>(tf)<\/code><\/div>\n<div><\/div>\n<div><code>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<\/code><code>Docs<\/code><\/div>\n<div><code>Terms\u00a0\u00a0\u00a0\u00a0 1 2\u00a0 3\u00a0 4\u00a0 5\u00a0 6\u00a0 7\u00a0 8 9 10<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>act\u00a0\u00a0\u00a0\u00a0 1 4\u00a0 7\u00a0 9\u00a0 6\u00a0 3\u00a0 2 14 1\u00a0 0<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>art\u00a0\u00a0\u00a0 53 0\u00a0 9\u00a0 3\u00a0 5\u00a0 3\u00a0 2 17 0\u00a0 6<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>away\u00a0\u00a0 18 5\u00a0 8\u00a0 4\u00a0 2 10\u00a0 5 13 1\u00a0 7<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>call\u00a0\u00a0 17 1\u00a0 4\u00a0 2\u00a0 2\u00a0 1\u00a0 6 17 3\u00a0 7<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>can\u00a0\u00a0\u00a0 44 8 12\u00a0 5 10\u00a0 6 10 24 1\u00a0 5<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>come\u00a0\u00a0 19 9 16 17 12 15 14 89 9 15<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>day\u00a0\u00a0\u00a0 43 2\u00a0 2\u00a0 4\u00a0 1\u00a0 5\u00a0 3 17 2\u00a0 3<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>enter\u00a0\u00a0 0 7 12 11 10 10 14 87 4\u00a0 6<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>exeunt\u00a0 0 3\u00a0 8\u00a0 8\u00a0 5\u00a0 4\u00a0 7 49 1\u00a0 4<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>exit\u00a0\u00a0\u00a0 0 6\u00a0 8\u00a0 5\u00a0 6\u00a0 5\u00a0 3 31 3\u00a0 2<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>Finally we are going to put together a visualisation. 
The TDM is stored as a\u00a0<a href=\"http:\/\/en.wikipedia.org\/wiki\/Sparse_matrix\" target=\"_blank\" rel=\"noopener\">sparse matrix<\/a>. This was an apt representation for the initial TDM, but the reduced TDM containing only frequently occurring words is probably better stored as a normal matrix. We\u2019ll make the conversion and see.<\/p>\n<div>\n<div id=\"highlighter_675623\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>library<\/code><code>(slam)<\/code><\/div>\n<div><code>&gt;<\/code><\/div>\n<div><code>&gt; TDM.dense &lt;- <\/code><code>as.matrix<\/code><code>(TDM.common)<\/code><\/div>\n<div><code>&gt;<\/code><\/div>\n<div><code>&gt; TDM.dense<\/code><\/div>\n<div><code>&gt; <\/code><code>object.size<\/code><code>(TDM.common)<\/code><\/div>\n<div><code>207872 bytes<\/code><\/div>\n<div><code>&gt; <\/code><code>object.size<\/code><code>(TDM.dense)<\/code><\/div>\n<div><code>112888 bytes<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>So, as it turns out, the sparse representation was actually wasting space! (This will generally not be true, though: it only applies here because the matrix has been restricted to the common terms.) Anyway, we need the data as a normal matrix in order to produce the visualisation. 
The next step is to convert it into a tidy format.<\/p>\n<div>\n<div id=\"highlighter_258256\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>library<\/code><code>(reshape2)<\/code><\/div>\n<div><code>&gt;<\/code><\/div>\n<div><code>&gt; TDM.dense = <\/code><code>melt<\/code><code>(TDM.dense, value.name = <\/code><code>\"count\"<\/code><code>)<\/code><\/div>\n<div><code>&gt; <\/code><code>head<\/code><code>(TDM.dense)<\/code><\/div>\n<div><code>\u00a0\u00a0<\/code><code>Terms Docs count<\/code><\/div>\n<div><code>1\u00a0\u00a0 act\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0\u00a0 1<\/code><\/div>\n<div><code>2\u00a0\u00a0 art\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0 53<\/code><\/div>\n<div><code>3\u00a0 away\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0 18<\/code><\/div>\n<div><code>4\u00a0 call\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0 17<\/code><\/div>\n<div><code>5\u00a0\u00a0 can\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0 44<\/code><\/div>\n<div><code>6\u00a0 come\u00a0\u00a0\u00a0 1\u00a0\u00a0\u00a0 19<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p>And finally generate the visualisation.<\/p>\n<div>\n<div id=\"highlighter_986530\">\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td>\n<div>\n<div><code>&gt; <\/code><code>library<\/code><code>(ggplot2)<\/code><\/div>\n<div><code>&gt;<\/code><\/div>\n<div><code>&gt; <\/code><code>ggplot<\/code><code>(TDM.dense, <\/code><code>aes<\/code><code>(x = Docs, y = Terms, fill = <\/code><code>log10<\/code><code>(count))) +<\/code><\/div>\n<div><code>+\u00a0\u00a0\u00a0\u00a0 <\/code><code>geom_tile<\/code><code>(colour = <\/code><code>\"white\"<\/code><code>) +<\/code><\/div>\n<div><code>+\u00a0\u00a0\u00a0\u00a0 <\/code><code>scale_fill_gradient<\/code><code>(high=<\/code><code>\"#FF0000\"<\/code> <code>, low=<\/code><code>\"#FFFFFF\"<\/code><code>)+<\/code><\/div>\n<div><code>+\u00a0\u00a0\u00a0\u00a0 
<\/code><code>ylab<\/code><code>(<\/code><code>\"\"<\/code><code>) +<\/code><\/div>\n<div><code>+\u00a0\u00a0\u00a0\u00a0 <\/code><code>theme<\/code><code>(panel.background = <\/code><code>element_blank<\/code><code>()) +<\/code><\/div>\n<div><code>+\u00a0\u00a0\u00a0\u00a0 <\/code><code>theme<\/code><code>(axis.text.x = <\/code><code>element_blank<\/code><code>(), axis.ticks.x = <\/code><code>element_blank<\/code><code>())<\/code><\/div>\n<\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<p><a href=\"http:\/\/www.exegetic.biz\/blog\/wp-content\/uploads\/2013\/09\/shakespeare-common-tdm.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"shakespeare-common-tdm\" src=\"http:\/\/www.exegetic.biz\/blog\/wp-content\/uploads\/2013\/09\/shakespeare-common-tdm.png\" width=\"800\" height=\"1000\" \/><\/a><\/p>\n<p>The colour scale indicates the number of times that each of the terms cropped up in each of the documents. I applied a logarithmic transform to the counts since there was a very large disparity in the numbers across terms and documents. The grey tiles correspond to terms which are not found in the corresponding document.<\/p>\n<p>One can see that some terms, like \u201cwill\u201d, turn up frequently in most documents, while \u201clove\u201d is common in some and rare or absent in others.<\/p>\n<p>That was interesting. Not sure that I would like to draw any conclusions on the basis of the results above (Shakespeare is well outside my field of expertise!), but I now have a pretty good handle on how the tm package works. As always, feedback will be appreciated!<\/p>\n<h1>References<\/h1>\n<ul>\n<li><a href=\"http:\/\/anythingbutrbitrary.blogspot.com\/2013\/03\/build-search-engine-in-20-minutes-or.html\" target=\"_blank\" rel=\"noopener\">Build a search engine in 20 minutes or less<\/a><\/li>\n<li>Feinerer, I. (2013). 
Introduction to the tm Package: Text Mining in R.<\/li>\n<li>Feinerer, I., Hornik, K., &amp; Meyer, D. (2008). Text Mining Infrastructure in R. Journal of Statistical Software, 25(5).<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>(This article was first published on\u00a0Exegetic Analytics \u00bb R, and kindly contributed to\u00a0R-bloggers) &nbsp; I am starting a new project that will require some serious&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":["post-295","post","type-post","status-publish","format-standard","hentry","category-r"],"_links":{"self":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/295","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/comments?post=295"}],"version-history":[{"count":0,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/295\/revisions"}],"wp:attachment":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/media?parent=295"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/categories?post=295"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/tags?post=295"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}