{"id":549,"date":"2014-02-28T13:19:56","date_gmt":"2014-02-28T18:19:56","guid":{"rendered":"http:\/\/homepages.uc.edu\/~yaozo\/wordpress\/?p=549"},"modified":"2014-02-28T13:19:56","modified_gmt":"2014-02-28T18:19:56","slug":"statistics-meets-rhetoric-a-text-analysis-of-i-have-a-dream-in-r","status":"publish","type":"post","link":"https:\/\/zhuoyao.net\/index.php\/2014\/02\/28\/statistics-meets-rhetoric-a-text-analysis-of-i-have-a-dream-in-r\/","title":{"rendered":"Statistics meets rhetoric: A text analysis of &#8220;I Have a Dream&#8221; in R"},"content":{"rendered":"<div>\n<div dir=\"ltr\">\n<div dir=\"ltr\">This article was first published on <a href=\"http:\/\/analyzestuff.com\/\" target=\"_blank\" rel=\"noopener\">analyze stuff<\/a>. It has been contributed to Anything but R-bitrary as the second article in its introductory series.<\/p>\n<p><center><i>By Max Ghenis<\/i><\/center><br \/>\nToday, we celebrate what would have been the 85th birthday of Martin Luther King, Jr., a man remembered for leading the civil rights movement with courage, moral leadership, and oratorical prowess. This post focuses on his most famous speech, <a href=\"http:\/\/en.wikipedia.org\/wiki\/I_Have_a_Dream\" target=\"_blank\" rel=\"noopener\">I Have a Dream<\/a> [<a href=\"http:\/\/youtu.be\/smEqnnklfYs\" target=\"_blank\" rel=\"noopener\">YouTube<\/a> | <a href=\"https:\/\/docs.google.com\/document\/d\/1gRgZmcFmleaJH9Jdmnhkh4V2UcZEapLiu-wXGs_ETBg\/pub\" target=\"_blank\" rel=\"noopener\">text<\/a>] given on the steps of the Lincoln Memorial to over 250,000 supporters of the March on Washington. While many have analyzed the cultural impact of the speech, few have approached it from a natural language processing perspective. 
I use R\u2019s text analysis packages and other tools to reveal some of the trends in sentiment, flow (syllables, words, and sentences), and ultimately popularity (Google search volume) manifested in the rhetorical masterpiece.<\/div>\n<h2>Bag-of-words<\/h2>\n<div dir=\"ltr\">Word clouds are somewhat controversial among data scientists: some see them as overused and clich\u00e9d, while others find them a useful exploratory tool, particularly for connecting with a less analytical audience. I consider them a fun and useful starting point, so I began by throwing the speech\u2019s text into <a href=\"http:\/\/wordle.net\/\" target=\"_blank\" rel=\"noopener\">Wordle<\/a>.<\/div>\n<div dir=\"ltr\">\n<div><a href=\"http:\/\/www.wordle.net\/show\/wrdl\/7466879\/I_have_a_dream\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/i0.wp.com\/3.bp.blogspot.com\/-3m6QgcsRv0k\/UtzYDpnXb5I\/AAAAAAAAQQQ\/im6M2_Nh4qc\/s576\/Screen+Shot+2014-01-20+at+12.01.38+AM.png?w=456\" width=\"456\" height=\"201\" border=\"0\" \/><\/a><\/div>\n<\/div>\n<div dir=\"ltr\">\nR also has a <a href=\"https:\/\/cran.r-project.org\/package=wordcloud\" target=\"_blank\" rel=\"noopener\">wordcloud<\/a> package, though it\u2019s hard to beat Wordle on looks.<\/div>\n<pre>\n# Load raw data, stored at textuploader.com\nspeech.raw &lt;- paste(scan(url(\"http:\/\/textuploader.com\/1k0g\/raw\"), \n                         what=\"character\"), collapse=\" \")\n\nlibrary(wordcloud)\nwordcloud(speech.raw) # Also takes other arguments like color<\/pre>\n<\/div>\n<p><img decoding=\"async\" alt=\"\" src=\"https:\/\/lh3.googleusercontent.com\/XB3fthavQrgB0ZUjFKc5NGEVs-V1H0EMnF_az95v9Yl-jTvIRwNJu9fuDw6WGGEFmFuWvKTbYuH_c7njJkfQ8IYHuaWTSF7_ARnHuyDbhG9RBE2k9ouBryE1ew\" width=\"375px;\" height=\"242px;\" \/><\/p>\n<h2>\nCalculating textual metrics<\/h2>\n<div dir=\"ltr\">The <a href=\"http:\/\/cran.r-project.org\/web\/packages\/qdap\/qdap.pdf\" target=\"_blank\" rel=\"noopener\">qdap<\/a> package provides functions 
for text analysis, which I use to split sentences, count syllables and words, and estimate sentiment and readability. I also use the <a href=\"https:\/\/cran.r-project.org\/package=data.table\" target=\"_blank\" rel=\"noopener\">data.table<\/a> package to organize the sentence-level data structure.<\/div>\n<div dir=\"ltr\">\n<pre>\nlibrary(qdap)\nlibrary(data.table)\n\n# Split into sentences\n# qdap's sentSplit is modeled after dialogue data, so a person field is needed\nspeech.df &lt;- data.table(speech=speech.raw, person=\"MLK\")\nsentences &lt;- data.table(sentSplit(speech.df, \"speech\"))\n# Add a sentence counter and remove unnecessary variables\nsentences[, sentence.num := seq(nrow(sentences))]\nsentences[, person := NULL]\nsentences[, tot := NULL]\nsetcolorder(sentences, c(\"sentence.num\", \"speech\"))\n\n# Syllables per sentence\nsentences[, syllables := syllable.sum(speech)]\n# Add cumulative syllable count and percent complete as proxy for progression\nsentences[, syllables.cumsum := cumsum(syllables)]\nsentences[, pct.complete := syllables.cumsum \/ sum(sentences$syllables)]\nsentences[, pct.complete.100 := pct.complete * 100]<\/pre>\n<\/div>\n<div dir=\"ltr\">qdap\u2019s sentiment analysis is based on a sentence-level formula classifying each word as positive, negative, neutral, a negator, or an amplifier, per <a href=\"http:\/\/www.cs.uic.edu\/~liub\/FBS\/sentiment-analysis.html\" target=\"_blank\" rel=\"noopener\">Hu &amp; Liu\u2019s sentiment lexicon<\/a>. The function also provides a word count.<\/div>\n<div dir=\"ltr\">\n<pre>\npol.df &lt;- polarity(sentences$speech)$all\nsentences[, words := pol.df$wc]\nsentences[, pol := pol.df$polarity]<\/pre>\n<\/div>\n<div dir=\"ltr\">A scatterplot hints that polarity increases throughout the speech; that is, the sentiment gets more positive.<\/div>\n<div dir=\"ltr\">\n<pre>\nwith(sentences, plot(pct.complete, pol))<\/pre>\n<\/div>\n<div><a href=\"http:\/\/1.bp.blogspot.com\/-wiYGjNU03wE\/Utxs7M7wbJI\/AAAAAAAAQO4\/Fzlblk2eaNE\/s576\/polarity1.png\" 
target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/i0.wp.com\/1.bp.blogspot.com\/-wiYGjNU03wE\/Utxs7M7wbJI\/AAAAAAAAQO4\/Fzlblk2eaNE\/s576\/polarity1.png?w=456\" width=\"456\" height=\"283\" border=\"0\" \/><\/a><\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">\nCleaning up the plot and adding a <a href=\"http:\/\/en.wikipedia.org\/wiki\/Local_regression\" target=\"_blank\" rel=\"noopener\">LOESS<\/a> smoother clarifies this trend, particularly the peak at the end.<\/div>\n<div dir=\"ltr\">\n<pre>\nlibrary(ggplot2)\nlibrary(scales)\n\nmy.theme &lt;- \n  theme(plot.background = element_blank(), # Remove background\n        panel.grid.major = element_blank(), # Remove gridlines\n        panel.grid.minor = element_blank(), # Remove more gridlines\n        panel.border = element_blank(), # Remove border\n        panel.background = element_blank(), # Remove more background\n        axis.ticks = element_blank(), # Remove axis ticks\n        axis.text=element_text(size=14), # Enlarge axis text font\n        axis.title=element_text(size=16), # Enlarge axis title font\n        plot.title=element_text(size=24, hjust=0)) # Enlarge, left-align title\n\nCustomScatterPlot &lt;- function(gg)\n  return(gg + geom_point(color=\"grey60\") + # Lighten dots\n           stat_smooth(color=\"royalblue\", fill=\"lightgray\", size=1.4) + \n           xlab(\"Percent complete (by syllable count)\") + \n           scale_x_continuous(labels = percent) + my.theme)\n\nCustomScatterPlot(ggplot(sentences, aes(pct.complete, pol)) +\n                    ylab(\"Sentiment (sentence-level polarity)\") + \n                    ggtitle(\"Sentiment of I Have a Dream speech\"))<\/pre>\n<\/div>\n<div><a href=\"http:\/\/3.bp.blogspot.com\/-R5UaN0bSK6w\/Utxr-aeWQyI\/AAAAAAAAQOw\/iQ7u-AOAlkc\/s576\/polarity2.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" 
src=\"http:\/\/i2.wp.com\/3.bp.blogspot.com\/-R5UaN0bSK6w\/Utxr-aeWQyI\/AAAAAAAAQOw\/iQ7u-AOAlkc\/s576\/polarity2.png?w=456\" width=\"456\" height=\"283\" border=\"0\" \/><\/a><\/div>\n<div dir=\"ltr\">Beneath the variation, the trendline reveals two troughs (calls to action, if you will) alongside the increasing positivity.<\/p>\n<p><a href=\"http:\/\/en.wikipedia.org\/wiki\/Readability_test\" target=\"_blank\" rel=\"noopener\">Readability tests<\/a> are typically based on syllables, words, and sentences in order to approximate the grade level required to comprehend a text. qdap offers several of the most popular formulas, of which I chose the <a href=\"http:\/\/en.wikipedia.org\/wiki\/Automated_Readability_Index\" target=\"_blank\" rel=\"noopener\">Automated Readability Index<\/a>.<\/div>\n<div dir=\"ltr\">\n<pre>\nsentences[, readability := automated_readability_index(speech, sentence.num)\n          $Automated_Readability_Index]<\/pre>\n<\/div>\n<div dir=\"ltr\">Graphing readability as in the polarity chart above shows it to be mostly constant throughout the speech, though it varies within each section. 
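For reference, the Automated Readability Index is just a linear formula in characters, words, and sentences. Here is a minimal standalone sketch of that formula (qdap's automated_readability_index handles the tokenization for you, so the speech code above does not need this):

```r
# Minimal sketch of the ARI formula:
# ARI = 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43
ari <- function(chars, words, sentences) {
  4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43
}

ari(100, 25, 2)  # a 100-character, 25-word, 2-sentence passage: about 3.66
```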
This makes sense, as one generally avoids too many simple or complex sentences in a row.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">\n<pre>\nCustomScatterPlot(ggplot(sentences, aes(pct.complete, readability)) +\n                    ylab(\"Automated Readability Index\") +\n                    ggtitle(\"Readability of I Have a Dream speech\"))<\/pre>\n<\/div>\n<div dir=\"ltr\"><\/div>\n<div><a href=\"http:\/\/2.bp.blogspot.com\/-c3m9E59bfJY\/UtxuezrttLI\/AAAAAAAAQPE\/hZ_F34tFYsI\/s576\/readability.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/i0.wp.com\/2.bp.blogspot.com\/-c3m9E59bfJY\/UtxuezrttLI\/AAAAAAAAQPE\/hZ_F34tFYsI\/s576\/readability.png?w=456\" width=\"456\" height=\"283\" border=\"0\" \/><\/a><\/div>\n<h2>\nScraping Google search hits<\/h2>\n<div dir=\"ltr\">Google search results can serve as a useful indicator of public opinion, if you know what to look for. Last week I had the pleasure of meeting <a href=\"http:\/\/sethsd.com\/\" target=\"_blank\" rel=\"noopener\">Seth Stephens-Davidowitz<\/a>, a fellow analyst at Google who has used search data to research several topics, such as <a href=\"http:\/\/campaignstops.blogs.nytimes.com\/2012\/06\/09\/how-racist-are-we-ask-google\/\" target=\"_blank\" rel=\"noopener\">quantifying the effect of racism on the 2008 presidential election<\/a> (Obama did worse in states with higher racist query volume). 
There\u2019s a lot of room for exploring historically difficult topics with this data, so I thought I\u2019d use it to identify the most memorable pieces of I Have a Dream.<\/div>\n<div dir=\"ltr\">Fortunately, I was able to build on a function from <a href=\"http:\/\/goo.gl\/TXvTxP\" target=\"_blank\" rel=\"noopener\">theBioBucket\u2019s blog post<\/a> to count Google hits for a query.<\/div>\n<div dir=\"ltr\">\n<pre>\nGoogleHits &lt;- function(query){\n  require(XML)\n  require(RCurl)\n  \n  url &lt;- paste0(\"https:\/\/www.google.com\/search?q=\", gsub(\" \", \"+\", query))\n  \n  CAINFO &lt;- paste0(system.file(package=\"RCurl\"), \"\/CurlSSL\/ca-bundle.crt\")\n  script &lt;- getURL(url, followlocation=TRUE, cainfo=CAINFO)\n  doc &lt;- htmlParse(script)\n  res &lt;- xpathSApply(doc, '\/\/*\/div[@id=\"resultStats\"]', xmlValue)\n  return(as.numeric(gsub(\"[^0-9]\", \"\", res)))\n}<\/pre>\n<\/div>\n<div dir=\"ltr\">From there, I passed each sentence to the function, stripped of punctuation and wrapped in brackets, with \u201cmlk\u201d appended to tie the results to the speech.<\/div>\n<pre>\nsentences[, google.hits := GoogleHits(paste0(\"[\", gsub(\"[,;!.]\", \"\", speech), \n                                             \"] mlk\"))]<\/pre>\n<\/div>\n<div dir=\"ltr\">A quick plot reveals that there\u2019s a huge difference between the most-quoted sentences and the rest of the speech, particularly the top seven (though really six, as one is a duplicate). 
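To make the query construction concrete, here is what a single sentence becomes before it is sent to GoogleHits (a standalone sketch reusing the gsub pattern above):

```r
# Strip punctuation, wrap the sentence in brackets, and append "mlk"
# so the results relate to the speech.
sentence <- "free at last!"
query <- paste0("[", gsub("[,;!.]", "", sentence), "] mlk")
query  # "[free at last] mlk"
```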
Do these top sentences align with your expectations?<\/div>\n<div dir=\"ltr\">\n<pre>\nggplot(sentences, aes(pct.complete, google.hits \/ 1e6)) +\n  geom_line(color=\"grey40\") + # Lighten line\n  xlab(\"Percent complete (by syllable count)\") + \n  scale_x_continuous(labels = percent) + my.theme +\n  ylim(0, max(sentences$google.hits) \/ 1e6) +\n  ylab(\"Sentence memorability (millions of Google hits)\") +\n  ggtitle(\"Memorability of I Have a Dream speech\")<\/pre>\n<div><a href=\"http:\/\/i0.wp.com\/3.bp.blogspot.com\/-cXUYKqMa7zQ\/UtzEnCohkxI\/AAAAAAAAQP4\/aK3eyzvuUC8\/s576\/memorability1.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/i0.wp.com\/3.bp.blogspot.com\/-cXUYKqMa7zQ\/UtzEnCohkxI\/AAAAAAAAQP4\/aK3eyzvuUC8\/s576\/memorability1.png?w=456\" width=\"456\" height=\"283\" border=\"0\" \/><\/a><\/div>\n<div><\/div>\n<\/div>\n<div dir=\"ltr\">\n<pre>\nhead(sentences[order(-google.hits)]$speech, 7)<\/pre>\n<pre>\n[1] \"free at last!\"\n[2] \"I have a dream today.\"\n[3] \"I have a dream today.\"\n[4] \"This is our hope.\"\n[5] \"And if America is to be a great nation this must become true.\"\n[6] \"I say to you today, my friends, so even though we face the difficulties of today and tomorrow, I still have a dream.\"\n[7] \"We cannot turn back.\"<\/pre>\n<\/div>\n<p>Plotting Google hits on a log scale reduces skew and allows us to work on a ratio scale.<\/p>\n<pre>\nsentences[, log.google.hits := log(google.hits)]\n\nCustomScatterPlot(ggplot(sentences, aes(pct.complete, log.google.hits)) +\n                    ylab(\"Memorability (log of sentence's Google hits)\") +\n                    ggtitle(\"Memorability of I Have a Dream speech\"))<\/pre>\n<div><a 
href=\"https:\/\/images-blogger-opensocial.googleusercontent.com\/gadgets\/proxy?url=http%3A%2F%2F2.bp.blogspot.com%2F-1XyC0wdnYqI%2FUtxuxCOmiwI%2FAAAAAAAAQPM%2FHuIwkLIyZoI%2Fs1600%2Fmemorability.png&amp;container=blogger&amp;gadget=a&amp;rewriteMime=image%2F*\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/i0.wp.com\/2.bp.blogspot.com\/-1XyC0wdnYqI\/UtxuxCOmiwI\/AAAAAAAAQPM\/HuIwkLIyZoI\/s576\/memorability.png?w=456\" width=\"456\" height=\"283\" border=\"0\" \/><\/a><\/div>\n<div><\/div>\n<h2>\nWhat makes a passage memorable? A linear regression approach<\/h2>\n<div dir=\"ltr\">With several metrics for each sentence, along with the natural outcome variable of log(Google hits), I ran a linear regression to determine what makes a sentence memorable. I pruned the regressor list using the stepAIC backward selection technique, which minimizes the <a href=\"http:\/\/en.wikipedia.org\/wiki\/Akaike_information_criterion\" target=\"_blank\" rel=\"noopener\">Akaike Information Criterion<\/a> and leads to a more parsimonious model. Finally, based on preliminary model results, I added polynomials of readability and excluded word count, syllable count, and syllables per word (readability is largely based on these factors).<\/div>\n<div dir=\"ltr\">\n<pre>\nlibrary(MASS) # For stepAIC\ngoogle.lm &lt;- stepAIC(lm(log(google.hits) ~ poly(readability, 3) + pol +\n                          pct.complete.100, data=sentences))<\/pre>\n<\/div>\n<div dir=\"ltr\">stepAIC returns the optimal model, which can be summarized like any lm object.<\/div>\n<div dir=\"ltr\">\n<pre>\nsummary(google.lm)<\/pre>\n<pre>\nCall:\nlm(formula = log(google.hits) ~ poly(readability, 3) + pct.complete.100, \n    data = sentences)\n\nResiduals:\n    Min      1Q  Median      3Q     Max \n-4.2805 -1.1324 -0.3129  1.1361  6.6748 \n\nCoefficients:\n                        Estimate Std. 
Error t value Pr(&gt;|t|)    \n(Intercept)            11.444037   0.405247  28.240  &lt; 2e-16 ***\npoly(readability, 3)1 -12.670641   1.729159  -7.328 1.75e-10 ***\npoly(readability, 3)2   8.187941   1.834658   4.463 2.65e-05 ***\npoly(readability, 3)3  -5.681114   1.730662  -3.283  0.00153 ** \npct.complete.100        0.013366   0.006848   1.952  0.05449 .  \n---\nSignif. codes:  0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n\nResidual standard error: 1.729 on 79 degrees of freedom\nMultiple R-squared:  0.5564, Adjusted R-squared:  0.534 \nF-statistic: 24.78 on 4 and 79 DF,  p-value: 2.605e-13<\/pre>\n<\/div>\n<div dir=\"ltr\">The selected model explained 56% of the variance (R\u00b2 = 0.5564) with four regressors: a third-degree polynomial of readability plus pct.complete.100 (the sentence\u2019s position in the speech), the latter only marginally significant (p = 0.054); polarity was dropped by stepAIC.<\/div>\n<div dir=\"ltr\"><\/div>\n<div dir=\"ltr\">\nThe effect of pct.complete can be calculated by exponentiating the coefficient, since I log-transformed the outcome variable:<\/div>\n<div dir=\"ltr\">\n<pre>exp(google.lm$coefficients[\"pct.complete.100\"])<\/pre>\n<pre>pct.complete.100\n        1.013456<\/pre>\n<\/div>\n<p>This can be interpreted as follows: each <b>1 percentage point<\/b> a sentence fell later in the speech was associated with roughly a <b>1.3%<\/b> increase in search hits.<br \/>\nInterpreting the effect of readability is not as straightforward, since I included polynomials. 
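Part of the difficulty is that R's poly() generates orthogonal polynomials by default, so the three readability coefficients are weights on uncorrelated transformations of readability rather than on its raw powers. A quick standalone illustration:

```r
# poly() defaults to orthogonal polynomials: its columns are uncorrelated,
# zero-mean transformations of x, not the raw powers x, x^2, x^3
# (pass raw=TRUE to get raw powers instead).
x <- 1:10
p <- poly(x, 3)
cor(p[, 1], p[, 2])  # essentially zero: the columns are orthogonal
```

Because the individual coefficients don't map onto raw powers of readability, predicting over a grid of readability values is the cleanest way to read off the fitted shape.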
Rather than compute an average effect, I graphed predicted Google hits across readability&#8217;s observed range, holding pct.complete.100 at its mean.<\/p>\n<pre>\nnew.data &lt;- data.frame(readability=seq(min(sentences$readability), \n                                       max(sentences$readability), by=0.1),\n                       pct.complete.100=mean(sentences$pct.complete.100))\n\nnew.data$pred.hits &lt;- predict(google.lm, newdata=new.data)\n\nggplot(new.data, aes(readability, pred.hits)) + \n  geom_line(color=\"royalblue\", size=1.4) + \n  xlab(\"Automated Readability Index\") +\n  ylab(\"Predicted memorability (log Google hits)\") +\n  ggtitle(\"Predicted memorability ~ readability\") +\n  my.theme<\/pre>\n<div><a href=\"http:\/\/i0.wp.com\/3.bp.blogspot.com\/-okvwR8rbzlY\/Uty9EeSyg9I\/AAAAAAAAQPo\/kyxQ_qHXuOA\/s576\/memorability_readability.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/i0.wp.com\/3.bp.blogspot.com\/-okvwR8rbzlY\/Uty9EeSyg9I\/AAAAAAAAQPo\/kyxQ_qHXuOA\/s576\/memorability_readability.png?w=456\" width=\"456\" height=\"283\" border=\"0\" \/><\/a><\/div>\n<p>This cubic relationship indicates that predicted memorability falls considerably until about grade level 10, at which point it levels off (very few passages have readability exceeding 25).<\/p>\n<h2>\nConclusion<\/h2>\n<div dir=\"ltr\">R tools from qdap to ggplot2 have uncovered some of MLK\u2019s brilliance in I Have a Dream:<\/div>\n<p>&nbsp;<\/p>\n<div dir=\"ltr\"><\/div>\n<ul>\n<li>The speech starts and (especially) ends on a positive note, with a middle section punctuated by two troughs that vary the tone.<\/li>\n<li>While readability\/complexity varies considerably within each small section, the overall level is fairly consistent throughout the speech.<\/li>\n<li>Readability and placement were the strongest drivers of memorability (as quantified by Google hits): sentences below grade level 10 were more 
memorable, as were those occurring later in the speech.<\/li>\n<\/ul>\n<p>To a degree, these findings are intuitive: the ebb and flow of intensity and sentiment is a powerful rhetorical device. While we may never be able to fully deconstruct the meaning of this speech, the techniques explored here offer a glimpse into the genius of MLK and the power of his message.<\/p>\n<div dir=\"ltr\">\nThanks for reading, and enjoy your MLK Day!<\/div>\n<h2>\nAcknowledgments<\/h2>\n<div>\n<ul>\n<li>Special thanks to <a href=\"http:\/\/www.blogger.com\/profile\/16005153347460476695\" target=\"_blank\" rel=\"noopener\">Ben Ogorek<\/a> for\u00a0guidance on some of the statistics here, and for a thorough review.<\/li>\n<li>Special thanks to <a href=\"http:\/\/www.linkedin.com\/in\/mindygreenberg\" target=\"_blank\" rel=\"noopener\">Mindy Greenberg<\/a> for reviewing and always pushing my boundaries of conciseness and clarity.<\/li>\n<li>Thanks to <a href=\"http:\/\/www.linkedin.com\/in\/joshkraut\" target=\"_blank\" rel=\"noopener\">Josh Kraut<\/a> for offering a ggplot2 lesson at work, inspiring me to use it here.<\/li>\n<\/ul>\n<h2>\nResources<\/h2>\n<\/div>\n<div>\n<ul>\n<li><a href=\"https:\/\/github.com\/analyzestuff\/posts\/blob\/master\/i_have_a_dream\/i_have_a_dream.R\" target=\"_blank\" rel=\"noopener\">Full code<\/a><\/li>\n<li><a href=\"https:\/\/docs.google.com\/spreadsheets\/d\/1FpRDDuepY3fBimlECkGt68OEOzIc-76U7Wtr_-Kg-is\/edit#gid=658729262\" target=\"_blank\" rel=\"noopener\">Spreadsheet with sentence-level data<\/a><\/li>\n<\/ul>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>This article was first published on analyze stuff. It has been contributed to Anything but R-bitrary as the second article in its introductory series. 
By&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":["post-549","post","type-post","status-publish","format-standard","hentry","category-r"],"_links":{"self":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/549","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/comments?post=549"}],"version-history":[{"count":0,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/549\/revisions"}],"wp:attachment":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/media?parent=549"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/categories?post=549"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/tags?post=549"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}